How to Integrate LangGraph for wealth management with Redis for startups
Combining LangGraph for wealth management with Redis gives you a practical agent stack for startup-grade financial workflows: stateful decision flows plus fast, durable memory. That means you can build assistants that track portfolio context, cache market data, persist user preferences, and recover cleanly after restarts without rebuilding the conversation from scratch.
Prerequisites
- Python 3.10+
- A Redis instance running locally or in the cloud
- `langgraph`, `langchain-core`, `redis`, and your LLM provider SDK installed
- An API key configured for your model provider, if your graph uses one
- Basic familiarity with LangGraph state graphs and Redis key/value operations
Install the core packages:
```bash
pip install langgraph langchain-core redis python-dotenv
```
Integration Steps
1. Set up Redis and environment variables

Keep connection settings out of code. For startups, this makes local development and production deployment consistent.

```python
import os

from dotenv import load_dotenv

load_dotenv()
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379/0")
```
2. Create a Redis client for state and cache storage

Use Redis for short-lived session data such as the user's risk profile, the latest holdings snapshot, or a market lookup cache.

```python
import redis

redis_client = redis.from_url(REDIS_URL, decode_responses=True)

# Simple connectivity check; raises ConnectionError if the server is unreachable
print(redis_client.ping())  # True
```
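For the market lookup cache mentioned above, it helps to give cached values an expiry so stale quotes age out on their own. A minimal sketch of that pattern using `setex` with a TTL; the helper names are my own, and the `FakeRedis` stand-in exists only so the snippet runs without a live server:

```python
import json
import time

def cache_quote(client, symbol: str, price: float, ttl_seconds: int = 60) -> None:
    # setex stores the value with an expiry, so stale quotes disappear automatically
    client.setex(f"quote:{symbol}", ttl_seconds, json.dumps({"price": price, "ts": time.time()}))

def get_quote(client, symbol: str):
    raw = client.get(f"quote:{symbol}")
    return json.loads(raw) if raw else None

# In-memory stand-in with the same method shapes as redis-py
# (it ignores the TTL; a real server enforces it)
class FakeRedis:
    def __init__(self):
        self.store = {}
    def setex(self, key, ttl, value):
        self.store[key] = value
    def get(self, key):
        return self.store.get(key)

client = FakeRedis()
cache_quote(client, "VTI", 271.35, ttl_seconds=30)
print(get_quote(client, "VTI")["price"])  # 271.35
```

With a real server, pass `redis_client` instead of the stand-in; `setex` takes the same arguments.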
3. Define a LangGraph state model for wealth management

In a wealth workflow, your graph usually needs structured state: user profile, portfolio summary, goals, and the final recommendation.

```python
from typing import TypedDict, Optional

class WealthState(TypedDict):
    user_id: str
    risk_profile: Optional[str]
    portfolio_value: Optional[float]
    goal: Optional[str]
    recommendation: Optional[str]
```
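Because every node expects the full key set, a small factory that fills unset fields keeps `invoke` calls tidy. This is a sketch with my own helper name, not part of the LangGraph API; the state definition is repeated so the snippet stands alone:

```python
from typing import TypedDict, Optional

class WealthState(TypedDict):
    user_id: str
    risk_profile: Optional[str]
    portfolio_value: Optional[float]
    goal: Optional[str]
    recommendation: Optional[str]

def initial_state(
    user_id: str,
    portfolio_value: Optional[float] = None,
    goal: Optional[str] = None,
) -> WealthState:
    # Fill every key up front so graph nodes can rely on the full shape
    return {
        "user_id": user_id,
        "risk_profile": None,
        "portfolio_value": portfolio_value,
        "goal": goal,
        "recommendation": None,
    }

print(initial_state("123", 250000.0, "retirement planning")["goal"])  # retirement planning
```

With a helper like this, the invoke calls later in the article shrink to `wealth_graph.invoke(initial_state("123", 250000.0, "retirement planning"))`.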
4. Build a LangGraph workflow and wire Redis into the nodes

The graph can read cached context from Redis before generating advice, then write the result back for later reuse.

```python
from langgraph.graph import StateGraph, END

def load_user_context(state: WealthState) -> WealthState:
    # Pull a previously stored risk profile from Redis, if one exists
    key = f"user:{state['user_id']}:profile"
    cached_profile = redis_client.get(key)
    if cached_profile:
        state["risk_profile"] = cached_profile
    return state

def generate_recommendation(state: WealthState) -> WealthState:
    # Use `or` rather than .get() defaults: the keys exist but may hold None
    risk = state.get("risk_profile") or "moderate"
    goal = state.get("goal") or "long-term growth"
    if risk == "conservative":
        rec = f"Recommend a bond-heavy allocation aligned to {goal}."
    elif risk == "aggressive":
        rec = f"Recommend higher equity exposure aligned to {goal}."
    else:
        rec = f"Recommend a balanced portfolio aligned to {goal}."
    state["recommendation"] = rec
    return state

def persist_result(state: WealthState) -> WealthState:
    # Store the recommendation so later sessions can reuse it
    key = f"user:{state['user_id']}:latest_recommendation"
    redis_client.set(key, state["recommendation"])
    return state

graph_builder = StateGraph(WealthState)
graph_builder.add_node("load_user_context", load_user_context)
graph_builder.add_node("generate_recommendation", generate_recommendation)
graph_builder.add_node("persist_result", persist_result)

graph_builder.set_entry_point("load_user_context")
graph_builder.add_edge("load_user_context", "generate_recommendation")
graph_builder.add_edge("generate_recommendation", "persist_result")
graph_builder.add_edge("persist_result", END)

wealth_graph = graph_builder.compile()
```
5. Run the graph with Redis-backed context

Seed Redis with user data, invoke the graph, then inspect both the returned output and the stored values.

```python
redis_client.set("user:123:profile", "conservative")

result = wealth_graph.invoke(
    {
        "user_id": "123",
        "risk_profile": None,
        "portfolio_value": 250000.0,
        "goal": "retirement planning",
        "recommendation": None,
    }
)

print(result["recommendation"])
print(redis_client.get("user:123:latest_recommendation"))
```
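The run above persists only the recommendation string. If you want the whole state to survive a restart, serialize it as JSON before writing, since Redis stores plain strings. A minimal sketch of the round trip; the key naming in the usage note is illustrative:

```python
import json

state = {
    "user_id": "123",
    "risk_profile": "conservative",
    "portfolio_value": 250000.0,
    "goal": "retirement planning",
    "recommendation": "Recommend a bond-heavy allocation aligned to retirement planning.",
}

# Redis stores strings, so serialize the full state dict before SET ...
payload = json.dumps(state)

# ... and parse it back after GET; numeric fields survive the round trip
restored = json.loads(payload)
print(restored["portfolio_value"])  # 250000.0
```

In practice you would write `redis_client.set(f"user:{state['user_id']}:state", payload)` and later restore with `json.loads(redis_client.get(...))`.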
Testing the Integration
Use a small integration test that verifies both the LangGraph flow and Redis persistence.
```python
def test_wealth_graph_integration():
    redis_client.set("user:456:profile", "aggressive")
    output = wealth_graph.invoke(
        {
            "user_id": "456",
            "risk_profile": None,
            "portfolio_value": 1000000.0,
            "goal": "capital appreciation",
            "recommendation": None,
        }
    )
    cached = redis_client.get("user:456:latest_recommendation")
    assert output["risk_profile"] == "aggressive"
    assert output["recommendation"] is not None
    assert cached == output["recommendation"]

test_wealth_graph_integration()
print("Integration test passed.")
```
Expected output:
```
Recommend higher equity exposure aligned to capital appreciation.
Recommend higher equity exposure aligned to capital appreciation.
Integration test passed.
```
Real-World Use Cases
- Client onboarding assistant: collects risk tolerance once, stores it in Redis, and reuses it across multiple LangGraph steps for suitability checks and portfolio suggestions.
- Portfolio review copilot: pulls cached holdings and prior recommendations from Redis so the graph can generate faster review summaries during advisor meetings.
- Market-aware recommendation engine: caches market snapshots in Redis and lets LangGraph route between rebalancing logic, tax-loss harvesting logic, or conservative hold recommendations based on current conditions.
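The routing idea in that last use case can be sketched as a plain Python function; with LangGraph you would register it via conditional edges, and the node names and drawdown thresholds below are illustrative assumptions, not prescriptions:

```python
def route_by_market(state: dict) -> str:
    # Decide which branch of the graph runs next based on a cached
    # market snapshot; the thresholds here are placeholders
    drawdown = state.get("market_drawdown_pct", 0.0)
    if drawdown >= 10.0:
        return "tax_loss_harvesting"
    if drawdown >= 5.0:
        return "conservative_hold"
    return "rebalancing"

print(route_by_market({"market_drawdown_pct": 12.5}))  # tax_loss_harvesting
```

Because the routing function is pure, it is easy to unit-test separately from the graph and from Redis.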
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.