How to Integrate LangGraph with Redis for Investment Banking Workflows
Combining LangGraph with Redis gives you a practical pattern for stateful agent workflows in investment banking that need speed, persistence, and retryability. Banking-style systems usually require multi-step reasoning, audit-friendly state, and low-latency memory across sessions. Redis fills the gap for short-term state, checkpoints, and coordination, while LangGraph handles the graph-based orchestration.
Prerequisites
- Python 3.10+
- A running Redis instance:
  - local: `redis-server`
  - or managed Redis with host, port, password, and TLS
- `pip install langgraph langchain-openai redis`
- An OpenAI API key or another chat model provider supported by LangGraph
- Basic familiarity with:
  - `StateGraph`
  - Redis key/value operations
  - Python typing with `TypedDict`
Integration Steps
Step 1. Set up your Redis connection and LangGraph state schema.

```python
import os
from typing import TypedDict, Annotated

from redis import Redis
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379/0")
redis_client = Redis.from_url(REDIS_URL, decode_responses=True)


class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    customer_id: str
    risk_score: float
```
This gives you two things:

- a shared state object for the graph
- a Redis client you can use for session memory and checkpoints
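Every node in this guide builds keys from the same `banking:customer:*` pattern, so it can help to centralize the naming scheme in one helper. A minimal sketch (the layout is just this article's convention, not anything Redis requires):

```python
def customer_key(customer_id: str, suffix: str) -> str:
    """Build a namespaced Redis key such as banking:customer:cust_10042:context."""
    if not customer_id:
        raise ValueError("customer_id must be non-empty")
    return f"banking:customer:{customer_id}:{suffix}"


print(customer_key("cust_10042", "context"))  # banking:customer:cust_10042:context
```

Centralizing the pattern keeps context, audit, and any future keys consistent and makes a later namespace change a one-line edit.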
Step 2. Create a node that reads and writes workflow context to Redis.

```python
import json


def load_customer_context(state: AgentState) -> AgentState:
    key = f"banking:customer:{state['customer_id']}:context"
    cached = redis_client.get(key)
    if cached:
        context = json.loads(cached)
        return {**state, "risk_score": context.get("risk_score", 0.0)}
    return {**state, "risk_score": 0.0}


def persist_customer_context(state: AgentState) -> AgentState:
    key = f"banking:customer:{state['customer_id']}:context"
    payload = {
        "risk_score": state["risk_score"],
        "message_count": len(state["messages"]),
    }
    redis_client.setex(key, 3600, json.dumps(payload))  # 1-hour TTL
    return state
```
This is the simplest production pattern:

- load cached customer context at the start
- persist updated context after each run
- use TTL so stale banking session data expires cleanly
Step 3. Add a LangGraph node that performs banking-specific reasoning.

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def analyze_investment_request(state: AgentState) -> AgentState:
    prompt = (
        "You are an investment banking assistant.\n"
        f"Customer ID: {state['customer_id']}\n"
        f"Current risk score: {state['risk_score']}\n"
        "Review the latest user message and produce a revised risk score "
        "between 0 and 1. Reply with the number only."
    )
    response = llm.invoke(
        [
            {"role": "system", "content": prompt},
            *state["messages"],
        ]
    )
    try:
        new_score = float(response.content.strip())
    except ValueError:
        new_score = state["risk_score"]  # keep the previous score on parse failure
    return {**state, "risk_score": max(0.0, min(1.0, new_score))}
```
Use a deterministic model config here. Banking flows should not drift because of temperature-heavy outputs.
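Even at temperature 0, the model may wrap the number in prose ("Revised score: 0.45"), which would make the bare `float()` call fall back more often than necessary. A more forgiving parser can pull the first 0-to-1 number out of the reply; a sketch (the regex and fallback policy are this article's choices, not a LangChain utility):

```python
import re

# Matches 0, 0.xx, or 1.0-style numbers not embedded in a larger number.
_SCORE_RE = re.compile(r"(?<![\d.])(0(?:\.\d+)?|1(?:\.0+)?)(?![\d.])")


def parse_risk_score(text: str, fallback: float) -> float:
    """Extract the first number in [0, 1] from model output; keep the old score otherwise."""
    match = _SCORE_RE.search(text)
    if match is None:
        return fallback
    return max(0.0, min(1.0, float(match.group(1))))


print(parse_risk_score("Revised risk score: 0.45", 0.2))  # 0.45
print(parse_risk_score("unable to assess", 0.2))          # 0.2
```

For production flows, structured output (e.g. a JSON schema the model must follow) is usually more robust than regex extraction; this sketch is the minimal stopgap.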
Step 4. Wire the graph together with LangGraph's `StateGraph` API.

```python
workflow = StateGraph(AgentState)
workflow.add_node("load_context", load_customer_context)
workflow.add_node("analyze", analyze_investment_request)
workflow.add_node("persist", persist_customer_context)

workflow.add_edge(START, "load_context")
workflow.add_edge("load_context", "analyze")
workflow.add_edge("analyze", "persist")
workflow.add_edge("persist", END)

app = workflow.compile()
```
At this point you have a runnable graph. The graph is responsible for orchestration; Redis handles persistence outside the execution path.
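Because the graph is declared edge by edge, a wiring typo (a mistyped node name, a branch that never reaches END) is easy to introduce as the graph grows. A quick plain-Python walk over the edge list, mirroring the wiring above, catches dead ends early; a sketch (LangGraph also performs its own validation at compile time):

```python
EDGES = [
    ("START", "load_context"),
    ("load_context", "analyze"),
    ("analyze", "persist"),
    ("persist", "END"),
]


def reachable(edges: list[tuple[str, str]], start: str = "START") -> set[str]:
    """Return every node reachable from `start` by following the directed edges."""
    adjacency: dict[str, list[str]] = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)
    seen: set[str] = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adjacency.get(node, []))
    return seen


print("END" in reachable(EDGES))  # True
```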
Step 5. Run the workflow with real input and store execution metadata in Redis.

```python
from langchain_core.messages import HumanMessage

input_state: AgentState = {
    "messages": [HumanMessage(content="Assess whether this client profile is suitable for structured products.")],
    "customer_id": "cust_10042",
    "risk_score": 0.2,
}

result = app.invoke(input_state)

redis_client.hset(
    f"banking:customer:{input_state['customer_id']}:audit",
    mapping={
        "last_risk_score": str(result["risk_score"]),
        "last_run_status": "completed",
    },
)

print(result["risk_score"])
```
This gives you both operational memory and audit metadata in Redis.
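Redis hashes store flat string fields, so it is worth building the audit mapping in one place and doing all the stringification there. A sketch (the field names follow the hash above; the timestamp field is an addition for illustration):

```python
from datetime import datetime, timezone


def audit_mapping(result: dict, status: str = "completed") -> dict[str, str]:
    """Flatten run output into string fields suitable for a Redis HSET mapping."""
    return {
        "last_risk_score": str(result["risk_score"]),
        "last_run_status": status,
        "last_run_at": datetime.now(timezone.utc).isoformat(),
    }


print(audit_mapping({"risk_score": 0.45})["last_risk_score"])  # 0.45
```

Keeping the mapping in one function means a failed run can reuse it with `status="failed"` and the audit hash stays uniform.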
Testing the Integration
Use a quick smoke test to verify that the graph can read from Redis, run the analysis node, and persist output back to Redis.
```python
test_customer_id = "cust_test_001"

redis_client.set(
    f"banking:customer:{test_customer_id}:context",
    '{"risk_score": 0.65}',
)

test_input: AgentState = {
    "messages": [HumanMessage(content="Client wants exposure to higher-volatility assets.")],
    "customer_id": test_customer_id,
    "risk_score": 0.0,
}

output = app.invoke(test_input)

print("Risk score:", output["risk_score"])
print("Cached context:", redis_client.get(f"banking:customer:{test_customer_id}:context"))
```
Expected output (exact scores depend on the model response):

```
Risk score: 0.xx
Cached context: {"risk_score": 0.xx, "message_count": 1}
```
If you see both values update, your LangGraph workflow and Redis persistence are wired correctly.
Real-World Use Cases
- Client suitability screening
  - Cache customer profile data in Redis.
  - Use LangGraph nodes to route between KYC checks, risk scoring, and escalation.
- Deal intake triage
  - Store incoming deal metadata in Redis queues or hashes.
  - Use LangGraph to classify deal type, extract entities, and assign follow-up tasks.
- Session-aware analyst copilot
  - Keep recent conversation state in Redis.
  - Let LangGraph manage multi-step research flows across valuation, comps analysis, and summary generation.
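For the deal intake case, the routing decision itself would be a LangGraph node backed by the LLM. As a shape for the surrounding code, a toy keyword classifier shows the contract such a node would satisfy (the deal types and keywords here are invented for illustration, not an exhaustive taxonomy):

```python
def classify_deal(metadata: dict) -> str:
    """Toy stand-in for an LLM-backed classification node: metadata in, label out."""
    description = metadata.get("description", "").lower()
    if "merger" in description or "acquisition" in description:
        return "m_and_a"
    if "ipo" in description:
        return "ipo"
    if "bond" in description or "note" in description:
        return "debt_issuance"
    return "other"


print(classify_deal({"description": "Cross-border acquisition of a fintech"}))  # m_and_a
```

Keeping the node's contract this narrow (a dict in, a label out) means the keyword stub can be replaced by the real LLM node without touching the rest of the graph.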
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.