How to Integrate LangGraph with Redis for Retail Banking AI Agents
Combining LangGraph with Redis gives you a clean way to build AI agents that reason over retail banking workflows while keeping short-term state, session context, and retrieval data fast and durable. In practice, this is what lets you build agents that handle customer servicing, fraud triage, and account inquiry flows without losing conversation state between turns.
Prerequisites
- Python 3.10+
- A running Redis instance:
  - Local: redis-server
  - Or managed Redis with a connection URL
- Installed packages: langgraph, langchain-core, langchain-openai (or your preferred model provider), redis
- An LLM API key configured in your environment
- Basic familiarity with:
  - LangGraph nodes, edges, and state
  - Redis key/value operations and TTLs
Install the dependencies:
pip install langgraph langchain-core langchain-openai redis
Integration Steps
1. Define the banking workflow state
Start by modeling the agent state you need for retail banking. Keep it explicit: customer ID, intent, risk flags, and any retrieved account context.
from typing import TypedDict, Annotated
from operator import add

class BankingState(TypedDict):
    customer_id: str
    intent: str
    messages: Annotated[list, add]
    risk_flag: bool
    account_summary: str
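The Annotated[list, add] hint tells LangGraph to merge updates to the messages channel with operator.add, i.e. list concatenation, instead of overwriting the previous value. A quick sketch of what that merge does:

```python
from operator import add

# The reducer LangGraph applies to the `messages` channel:
history = ["I want to check my balance"]
update = ["Sure, here is your current balance."]
merged = add(history, update)  # plain list concatenation
print(merged)
```

This is why, later on, graph nodes should return only the new messages they produce: the reducer appends them to the existing history automatically.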
2. Connect to Redis for session storage
Use Redis to persist per-customer session data and lightweight context between graph runs. For production, use namespaced keys and TTLs so stale sessions expire automatically.
import redis
import json

r = redis.Redis(
    host="localhost",
    port=6379,
    db=0,
    decode_responses=True,
)

def save_session(customer_id: str, payload: dict) -> None:
    key = f"banking:session:{customer_id}"
    r.setex(key, 3600, json.dumps(payload))

def load_session(customer_id: str) -> dict | None:
    key = f"banking:session:{customer_id}"
    raw = r.get(key)
    return json.loads(raw) if raw else None
3. Build LangGraph nodes that read from and write to Redis
Here’s the pattern that matters: LangGraph handles orchestration, while Redis stores the durable session snapshot. In retail banking, this lets you resume a case after a timeout or handoff without rebuilding context from scratch.
from langgraph.graph import StateGraph, END

def classify_intent(state: BankingState) -> dict:
    text = " ".join(state["messages"]).lower()
    if "balance" in text or "statement" in text:
        intent = "account_inquiry"
        risk_flag = False
        account_summary = "Customer asked for balance-related information."
    elif "transfer" in text or "send money" in text:
        intent = "payments"
        risk_flag = True
        account_summary = "Customer initiated a funds movement request."
    else:
        intent = "general_service"
        risk_flag = False
        account_summary = "General servicing request."
    update = {
        "intent": intent,
        "risk_flag": risk_flag,
        "account_summary": account_summary,
    }
    save_session(state["customer_id"], {**state, **update})
    # Return only the changed keys: the `add` reducer on `messages`
    # would duplicate the history if we echoed the whole state back.
    return update

def enrich_from_redis(state: BankingState) -> dict:
    cached = load_session(state["customer_id"])
    if not cached:
        return {}
    # Drop `messages` from the cached payload for the same reason:
    # the reducer appends, so returning it would duplicate the history.
    return {k: v for k, v in cached.items() if k != "messages"}

graph_builder = StateGraph(BankingState)
graph_builder.add_node("enrich_from_redis", enrich_from_redis)
graph_builder.add_node("classify_intent", classify_intent)
graph_builder.set_entry_point("enrich_from_redis")
graph_builder.add_edge("enrich_from_redis", "classify_intent")
graph_builder.add_edge("classify_intent", END)
app = graph_builder.compile()
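A natural extension, sketched here as an assumption rather than part of the original flow, is to branch on risk_flag with a conditional edge so flagged payment intents go to a review node. The router itself is a plain function:

```python
def route_after_classify(state: dict) -> str:
    # Send flagged payment intents to manual review; everything else
    # proceeds straight to response drafting.
    return "manual_review" if state.get("risk_flag") else "draft_response"

# Wiring it in (illustrative; "manual_review" is a hypothetical node you
# would add first):
# graph_builder.add_conditional_edges(
#     "classify_intent",
#     route_after_classify,
#     {"manual_review": "manual_review", "draft_response": "draft_response"},
# )

print(route_after_classify({"risk_flag": True}))
```

Keeping the router a pure function of state means you can unit-test the branching logic without compiling the graph at all.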
4. Add an LLM-backed decision node for banking responses
If you want the agent to draft responses or summarize cases, wire in an LLM node. The important part is that LangGraph controls when the model runs; Redis keeps the state needed for retries and follow-up turns.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def draft_response(state: BankingState) -> dict:
    prompt = f"""
You are a retail banking assistant.
Customer ID: {state['customer_id']}
Intent: {state['intent']}
Risk flag: {state['risk_flag']}
Context: {state['account_summary']}
Write a concise next-step response.
"""
    result = llm.invoke(prompt)
    save_session(
        state["customer_id"],
        {**state, "messages": state["messages"] + [result.content]},
    )
    # Return only the new message; the `add` reducer appends it for us.
    return {"messages": [result.content]}

# Rebuild the graph so the response node sits between classification and END.
# Edges added after compile() would not be reflected in the compiled app.
graph_builder = StateGraph(BankingState)
graph_builder.add_node("enrich_from_redis", enrich_from_redis)
graph_builder.add_node("classify_intent", classify_intent)
graph_builder.add_node("draft_response", draft_response)
graph_builder.set_entry_point("enrich_from_redis")
graph_builder.add_edge("enrich_from_redis", "classify_intent")
graph_builder.add_edge("classify_intent", "draft_response")
graph_builder.add_edge("draft_response", END)
app = graph_builder.compile()
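Because the prompt is a plain f-string over the state, you can render and inspect it without calling the model. For instance, with an illustrative account-inquiry state:

```python
state = {
    "customer_id": "cust-10021",
    "intent": "account_inquiry",
    "risk_flag": False,
    "account_summary": "Customer asked for balance-related information.",
}

prompt = f"""
You are a retail banking assistant.
Customer ID: {state['customer_id']}
Intent: {state['intent']}
Risk flag: {state['risk_flag']}
Context: {state['account_summary']}
Write a concise next-step response.
"""

print(prompt)
```

Rendering prompts this way is handy for snapshot tests and for auditing exactly what customer data reaches the model.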
5. Run the graph with a real customer session
Pass in the customer session data once. On later turns, Redis will restore prior context before classification and response drafting.
input_state: BankingState = {
    "customer_id": "cust-10021",
    "intent": "",
    "messages": ["I want to check my balance"],
    "risk_flag": False,
    "account_summary": "",
}

output_state = app.invoke(input_state)
print(output_state["intent"])
print(output_state["messages"][-1])
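On a follow-up turn you only need to supply the new message; enrich_from_redis repopulates intent and context from the cached session. An illustrative second-turn input (the invoke call assumes the compiled graph and a live Redis from the steps above, so it is commented out here):

```python
followup_state = {
    "customer_id": "cust-10021",  # same customer, so same session key in Redis
    "intent": "",                 # restored from the cached session
    "messages": ["And can I get last month's statement?"],
    "risk_flag": False,
    "account_summary": "",        # also restored from the cached session
}

# output_state = app.invoke(followup_state)  # requires the graph and Redis above
print(followup_state["messages"][0])
```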
Testing the Integration
Use a simple end-to-end check that confirms Redis persistence and LangGraph execution are both working.
test_state: BankingState = {
    "customer_id": "cust-20045",
    "intent": "",
    "messages": ["Can I transfer money to another account?"],
    "risk_flag": False,
    "account_summary": "",
}

result = app.invoke(test_state)
print("Intent:", result["intent"])
print("Risk flag:", result["risk_flag"])
print("Cached session exists:", bool(load_session("cust-20045")))
Expected output:
Intent: payments
Risk flag: True
Cached session exists: True
If that passes, your graph is executing correctly and Redis is persisting the agent’s banking session state.
Real-World Use Cases
- Retail banking service agent: Handle balance checks, statement requests, card issues, and payment questions while preserving customer context across turns.
- Fraud triage assistant: Route suspicious payment flows through a risk-aware LangGraph branch and store investigation state in Redis for analyst handoff.
- Loan servicing workflow: Keep application progress, missing-document flags, and customer follow-up notes in Redis while LangGraph orchestrates each step of the servicing journey.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit