How to Integrate LangGraph with Redis for Banking AI Agents
Combining LangGraph with Redis gives you a clean way to run stateful AI agents in banking that can survive restarts, coordinate across workers, and keep short-term memory fast. In banking workflows, that matters for things like case triage, fraud review, payment exception handling, and customer support, where you need durable state plus low-latency shared context.
Prerequisites
- Python 3.10+
- A Redis instance running locally or in your environment
- langgraph installed
- The redis Python client installed
- Access to your banking tools or APIs that the graph will call
- A clear state model for the agent, including what should be persisted in Redis
Install the packages:
pip install langgraph redis
If you are using LangGraph with a bank-specific agent stack, make sure your graph already has:
- nodes for policy lookup, account lookup, transaction review, or escalation
- a state schema that can be serialized
- a Redis connection string ready for persistence or shared memory
Integration Steps
1) Define the agent state and connect to Redis
Start by defining the state your banking agent needs to carry between nodes. Then create a Redis client that will store checkpoints or session data.
from typing import TypedDict, List
from redis import Redis

class BankingAgentState(TypedDict):
    customer_id: str
    messages: List[str]
    risk_score: float
    decision: str

redis_client = Redis.from_url("redis://localhost:6379/0", decode_responses=True)

# quick connectivity check
print(redis_client.ping())
For production, point this at managed Redis and use TLS/auth. Keep the state small; store only what the graph needs to resume work.
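For example, a small helper can assemble a TLS-enabled connection URL from environment variables. This is a sketch; the `build_redis_url` helper and the environment variable names are illustrative, not part of redis-py:

```python
import os
from urllib.parse import quote

def build_redis_url(host: str, port: int, password: str, db: int = 0) -> str:
    # rediss:// (double "s") tells redis-py to open a TLS connection;
    # quote() keeps special characters in the password URL-safe
    return f"rediss://:{quote(password)}@{host}:{port}/{db}"

# Hypothetical environment variables for a managed Redis instance
url = build_redis_url(
    os.environ.get("REDIS_HOST", "localhost"),
    int(os.environ.get("REDIS_PORT", "6380")),
    os.environ.get("REDIS_PASSWORD", ""),
)
# redis_client = Redis.from_url(url, decode_responses=True)
```

Keeping the URL construction in one helper also makes it easy to swap credentials per environment without touching graph code.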
2) Build LangGraph nodes for banking actions
Use LangGraph’s StateGraph to define node functions. Each node reads and updates the state.
from langgraph.graph import StateGraph, END

def fetch_customer_context(state: BankingAgentState):
    # Replace with a real bank API call
    state["messages"].append(f"Loaded profile for {state['customer_id']}")
    return state

def assess_risk(state: BankingAgentState):
    # Replace with actual risk model output
    state["risk_score"] = 0.82
    state["messages"].append("Risk score computed")
    return state

def decide_action(state: BankingAgentState):
    if state["risk_score"] > 0.8:
        state["decision"] = "escalate_to_analyst"
    else:
        state["decision"] = "auto_approve"
    return state
graph = StateGraph(BankingAgentState)
graph.add_node("fetch_customer_context", fetch_customer_context)
graph.add_node("assess_risk", assess_risk)
graph.add_node("decide_action", decide_action)
graph.set_entry_point("fetch_customer_context")
graph.add_edge("fetch_customer_context", "assess_risk")
graph.add_edge("assess_risk", "decide_action")
graph.add_edge("decide_action", END)
This keeps the business logic explicit. In regulated workflows, that’s better than burying decisions inside one large agent loop.
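The same routing can also be expressed with LangGraph's conditional edges instead of a fixed edge out of decide_action. A minimal sketch, assuming hypothetical "escalate" and "approve" nodes exist in your graph:

```python
def route_on_risk(state: dict) -> str:
    # Mirrors the threshold used in decide_action above
    if state["risk_score"] > 0.8:
        return "escalate_to_analyst"
    return "auto_approve"

# Wiring sketch (assumes "escalate" and "approve" nodes have been added):
# graph.add_conditional_edges("decide_action", route_on_risk, {
#     "escalate_to_analyst": "escalate",
#     "auto_approve": "approve",
# })

print(route_on_risk({"risk_score": 0.82}))  # → escalate_to_analyst
```

Pulling the routing rule into a named function keeps the threshold reviewable on its own, which helps when risk policy changes need sign-off.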
3) Add Redis-backed checkpointing for durable execution
LangGraph supports checkpointing so you can persist graph progress and resume later. Use a Redis-backed checkpointer if your stack exposes one (the official Redis saver ships as the separate langgraph-checkpoint-redis package); otherwise wire Redis as your own storage layer around graph runs.
from langgraph.checkpoint.redis import RedisSaver
checkpointer = RedisSaver(redis_client)
app = graph.compile(checkpointer=checkpointer)
If your package layout differs by version, keep the pattern the same:
- compile the graph with a checkpointer
- pass a stable thread/session identifier when invoking it
- use Redis as the backing store for checkpoints
That gives you resumable runs when an analyst session drops or a worker restarts.
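If no Redis saver is available in your version, the fallback is a thin save/load layer of your own around graph runs. A sketch, assuming JSON-serializable state; the key prefix and helper names are illustrative, and the in-memory stub stands in for a real redis_client:

```python
import json

def save_state(client, thread_id: str, state: dict) -> None:
    # Persist the full serializable state under a per-case key
    client.set(f"banking:checkpoint:{thread_id}", json.dumps(state))

def load_state(client, thread_id: str):
    # Returns the saved state, or None if this case has no checkpoint yet
    raw = client.get(f"banking:checkpoint:{thread_id}")
    return json.loads(raw) if raw else None

class InMemoryStub:
    # Minimal stand-in for a Redis client; pass redis_client in real use
    def __init__(self):
        self.data = {}
    def set(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data.get(key)

stub = InMemoryStub()
save_state(stub, "case-88421", {"risk_score": 0.82, "decision": ""})
print(load_state(stub, "case-88421"))  # → {'risk_score': 0.82, 'decision': ''}
```

Save after each node (or at least after each invoke) and load before resuming; the stable thread ID is what ties the two together.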
4) Run the graph with a stable thread ID and persist session data in Redis
Use thread_id so every turn maps to one banking case. You can also store lightweight metadata directly in Redis for fast retrieval by other services.
initial_state = {
    "customer_id": "CUST-10021",
    "messages": [],
    "risk_score": 0.0,
    "decision": ""
}

config = {
    "configurable": {
        "thread_id": "case-88421"
    }
}

result = app.invoke(initial_state, config=config)

redis_client.hset(
    "banking:case:88421",
    mapping={
        "customer_id": result["customer_id"],
        "decision": result["decision"],
        "risk_score": str(result["risk_score"])
    }
)
print(result)
This pattern separates concerns:
- LangGraph manages workflow execution and branching.
- Redis stores session metadata and optional checkpoints.
- Other services can read from Redis without calling the graph again.
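For instance, a downstream notification or reporting service can render a case summary straight from the Redis hash, with no LangGraph dependency at all. The summarize_case helper below is illustrative, not a library function:

```python
def summarize_case(case: dict) -> str:
    # case is the mapping a service would get from
    # redis_client.hgetall("banking:case:88421")
    return (f"Case for {case['customer_id']}: {case['decision']} "
            f"(risk {case['risk_score']})")

sample = {
    "customer_id": "CUST-10021",
    "decision": "escalate_to_analyst",
    "risk_score": "0.82",
}
print(summarize_case(sample))
# → Case for CUST-10021: escalate_to_analyst (risk 0.82)
```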
5) Retrieve context on follow-up turns
When an analyst or customer sends a follow-up message, load prior case data from Redis before invoking the graph again.
case_data = redis_client.hgetall("banking:case:88421")

follow_up_state = {
    "customer_id": case_data["customer_id"],
    "messages": ["Customer asked to re-check hold reason"],
    "risk_score": float(case_data["risk_score"]),
    "decision": case_data["decision"]
}

follow_up_result = app.invoke(
    follow_up_state,
    config={"configurable": {"thread_id": "case-88421"}}
)
print(follow_up_result)
That gives you continuity across turns without stuffing everything into prompt history. In banking, that matters because auditability beats raw conversational memory.
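One way to make that auditability concrete is to append each decision as an immutable record, for example to a Redis stream. A sketch; the audit_record helper and stream key are illustrative:

```python
import time

def audit_record(case_id: str, actor: str, action: str, detail: str) -> dict:
    # Flat string mapping, suitable as a Redis stream entry
    # (stream fields are stored as strings)
    return {
        "case_id": case_id,
        "actor": actor,
        "action": action,
        "detail": detail,
        "ts": str(int(time.time())),
    }

record = audit_record("case-88421", "graph", "decision", "escalate_to_analyst")
# redis_client.xadd("banking:audit:case-88421", record)
print(record["action"])  # → decision
```

Streams are append-only, so the trail of who did what, and when, survives even if the case hash is later overwritten.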
Testing the Integration
Run a simple smoke test to confirm both systems are wired correctly.
test_key = "integration:test"
redis_client.set(test_key, "ok")

state = {
    "customer_id": "TEST-1",
    "messages": [],
    "risk_score": 0.0,
    "decision": ""
}

output = app.invoke(state, config={"configurable": {"thread_id": "test-thread"}})

print(redis_client.get(test_key))
print(output["decision"])
print(output["messages"])
Expected output:
ok
escalate_to_analyst
['Loaded profile for TEST-1', 'Risk score computed']
If you get no checkpoint persistence:
- confirm your thread_id is stable across invocations
- verify Redis is reachable with PING
- check that your LangGraph version includes the Redis saver class you imported
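The reachability check can be wrapped in a small guard so workers fail fast at startup instead of mid-run. The redis_reachable helper and the two stub clients are illustrative; in real use you would pass redis_client:

```python
def redis_reachable(client) -> bool:
    # PING returns True on a healthy connection; treat any error as unreachable
    try:
        return bool(client.ping())
    except Exception:
        return False

class HealthyStub:
    def ping(self):
        return True

class DeadStub:
    def ping(self):
        raise ConnectionError("connection refused")

print(redis_reachable(HealthyStub()))  # → True
print(redis_reachable(DeadStub()))     # → False
# In a worker: if not redis_reachable(redis_client): raise SystemExit("Redis down")
```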
Real-World Use Cases
- Fraud review assistants: keep case context in LangGraph while using Redis to share queue status across multiple reviewer workers.
- Payment exception handling: route failed transfers through branching steps like validation, enrichment, policy checks, and escalation with resumable execution.
- Customer support copilots: store short-lived conversation context in Redis so agents can pick up where they left off without rebuilding history on every request.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.