How to Integrate LangGraph with Redis for Production Fintech AI
LangGraph for fintech gives you the orchestration layer for multi-step agent workflows. Redis gives you fast, durable state for checkpoints, session memory, rate limiting, and shared context across workers. Put them together and you get production AI agents that can survive restarts, scale horizontally, and keep financial workflows consistent.
Prerequisites
- Python 3.10+
- A Redis server running locally or in your VPC
- A LangGraph-based agent project already set up
- `langgraph`, `langchain-core`, and `redis` installed
- Access to the fintech tools your graph will call: payment APIs, KYC/AML services, and ledger or account services
- Environment variables configured: `REDIS_URL` and any API keys needed by your agent tools
Install the dependencies:
```shell
pip install langgraph langchain-core redis
```
Integration Steps
1. Create a Redis client and define your graph state
Use Redis as the shared backing store for agent checkpoints or session metadata. Start with a simple state model that tracks the user request and the latest tool result.
```python
import os
from typing import Annotated, TypedDict

from redis import Redis
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages

# Shared Redis client; decode_responses=True returns str instead of bytes
redis_client = Redis.from_url(os.environ["REDIS_URL"], decode_responses=True)

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]  # chat history, merged via add_messages
    customer_id: str
    risk_score: int
    decision: str
```
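Each node returns only the keys it updates, and LangGraph merges that partial update into the running state (with `add_messages` appending to `messages` rather than replacing it). Conceptually, for the plain fields this behaves like a dict merge:

```python
# Conceptual sketch of the state-update semantics for plain (non-reducer)
# fields; this is not LangGraph internals, just the merge behavior nodes rely on.
state = {"messages": [], "customer_id": "cust_001", "risk_score": 0, "decision": ""}

node_update = {"risk_score": 42}   # what a node like risk_check returns
state = {**state, **node_update}   # unchanged keys survive, updated keys win

print(state)
```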
2. Build LangGraph nodes that read/write operational data
In fintech, you usually need deterministic steps: fetch customer profile, score risk, decide action, then persist the outcome. Keep each node small and explicit.
```python
def load_customer_context(state: AgentState):
    customer_id = state["customer_id"]
    cached = redis_client.get(f"customer:{customer_id}:profile")
    if cached:
        return {"decision": f"loaded_cached_profile:{cached}"}
    # Placeholder profile; in production, fetch this from your customer service
    profile = {"segment": "retail", "kyc_status": "verified"}
    redis_client.setex(f"customer:{customer_id}:profile", 300, str(profile))
    return {"decision": "profile_loaded"}

def risk_check(state: AgentState):
    customer_id = state["customer_id"]
    risk_key = f"customer:{customer_id}:risk_score"
    cached_score = redis_client.get(risk_key)
    if cached_score:
        return {"risk_score": int(cached_score)}
    # Placeholder score; in production, call your risk engine here
    score = 42
    redis_client.setex(risk_key, 300, str(score))  # cache for 5 minutes
    return {"risk_score": score}
```
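The paragraph above also mentions a decide step. A minimal sketch of such a node, using a hypothetical risk threshold (not part of the original graph), could look like:

```python
def decide_action(state: dict) -> dict:
    # Hypothetical policy: low-risk customers are approved automatically,
    # everything else is escalated for manual review.
    decision = "approve" if state["risk_score"] < 50 else "escalate"
    return {"decision": decision}
```

Wired in with `workflow.add_node("decide_action", decide_action)` and an edge after `risk_check`, a node like this keeps the approval policy explicit and auditable.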
3. Assemble the LangGraph workflow
This is where LangGraph shines: you wire nodes into a controlled execution path instead of burying logic in one large agent loop.
```python
workflow = StateGraph(AgentState)
workflow.add_node("load_customer_context", load_customer_context)
workflow.add_node("risk_check", risk_check)

workflow.set_entry_point("load_customer_context")
workflow.add_edge("load_customer_context", "risk_check")
workflow.add_edge("risk_check", END)

app = workflow.compile()
```
4. Add Redis-backed checkpointing for production runs
For real deployments, use a checkpointer so the graph can resume after failures or process restarts. LangGraph supports checkpointing via its persistence APIs; pair that with Redis so each thread or conversation stays recoverable.
```python
# RedisSaver ships in the langgraph-checkpoint-redis package:
# pip install langgraph-checkpoint-redis
from langgraph.checkpoint.redis import RedisSaver

checkpointer = RedisSaver(redis_client=redis_client)
checkpointer.setup()  # creates the indices the saver needs in Redis

workflow = StateGraph(AgentState)
workflow.add_node("load_customer_context", load_customer_context)
workflow.add_node("risk_check", risk_check)
workflow.set_entry_point("load_customer_context")
workflow.add_edge("load_customer_context", "risk_check")
workflow.add_edge("risk_check", END)

app = workflow.compile(checkpointer=checkpointer)
```
If you’re using a newer LangGraph setup with configurable thread IDs, pass a stable identifier per customer or case so checkpoints map cleanly to one workflow instance.
```python
config = {"configurable": {"thread_id": "case_12345"}}
result = app.invoke(
    {
        "messages": [],
        "customer_id": "cust_001",
        "risk_score": 0,
        "decision": "",
    },
    config=config,
)
print(result)
```
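One way to keep those identifiers stable is to derive them from the business entity. The helper below is a hypothetical naming convention, not a LangGraph API:

```python
def thread_id_for(entity_type: str, entity_id: str) -> str:
    # One checkpoint thread per business entity, so retries and follow-up
    # turns resume the same workflow instance instead of starting a new one.
    return f"{entity_type}_{entity_id}"

config = {"configurable": {"thread_id": thread_id_for("case", "12345")}}
```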
5. Use Redis for shared coordination across workers
Redis is not just storage. It is also a coordination layer for distributed agents handling the same customer queue. Use it for locks, deduplication keys, and idempotency.
```python
import time

def acquire_case_lock(case_id: str) -> bool:
    # nx=True: set only if the key is absent; ex=60: auto-expire after 60s
    # so a crashed worker cannot hold the lock forever
    lock_key = f"lock:case:{case_id}"
    return bool(redis_client.set(lock_key, str(time.time()), nx=True, ex=60))

def release_case_lock(case_id: str):
    redis_client.delete(f"lock:case:{case_id}")
```
That pattern prevents two workers from approving the same transfer or re-running a KYC decision at the same time.
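For the idempotency piece, one common approach (a sketch assuming a SHA-256 key scheme, not any specific library) is to derive a deterministic key from the transfer's business fields, then guard the side effect with the same `SET NX` pattern used for locks:

```python
import hashlib

def idempotency_key(customer_id: str, amount_cents: int, reference: str) -> str:
    # The same logical transfer always hashes to the same key, so a retry
    # can be detected with SET NX before the payment API is called again.
    raw = f"{customer_id}:{amount_cents}:{reference}"
    return "idem:" + hashlib.sha256(raw.encode()).hexdigest()

key = idempotency_key("cust_001", 12_500, "invoice_987")
```

Because the key depends only on the request's content, a duplicate webhook or a worker retry maps to the same key and can be safely skipped.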
Testing the Integration
Run a minimal invocation and confirm both the graph and Redis are working.
```python
test_input = {
    "messages": [],
    "customer_id": "cust_001",
    "risk_score": 0,
    "decision": "",
}
config = {"configurable": {"thread_id": "test-thread-001"}}

output = app.invoke(test_input, config=config)
print("Output:", output)
print("Cached profile:", redis_client.get("customer:cust_001:profile"))
print("Cached risk score:", redis_client.get("customer:cust_001:risk_score"))
```
Expected output:

```
Output: {'messages': [], 'customer_id': 'cust_001', 'risk_score': 42, 'decision': 'profile_loaded'}
Cached profile: {'segment': 'retail', 'kyc_status': 'verified'}
Cached risk score: 42
```
If the graph returns output but the Redis keys are missing, your graph is running but persistence is not wired correctly. If both are missing, check `REDIS_URL`, network access, and whether `RedisSaver.setup()` ran successfully.
Real-World Use Cases
- Fraud triage agents
  - Store recent device fingerprints and transaction flags in Redis.
  - Use LangGraph to route between approve, hold, or escalate steps.
- KYC onboarding assistants
  - Keep onboarding session state in Redis.
  - Use LangGraph to orchestrate document checks, sanctions screening, and manual review handoff.
- Payment exception handling
  - Persist retry counters and idempotency keys in Redis.
  - Use LangGraph to manage reconciliation steps after failed transfers or webhook delays.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit