How to Integrate LangGraph with Redis for Fintech Startups
Combining LangGraph with Redis gives you a practical pattern for building stateful fintech AI agents that can survive retries, restarts, and multi-step financial workflows. LangGraph orchestrates the agent logic, while Redis provides fast shared state, checkpoints, and session memory for startup-grade systems that need low latency and durability.
Prerequisites
- Python 3.10+
- A Redis instance running locally or via managed service
- A LangGraph project installed
- An LLM provider configured for your agent nodes
- Basic familiarity with Python async code
Install the packages:
pip install langgraph langgraph-checkpoint-redis langchain-openai redis python-dotenv
Set your environment variables:
export OPENAI_API_KEY="your-key"
export REDIS_URL="redis://localhost:6379/0"
Integration Steps
- Install and initialize Redis
Use Redis as the shared persistence layer for checkpoints and conversation/session state. For fintech workflows, this is useful when an underwriting flow or payment review needs to resume after a crash.
import os
import redis
redis_url = os.getenv("REDIS_URL", "redis://localhost:6379/0")
r = redis.Redis.from_url(redis_url, decode_responses=True)
# Basic connectivity check
print(r.ping()) # True
- Create a LangGraph state model
Define the state your graph will carry across nodes. Keep it explicit; in fintech systems, hidden state becomes operational debt fast.
from typing import TypedDict, Annotated
from operator import add
class AgentState(TypedDict):
    messages: Annotated[list, add]
    customer_id: str
    risk_score: float
    decision: str
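The `Annotated[list, add]` reducer controls how node return values merge into state: updates to `messages` are appended to the existing list rather than overwriting it. A quick stand-alone illustration of that merge behavior:

```python
from operator import add

# LangGraph applies the reducer to combine the existing channel value
# with a node's returned update; for lists, add concatenates them.
existing = ["intake form received"]
update = ["requested amount: 15000"]
merged = add(existing, update)
print(merged)  # ['intake form received', 'requested amount: 15000']
```

Plain fields like `customer_id` have no reducer, so a node's return value simply replaces them.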
- Build the graph with node functions
This example uses a simple risk-scoring flow. Replace the scoring logic with your own policy engine, rules service, or model call.
from langgraph.graph import StateGraph, START, END
def assess_risk(state: AgentState):
    # Example heuristic for demo purposes
    amount = float(state["messages"][-1].content)
    score = 0.9 if amount > 10000 else 0.2
    return {
        "risk_score": score,
        "decision": "manual_review" if score > 0.7 else "approve",
    }

def finalize(state: AgentState):
    # Return only the fields being set; re-returning "messages" here
    # would duplicate them because of the add reducer on that channel.
    return {
        "customer_id": state["customer_id"],
        "risk_score": state["risk_score"],
        "decision": state["decision"],
    }
graph = StateGraph(AgentState)
graph.add_node("assess_risk", assess_risk)
graph.add_node("finalize", finalize)
graph.add_edge(START, "assess_risk")
graph.add_edge("assess_risk", "finalize")
graph.add_edge("finalize", END)
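In a real flow you would usually branch on the decision instead of always running finalize. LangGraph supports this through conditional edges, and the routing logic itself is just a plain function. A sketch under assumptions: `route_decision` is a name introduced here, and it presumes you add a separate `manual_review` node to the graph.

```python
def route_decision(state: dict) -> str:
    # Returns the name of the next node to run; wired into the graph with
    # graph.add_conditional_edges("assess_risk", route_decision)
    return "manual_review" if state["risk_score"] > 0.7 else "finalize"

# The router is a pure function, so it can be checked without a graph:
print(route_decision({"risk_score": 0.9}))  # manual_review
print(route_decision({"risk_score": 0.2}))  # finalize
```

Keeping routing in a small pure function like this also makes policy thresholds easy to unit-test, which matters for auditable fintech flows.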
- Add Redis-backed checkpointing to LangGraph
This is the part that makes the integration production-friendly. LangGraph supports persistence through its checkpointer interface; use a Redis-backed checkpointer so each run can resume from persisted state.
from langgraph.checkpoint.redis import RedisSaver
# In recent langgraph-checkpoint-redis releases, from_conn_string is a
# context manager, and setup() creates the indices the saver needs:
with RedisSaver.from_conn_string(redis_url) as checkpointer:
    checkpointer.setup()
    app = graph.compile(checkpointer=checkpointer)
If your LangGraph version exposes a different checkpoint package path or a plain constructor, keep the same pattern: compile the graph with a Redis-backed checkpointer so thread state persists across invocations, and keep invocations inside the saver's lifecycle so the underlying connection stays open.
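If you want to see the idea behind checkpointing without a live Redis, the core mechanic is just thread-scoped serialization of state. A minimal stand-in sketch (a dict plays the role of Redis here; the real RedisSaver uses its own key schema, versioning, and serializer):

```python
import json

class DictStore:
    """Tiny stand-in for a key-value store such as Redis."""
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def save_checkpoint(store, thread_id, state):
    # One checkpoint per thread; real savers also keep checkpoint history
    store.set(f"checkpoint:{thread_id}", json.dumps(state))

def load_checkpoint(store, thread_id):
    raw = store.get(f"checkpoint:{thread_id}")
    return json.loads(raw) if raw is not None else None

store = DictStore()
save_checkpoint(store, "customer-1234", {"risk_score": 0.9, "decision": "manual_review"})
print(load_checkpoint(store, "customer-1234")["decision"])  # manual_review
```

The durability win in production is exactly this round trip: a worker can crash after `assess_risk` and a fresh worker can reload the thread's state and continue.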
- Invoke the graph with a stable thread ID
Use thread_id to isolate customer sessions or case files. In fintech, this maps cleanly to application IDs, KYC cases, dispute tickets, or loan workflows.
from langchain_core.messages import HumanMessage
config = {
    "configurable": {
        "thread_id": "customer-1234"
    }
}
result = app.invoke(
    {
        "messages": [HumanMessage(content="15000")],
        "customer_id": "customer-1234",
        "risk_score": 0.0,
        "decision": "",
    },
    config=config,
)
print(result["decision"])
print(result["risk_score"])
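Because thread IDs are just strings, a simple naming convention keeps sessions traceable across systems. A hypothetical helper (`thread_id_for` is not a LangGraph API, only a convention sketch):

```python
def thread_id_for(case_type: str, case_id: str) -> str:
    # e.g. "loan:app-8841", "kyc:case-1022", "dispute:tkt-555"
    return f"{case_type}:{case_id}"

review_config = {"configurable": {"thread_id": thread_id_for("loan", "app-8841")}}
print(review_config["configurable"]["thread_id"])  # loan:app-8841
```

Encoding the case type in the thread ID makes it straightforward to audit or expire all Redis state for one workflow class later.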
Testing the Integration
Run this end-to-end test to confirm both LangGraph execution and Redis persistence are working.
from langchain_core.messages import HumanMessage
config = {"configurable": {"thread_id": "test-session-001"}}
first_run = app.invoke(
    {
        "messages": [HumanMessage(content="5000")],
        "customer_id": "test-session-001",
        "risk_score": 0.0,
        "decision": "",
    },
    config=config,
)
second_run = app.invoke(
    {
        "messages": [HumanMessage(content="20000")],
        "customer_id": "test-session-001",
        "risk_score": 0.0,
        "decision": "",
    },
    config=config,
)
print(first_run["decision"])
print(second_run["decision"])
Expected output:
approve
manual_review
If you inspect Redis after running the test, you should see checkpoint data stored under keys created by the saver implementation.
Real-World Use Cases
- Loan pre-screening agents: Use LangGraph to orchestrate intake, policy checks, and escalation steps while Redis stores session context and partial results between steps.
- Fraud triage workflows: Route transactions through multiple nodes for enrichment, anomaly scoring, and human review without losing progress if a worker restarts.
- Customer support for banking apps: Keep conversation state in Redis while LangGraph manages tool calls for balance checks, card disputes, and account lock flows.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.