How to Integrate LangGraph with Redis for Production Pension-Fund AI
When you build AI agents for pension operations, you need two things working together: durable orchestration and fast state access. LangGraph gives you the control flow for multi-step pension workflows, while Redis gives you low-latency storage for session state, checkpoints, and shared context across workers.
This combo is useful when an agent needs to review contribution history, route a case to the right policy path, pause for human approval, and resume later without losing state. In production, that means fewer dropped workflows and a clean way to scale agent execution across multiple processes.
Prerequisites

- Python 3.10+
- A running Redis instance:
  - Local example: `redis-server`
  - Or a managed Redis endpoint with TLS enabled
- The Python packages `langgraph`, `langchain-core`, `langgraph-checkpoint-redis`, and `redis`
- A basic understanding of:
  - LangGraph state graphs
  - Python async/sync functions
  - Redis key/value concepts
- Environment variables configured:
  - `REDIS_URL=redis://localhost:6379/0`
Integration Steps

1. Install the dependencies

```bash
pip install langgraph langchain-core langgraph-checkpoint-redis redis
```

The Redis checkpointer lives in the separate `langgraph-checkpoint-redis` package, which provides the `langgraph.checkpoint.redis` module used below.
2. Define your graph state and connect Redis as the checkpoint store

For production AI systems, don’t keep agent state in memory. Use Redis-backed checkpointing so your pension workflow can resume after retries, restarts, or human handoffs. Note that `RedisSaver.from_conn_string` is a context manager; in a long-running worker, keep the block open (or manage the connection lifetime explicitly) for as long as the graph is in use.

```python
import os
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.redis import RedisSaver


class PensionState(TypedDict):
    member_id: str
    request_type: str
    status: str
    notes: str


def intake_node(state: PensionState) -> PensionState:
    return {
        **state,
        "status": "intake_complete",
        "notes": "Member request captured",
    }


def review_node(state: PensionState) -> PensionState:
    return {
        **state,
        "status": "review_complete",
        "notes": state["notes"] + " | Eligibility reviewed",
    }


builder = StateGraph(PensionState)
builder.add_node("intake", intake_node)
builder.add_node("review", review_node)
builder.add_edge(START, "intake")
builder.add_edge("intake", "review")
builder.add_edge("review", END)

redis_url = os.environ["REDIS_URL"]

# from_conn_string yields the saver; setup() creates the Redis
# indices the checkpointer needs on first run.
with RedisSaver.from_conn_string(redis_url) as checkpointer:
    checkpointer.setup()
    app = builder.compile(checkpointer=checkpointer)
```
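Because the node functions are plain Python functions, you can sanity-check the state transitions without Redis or a compiled graph. A minimal standalone sketch (it repeats the two node bodies from the step above so it runs on its own):

```python
from typing import TypedDict


class PensionState(TypedDict):
    member_id: str
    request_type: str
    status: str
    notes: str


def intake_node(state: PensionState) -> PensionState:
    # Same body as the graph node above: stamp intake completion.
    return {**state, "status": "intake_complete", "notes": "Member request captured"}


def review_node(state: PensionState) -> PensionState:
    # Same body as the graph node above: append the review marker.
    return {**state, "status": "review_complete", "notes": state["notes"] + " | Eligibility reviewed"}


state: PensionState = {
    "member_id": "M12345",
    "request_type": "benefit_estimate",
    "status": "new",
    "notes": "",
}
state = review_node(intake_node(state))
print(state["status"])  # review_complete
print(state["notes"])   # Member request captured | Eligibility reviewed
```

Unit tests like this catch broken transition logic long before it surfaces as a stuck case in production.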
3. Store shared workflow metadata in Redis

Use Redis for more than checkpoints. It’s a good place to keep lightweight metadata such as workflow versioning, feature flags, or per-member processing markers.

```python
import os

import redis

r = redis.Redis.from_url(os.environ["REDIS_URL"], decode_responses=True)


def save_workflow_metadata(member_id: str, request_type: str) -> None:
    key = f"pension:{member_id}:metadata"
    r.hset(
        key,
        mapping={
            "request_type": request_type,
            "workflow_version": "v1",
            "status": "queued",
        },
    )


def load_workflow_metadata(member_id: str) -> dict:
    key = f"pension:{member_id}:metadata"
    return r.hgetall(key)


save_workflow_metadata("M12345", "benefit_estimate")
print(load_workflow_metadata("M12345"))
```
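If per-member metadata only matters while a case is active, you can attach a TTL so Redis evicts it automatically. A small sketch under my own naming (`mark_metadata_ephemeral` is a hypothetical helper, not part of the workflow above; the client is passed in explicitly so the logic is easy to test):

```python
def metadata_key(member_id: str) -> str:
    # Same key convention as save_workflow_metadata above.
    return f"pension:{member_id}:metadata"


def mark_metadata_ephemeral(r, member_id: str, ttl_seconds: int = 7 * 24 * 3600) -> None:
    # EXPIRE puts a countdown on the whole hash; Redis deletes the key
    # when it elapses, so stale case metadata cleans itself up.
    r.expire(metadata_key(member_id), ttl_seconds)
```

Call it right after `save_workflow_metadata` for cases you expect to close within the TTL window; long-running cases should refresh or skip the expiry.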
4. Run the LangGraph workflow with a Redis-backed thread ID

The thread ID is what lets LangGraph resume the same pension case later. In practice, this is how you support long-running cases that may require manual review.

```python
config = {
    "configurable": {
        "thread_id": "case-M12345",
    }
}

initial_state = {
    "member_id": "M12345",
    "request_type": "benefit_estimate",
    "status": "new",
    "notes": "",
}

result = app.invoke(initial_state, config=config)
print(result)
```
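One practical convention (my own, not a LangGraph requirement) is to derive the thread ID deterministically from the case identifier, so any worker that knows the member ID can rebuild the config and resume the same checkpoint. `thread_config` is a hypothetical helper name:

```python
def thread_config(member_id: str) -> dict:
    # "case-<member_id>" matches the thread ID used above; any worker
    # holding the member ID can reconstruct this config and resume.
    return {"configurable": {"thread_id": f"case-{member_id}"}}


config = thread_config("M12345")
# app.invoke(initial_state, config=config)  # as in the step above
```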
5. Resume the same workflow after a pause

If a pension case needs human approval or an external system call, persist the checkpoint in Redis and continue later using the same thread ID.

```python
paused_state = {
    "member_id": "M12345",
    "request_type": "benefit_estimate",
    "status": "waiting_for_approval",
    "notes": "Sent to operations queue",
}
app.invoke(paused_state, config=config)

# Later, in another worker/process:
resumed = app.invoke(
    {
        "member_id": "M12345",
        "request_type": "benefit_estimate",
        "status": "approved",
        "notes": "",
    },
    config=config,
)
print(resumed)
```
Testing the Integration

Run a simple end-to-end check that writes metadata to Redis and executes the graph with checkpointing enabled.

```python
test_member_id = "TEST-9001"
save_workflow_metadata(test_member_id, "contribution_query")

test_input = {
    "member_id": test_member_id,
    "request_type": "contribution_query",
    "status": "new",
    "notes": "",
}
test_config = {"configurable": {"thread_id": f"case-{test_member_id}"}}

output = app.invoke(test_input, config=test_config)
print("Redis metadata:", load_workflow_metadata(test_member_id))
print("Graph output:", output)
```

Expected output:

```
Redis metadata: {'request_type': 'contribution_query', 'workflow_version': 'v1', 'status': 'queued'}
Graph output: {'member_id': 'TEST-9001', 'request_type': 'contribution_query', 'status': 'review_complete', 'notes': 'Member request captured | Eligibility reviewed'}
```
Real-World Use Cases

- Pension benefit estimation agents
  - Track each member case through intake, validation, calculation prep, and approval.
  - Use Redis checkpoints so calculations can resume after policy checks or downstream API failures.
- Contribution exception handling
  - Route missing employer contributions through a multi-step investigation flow.
  - Keep case metadata in Redis so multiple workers can process queues without duplicating work.
- Member service copilots
  - Maintain short-lived conversation context for support agents handling retirement questions.
  - Use LangGraph for structured branching logic and Redis for session persistence across requests.
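For the multi-worker queue pattern above, one common way to avoid duplicate processing is a short-lived Redis lock per case, built on the atomic SET with `nx`/`ex`. A sketch under my own naming (`acquire_case_lock` and `release_case_lock` are hypothetical helpers; the client `r` is passed in so the logic is testable):

```python
def acquire_case_lock(r, case_id: str, worker_id: str, ttl_seconds: int = 60) -> bool:
    # SET with nx=True only writes if the key is absent, and ex= gives
    # the lock a TTL so a crashed worker cannot hold a case forever.
    return bool(r.set(f"pension:lock:{case_id}", worker_id, nx=True, ex=ttl_seconds))


def release_case_lock(r, case_id: str) -> None:
    # Naive release: deletes unconditionally. A production version would
    # verify the stored worker_id first (e.g. with a Lua script).
    r.delete(f"pension:lock:{case_id}")
```

A worker calls `acquire_case_lock` before invoking the graph for a case and releases the lock afterwards; if acquisition returns False, another worker already owns the case.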
Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap? Grab the free AI Agent Starter Kit: architecture templates, compliance checklists, and a 7-email deep-dive course.