How to Integrate LangGraph for lending with Redis for production AI
Combining LangGraph for lending with Redis gives you a practical production pattern for credit workflows that need state, speed, and recoverability. LangGraph handles the multi-step lending logic; Redis stores session state, checkpoints, and shared memory so your agent can resume cleanly across requests and workers.
Prerequisites
- Python 3.10+
- A running Redis instance
  - Local: `redis-server`
  - Managed: AWS ElastiCache, Azure Cache for Redis, or Upstash
- Installed packages: `langgraph`, `langgraph-checkpoint-redis`, `langchain-core`, `langchain-openai` (or your preferred model provider), and `redis`
- An LLM API key configured in your environment
- Basic familiarity with LangGraph's `StateGraph`, plus Redis connection strings and key/value storage

Install dependencies:

```
pip install langgraph langgraph-checkpoint-redis langchain-core langchain-openai redis
```
Integration Steps
Step 1: Define the lending workflow state
For lending systems, keep state explicit. You want application data, credit decisions, and review flags to move through the graph in a controlled way.
```python
from typing import TypedDict, Optional

class LendingState(TypedDict):
    applicant_name: str
    income: float
    requested_amount: float
    credit_score: int
    risk_band: Optional[str]
    decision: Optional[str]
```
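Each node will return only the keys it changed, and LangGraph merges that partial dict into the running state. A plain-Python sketch of that merge behavior (no graph required):

```python
from typing import TypedDict, Optional

class LendingState(TypedDict):
    applicant_name: str
    income: float
    requested_amount: float
    credit_score: int
    risk_band: Optional[str]
    decision: Optional[str]

state: LendingState = {
    "applicant_name": "Amina Ndlovu",
    "income": 120000.0,
    "requested_amount": 25000.0,
    "credit_score": 701,
    "risk_band": None,
    "decision": None,
}

# A node returning {"risk_band": "medium"} updates only that key;
# everything else in the state is carried forward untouched.
state = {**state, **{"risk_band": "medium"}}
print(state["risk_band"])  # medium
```

Keeping the state explicit like this is what makes the audit trail readable later: every field change maps to one node's return value.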
Step 2: Build the LangGraph workflow
Use StateGraph to model the lending flow. In this example, we classify risk and make a simple approve/deny decision.
```python
from langgraph.graph import StateGraph, END

def assess_risk(state: LendingState) -> dict:
    score = state["credit_score"]
    income = state["income"]
    amount = state["requested_amount"]
    if score >= 720 and income >= amount * 3:
        risk_band = "low"
    elif score >= 650:
        risk_band = "medium"
    else:
        risk_band = "high"
    return {"risk_band": risk_band}

def decide(state: LendingState) -> dict:
    if state["risk_band"] == "low":
        decision = "approve"
    elif state["risk_band"] == "medium":
        decision = "manual_review"
    else:
        decision = "deny"
    return {"decision": decision}

workflow = StateGraph(LendingState)
workflow.add_node("assess_risk", assess_risk)
workflow.add_node("decide", decide)
workflow.set_entry_point("assess_risk")
workflow.add_edge("assess_risk", "decide")
workflow.add_edge("decide", END)
```
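Because `assess_risk` and `decide` are plain dict-in, dict-out functions, the decision matrix can be unit-tested without compiling the graph or touching Redis. A quick sketch, re-declaring the two functions from above in compact form:

```python
def assess_risk(state: dict) -> dict:
    score, income, amount = state["credit_score"], state["income"], state["requested_amount"]
    if score >= 720 and income >= amount * 3:
        return {"risk_band": "low"}
    if score >= 650:
        return {"risk_band": "medium"}
    return {"risk_band": "high"}

def decide(state: dict) -> dict:
    return {"decision": {"low": "approve", "medium": "manual_review"}.get(state["risk_band"], "deny")}

# (score, income, amount) -> expected decision
cases = [
    ((735, 90000, 15000), "approve"),         # strong score, income covers 6x the amount
    ((701, 120000, 25000), "manual_review"),  # below the 720 cutoff -> medium band
    ((600, 50000, 20000), "deny"),            # below 650 -> high band
]
for (score, income, amount), expected in cases:
    state = {"credit_score": score, "income": income, "requested_amount": amount}
    state.update(assess_risk(state))
    state.update(decide(state))
    assert state["decision"] == expected
print("all cases pass")
```

Locking the matrix down with tests like these before wiring in persistence keeps credit-policy changes reviewable in isolation.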
Step 3: Add Redis as the persistence layer
Redis is where production systems stop losing context between requests. With LangGraph, the common pattern is to use a checkpointer so graph execution can resume from saved state.
```python
import os

import redis
from langgraph.checkpoint.redis import RedisSaver  # provided by langgraph-checkpoint-redis

redis_url = os.environ["REDIS_URL"]  # e.g. redis://localhost:6379/0

r = redis.from_url(redis_url)
assert r.ping()  # optional sanity check

# Note: in recent langgraph-checkpoint-redis releases, from_conn_string is a
# context manager, and setup() must run once to create the checkpoint indices:
#     with RedisSaver.from_conn_string(redis_url) as checkpointer:
#         checkpointer.setup()
checkpointer = RedisSaver.from_conn_string(redis_url)

app = workflow.compile(checkpointer=checkpointer)
```
Step 4: Run the graph with a thread ID
In LangGraph, a thread_id identifies one lending case across multiple turns or steps. That thread ID becomes the anchor for Redis-backed persistence.
```python
config = {
    "configurable": {
        "thread_id": "loan-app-10042"
    }
}

initial_state = {
    "applicant_name": "Amina Ndlovu",
    "income": 120000,
    "requested_amount": 25000,
    "credit_score": 701,
}

result = app.invoke(initial_state, config=config)
print(result)
```
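For this input, a credit score of 701 falls below the 720 cutoff but above 650, so the printed result should land in the medium band with a `manual_review` decision. You can check the arithmetic directly:

```python
score, income, amount = 701, 120000, 25000

is_low = score >= 720 and income >= amount * 3  # False: income covers 3x, but 701 < 720
is_medium = not is_low and score >= 650         # True: 701 >= 650
print("expected band:", "medium" if is_medium else "other")  # expected band: medium
```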
Step 5: Store auxiliary application data in Redis
Use Redis for metadata that sits outside graph checkpoints: rate limits, idempotency keys, audit markers, or temporary application snapshots.
```python
import json

application_key = f"lending:{config['configurable']['thread_id']}:metadata"
metadata = {
    "channel": "mobile",
    "region": "ZA",
    "submitted_at": "2026-04-21T10:30:00Z",
}

r.set(application_key, json.dumps(metadata), ex=3600)  # 1-hour TTL
saved_metadata = json.loads(r.get(application_key))
print(saved_metadata)
```
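One of those uses, idempotency keys, typically comes down to a single Redis call: `r.set(key, token, nx=True, ex=ttl)`, which writes only when the key does not already exist. A minimal in-memory sketch of that NX semantics, with a dict standing in for Redis:

```python
store: dict[str, str] = {}

def set_nx(key: str, value: str) -> bool:
    """Mimics r.set(key, value, nx=True): succeed only if the key is absent."""
    if key in store:
        return False
    store[key] = value
    return True

# The first submission claims the key; a duplicate retry is rejected,
# so the same loan application is never processed twice.
assert set_nx("lending:loan-app-10042:submitted", "req-001") is True
assert set_nx("lending:loan-app-10042:submitted", "req-002") is False
print(store["lending:loan-app-10042:submitted"])  # req-001
```

In production you would also attach a TTL (`ex=...`) so abandoned applications do not block resubmission forever.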
Testing the Integration
Run a full pass and confirm both the graph output and Redis persistence behave as expected.
```python
test_config = {
    "configurable": {
        "thread_id": "loan-app-test-1"
    }
}

test_input = {
    "applicant_name": "Brian Moyo",
    "income": 90000,
    "requested_amount": 15000,
    "credit_score": 735,
}

output = app.invoke(test_input, config=test_config)
print("Decision:", output["decision"])
print("Risk band:", output["risk_band"])

redis_key = f"lending:{test_config['configurable']['thread_id']}:metadata"
r.set(redis_key, '{"status":"submitted"}')
print("Redis status:", r.get(redis_key).decode())
```
Expected output:

```
Decision: approve
Risk band: low
Redis status: {"status":"submitted"}
```
If you want to verify checkpointing specifically, invoke the same thread_id again with updated input and inspect whether LangGraph resumes from stored state instead of starting over.
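Conceptually, a checkpointer saves a snapshot keyed by thread_id after every step, so a second invocation with the same thread ID continues from the last snapshot instead of starting fresh. A toy sketch of that resume behavior, with a dict standing in for the Redis checkpoint store:

```python
checkpoints: dict[str, dict] = {}  # thread_id -> last saved state

def run_step(thread_id: str, updates: dict) -> dict:
    # Load the last checkpoint (or start fresh), apply the update, save.
    state = dict(checkpoints.get(thread_id, {}))
    state.update(updates)
    checkpoints[thread_id] = state
    return state

# First request writes a checkpoint; the second request, perhaps handled
# by a different worker, picks up where the first left off.
run_step("loan-app-10042", {"credit_score": 701})
resumed = run_step("loan-app-10042", {"risk_band": "medium"})
print(resumed)  # {'credit_score': 701, 'risk_band': 'medium'}
```

The real checkpointer stores considerably more (per-node channel values, step metadata), but the thread-keyed load/apply/save loop is the core idea.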
Real-World Use Cases
- Loan pre-screening agents
  - Collect applicant data conversationally.
  - Persist partial answers in Redis.
  - Resume the LangGraph flow when the customer returns later.
- Underwriting review orchestration
  - Route low-risk cases automatically.
  - Send medium-risk cases to manual review.
  - Use Redis to coordinate reviewer queues and case locks.
- Document collection workflows
  - Track missing payslips, bank statements, or identity docs.
  - Store upload status in Redis.
  - Drive next-step prompts from LangGraph based on current completion state.
For production lending systems, this combo works because each tool does one job well. LangGraph manages deterministic workflow control; Redis gives you fast shared state and durable execution context across distributed workers.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit