How to Integrate LangGraph for banking with Redis for production AI

By Cyprian Aarons · Updated 2026-04-21

Tags: langgraph-for-banking, redis, production-ai

Combining LangGraph for banking with Redis gives you a practical production pattern for regulated AI agents: LangGraph handles the stateful workflow and decision routing, while Redis gives you low-latency memory, session storage, and distributed coordination. For banking use cases, that means you can build agents that keep customer context across turns, recover from restarts, and enforce workflow checkpoints without turning your app into a pile of in-memory state.

Prerequisites

  • Python 3.10+
  • A running Redis instance
    • Local: redis-server
    • Or managed Redis like AWS ElastiCache / Azure Cache for Redis
  • A LangGraph banking project with your graph defined
  • Installed Python packages:
    • langgraph
    • redis
    • langchain-core if your graph uses standard message types
  • Environment variables configured:
    • REDIS_URL=redis://localhost:6379/0
    • Any model or bank API credentials your graph needs

Integration Steps

  1. Install dependencies and verify Redis connectivity.

pip install langgraph langgraph-checkpoint-redis redis langchain-core

import os
import redis

redis_client = redis.from_url(os.environ["REDIS_URL"])
print(redis_client.ping())  # True

If this fails, fix Redis first. Don’t debug LangGraph until your cache layer is healthy.
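Transient network blips are common with managed Redis, so it's worth wrapping the connectivity check in a short retry loop. A minimal sketch, where `check` is any callable that raises on failure (for example `redis_client.ping`):

```python
import time

def ping_with_retry(check, attempts=3, delay=0.5):
    """Call `check()` up to `attempts` times, backing off between failures."""
    for i in range(attempts):
        try:
            return check()
        except Exception:
            if i == attempts - 1:
                raise  # out of retries: surface the real error
            time.sleep(delay * (2 ** i))  # exponential backoff

# Usage with a real client: ping_with_retry(redis_client.ping)
```

If the check still fails after the retries, you have a Redis problem, not a LangGraph problem.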

  2. Define a banking state model for the graph.

Use a typed state so your workflow stays explicit. In banking systems, vague dicts become audit problems fast.

from typing import TypedDict, Annotated, List
from langgraph.graph.message import add_messages
from langchain_core.messages import BaseMessage

class BankingState(TypedDict):
    messages: Annotated[List[BaseMessage], add_messages]
    customer_id: str
    account_id: str
    intent: str
    risk_flag: bool

This state is what LangGraph will pass between nodes. Keep it small and deterministic.
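Conceptually, the add_messages annotation makes messages an append-only channel: node updates are merged into the existing list rather than replacing it (the real reducer also matches message IDs so edits can replace in place). A plain-Python sketch of that merge behavior:

```python
def merge_messages(existing, update):
    """Append-style merge, the behavior add_messages gives the channel.
    (The real LangGraph reducer also deduplicates/replaces by message ID.)"""
    return list(existing) + list(update)

state = {"messages": ["I want to transfer money"]}
update = {"messages": ["Sure - which account?"]}

state["messages"] = merge_messages(state["messages"], update["messages"])
print(state["messages"])  # both turns retained
```

The other fields (customer_id, intent, risk_flag) have no reducer, so a node's return value overwrites them outright.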

  3. Create a Redis-backed checkpoint store for durable graph execution.

LangGraph supports checkpointing so you can persist thread state across requests. For production AI in banking, this is the difference between “agent forgot everything” and “agent resumed exactly where it left off.”

import os
from langgraph.checkpoint.redis import RedisSaver  # provided by the langgraph-checkpoint-redis package

redis_url = os.environ["REDIS_URL"]

# Note: in some releases from_conn_string is a context manager:
#     with RedisSaver.from_conn_string(redis_url) as checkpointer:
#         checkpointer.setup()
checkpointer = RedisSaver.from_conn_string(redis_url)
checkpointer.setup()

If your version of LangGraph exposes a different Redis saver class, use the equivalent checkpointer implementation available in your installed package. The pattern stays the same: create a saver, initialize schema/state, then attach it to the graph compile step.
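To see what that contract amounts to, here is a toy in-memory sketch: a saver stores the latest state per thread ID and hands it back on the next invocation. This is illustrative only; the names are not the LangGraph checkpointer interface, and real savers like RedisSaver also version checkpoints and record write metadata.

```python
class ToySaver:
    """Illustrative only: latest-state-per-thread storage."""
    def __init__(self):
        self._store = {}

    def put(self, thread_id, state):
        self._store[thread_id] = dict(state)

    def get(self, thread_id):
        return self._store.get(thread_id)

saver = ToySaver()
saver.put("bank-session-001", {"intent": "transfer", "risk_flag": False})
# ...process restarts, a new worker picks the thread back up...
resumed = saver.get("bank-session-001")
print(resumed["intent"])  # transfer
```

Swap the dict for Redis and you get durability across workers and restarts, which is the whole point in a banking deployment.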

  4. Build the LangGraph workflow and compile it with Redis persistence.

Here’s a simple banking flow with an intent-classification node, a routing function, and two downstream nodes. The important part is .compile(checkpointer=...), which binds execution state to Redis.

from langgraph.graph import StateGraph, END

def classify_intent(state: BankingState):
    text = " ".join(msg.content for msg in state["messages"]).lower()
    if "transfer" in text or "send money" in text:
        return {"intent": "transfer", "risk_flag": False}
    if "card" in text or "fraud" in text:
        return {"intent": "card_issue", "risk_flag": True}
    return {"intent": "general_support", "risk_flag": False}

def route(state: BankingState):
    if state["risk_flag"]:
        return "manual_review"
    if state["intent"] == "transfer":
        return "transfer_flow"
    return END

def manual_review(state: BankingState):
    # Placeholder: enqueue the case for human/analyst review here.
    return {"messages": []}

def transfer_flow(state: BankingState):
    # Placeholder: call the payments orchestration service here.
    return {"messages": []}

builder = StateGraph(BankingState)
builder.add_node("classify_intent", classify_intent)
builder.add_node("manual_review", manual_review)
builder.add_node("transfer_flow", transfer_flow)

builder.set_entry_point("classify_intent")
builder.add_conditional_edges("classify_intent", route)
builder.add_edge("manual_review", END)
builder.add_edge("transfer_flow", END)

graph = builder.compile(checkpointer=checkpointer)

In production, this graph would usually call internal bank services from each node:

  • account lookup service
  • fraud scoring service
  • payments orchestration service

Redis keeps the thread checkpointed between those calls.
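For example, a node that calls a fraud scoring service would typically cache scores in Redis so retries and resumed threads don't re-hit the service. A sketch with a plain dict standing in for Redis and a hypothetical score_customer function (in production you'd use redis_client.setex/get and your real scoring endpoint):

```python
def score_customer(customer_id):
    """Stand-in for a real fraud scoring service call."""
    return 0.12  # hypothetical score in [0, 1]

cache = {}  # stand-in for Redis

def fraud_score_node(state):
    key = f"fraud:{state['customer_id']}"
    if key not in cache:
        cache[key] = score_customer(state["customer_id"])  # only on cache miss
    return {"risk_flag": cache[key] > 0.5}

state = {"customer_id": "cust_123"}
print(fraud_score_node(state))  # {'risk_flag': False}
```

Because the score is keyed by customer, a retried node hits the cache instead of re-scoring, which keeps the workflow idempotent from the service's point of view.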

  5. Invoke the graph with a stable thread ID and persist conversation/session data in Redis.

Pass a stable thread_id in config["configurable"] so each customer session maps to one durable conversation thread. That lets you resume flows after retries or process restarts.

from langchain_core.messages import HumanMessage

result = graph.invoke(
    {
        "messages": [HumanMessage(content="I want to transfer $500 to my savings account")],
        "customer_id": "cust_123",
        "account_id": "acct_456",
        "intent": "",
        "risk_flag": False,
    },
    config={"configurable": {"thread_id": "bank-session-001"}}
)

print(result["intent"])
print(result["risk_flag"])

If you also want short-lived operational memory outside the graph, store it directly in Redis:

redis_client.setex(
    "bank-session-001:last_intent",
    3600,
    result["intent"]
)

print(redis_client.get("bank-session-001:last_intent").decode())
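Session keys multiply quickly, so it helps to standardize the naming scheme up front. A small helper (the bank: prefix and field names are just a convention for this article, not anything Redis requires):

```python
def session_key(session_id: str, field: str) -> str:
    """Build a namespaced Redis key like 'bank:bank-session-001:last_intent'."""
    return f"bank:{session_id}:{field}"

key = session_key("bank-session-001", "last_intent")
print(key)  # bank:bank-session-001:last_intent
# Then: redis_client.setex(key, 3600, result["intent"])
```

A consistent namespace also makes it easy to expire or audit everything for one session with a single key pattern.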

Testing the Integration

Run an end-to-end check by invoking the same thread twice and confirming Redis-backed continuity.

from langchain_core.messages import HumanMessage

thread_id = "test-thread-1001"

first = graph.invoke(
    {
        "messages": [HumanMessage(content="I need help with a card charge dispute")],
        "customer_id": "cust_999",
        "account_id": "acct_888",
        "intent": "",
        "risk_flag": False,
    },
    config={"configurable": {"thread_id": thread_id}}
)

second = graph.invoke(
    {
        "messages": [HumanMessage(content="Also show me my recent transfers")],
        "customer_id": "cust_999",
        "account_id": "acct_888",
        "intent": "",
        "risk_flag": False,
    },
    config={"configurable": {"thread_id": thread_id}}
)

print(first["intent"], first["risk_flag"])
print(second["intent"], second["risk_flag"])

Expected output:

card_issue True
transfer False

Note that the second result differs from what a stateless classifier would produce: because the thread is checkpointed, classify_intent runs over the accumulated message history, and "transfers" in the second message matches the transfer branch before the card branch is checked. That continuity is exactly what this test verifies. If your checkpointing is wired correctly, both invocations succeed under the same thread ID without losing execution history or crashing on restart.
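You can sanity-check the cross-turn behavior by running the classifier's keyword rules over the joined history by hand, since checkpointing means the second invocation sees both messages:

```python
def classify(text: str):
    """Same keyword rules as classify_intent, applied to the joined history."""
    text = text.lower()
    if "transfer" in text or "send money" in text:
        return ("transfer", False)
    if "card" in text or "fraud" in text:
        return ("card_issue", True)
    return ("general_support", False)

turn1 = "I need help with a card charge dispute"
turn2 = "Also show me my recent transfers"

print(classify(turn1))                # ('card_issue', True)
print(classify(turn1 + " " + turn2))  # ('transfer', False): "transfers" matches first
```

If you want per-turn intent instead, classify only the newest message rather than the whole history.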

Real-World Use Cases

  • Customer support agent with auditability
    • Keep every interaction checkpointed in Redis while LangGraph routes disputes, balance questions, and payment issues through separate branches.
  • Fraud triage assistant
    • Use LangGraph to orchestrate review steps and Redis to store session context, temporary flags, and handoff metadata for analysts.
  • Payments workflow agent
    • Chain validation, limits checks, approvals, and retry logic in LangGraph while using Redis for idempotency keys and recovery after failures.
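
The idempotency-key pattern mentioned above is simple to sketch: claim the key before performing the side effect, and skip the work if it was already claimed. Here a dict's setdefault stands in for Redis SET NX semantics (succeed only if the key did not exist); in production you'd use redis_client.set(key, "1", nx=True, ex=3600):

```python
claimed = {}  # stand-in for Redis

def execute_once(idempotency_key: str, action):
    """Run `action` only the first time this key is seen."""
    if claimed.setdefault(idempotency_key, False):
        return "skipped (duplicate)"
    claimed[idempotency_key] = True
    return action()

print(execute_once("payment:cust_123:txn_789", lambda: "transferred $500"))
print(execute_once("payment:cust_123:txn_789", lambda: "transferred $500"))
# transferred $500
# skipped (duplicate)
```

With the real Redis version, the nx=True write is atomic across workers, so a retried payment node can never execute the transfer twice.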

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
