How to Integrate LangGraph with Redis for Production Insurance AI
Combining LangGraph with Redis gives you a practical production pattern for insurance: agent workflows that stay stateful, fast, and recoverable. In insurance systems, that usually means claim triage, policy Q&A, FNOL (first notice of loss) intake, and underwriting checks that need short-term memory, durable checkpoints, and low-latency retrieval.
Prerequisites
- Python 3.10+
- A running Redis instance
  - Local: `redis-server`
  - Managed: Redis Cloud or AWS ElastiCache
- Installed packages:
  - `langgraph`
  - `langchain-core`
  - `redis`
  - `langchain-redis` if you want Redis-backed vector/search patterns
- An API key for your model provider if your graph uses an LLM node
- Basic familiarity with:
  - LangGraph state graphs
  - Redis key/value operations
  - Insurance domain workflows like claims, underwriting, and policy servicing
Integration Steps
Step 1: Install the dependencies

Keep the stack explicit. Don’t mix graph orchestration and persistence concerns in the same layer. Include `langgraph-checkpoint-redis` if you plan to use the Redis checkpointer later in this guide.

```bash
pip install langgraph langchain-core redis langchain-redis langgraph-checkpoint-redis
```
Step 2: Connect to Redis and define your graph state

In production, Redis should store session state or checkpoints keyed by claim ID, policy ID, or conversation ID.

```python
from typing import TypedDict, Annotated

from redis import Redis
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class InsuranceState(TypedDict):
    messages: Annotated[list, add_messages]
    claim_id: str
    policy_number: str
    risk_score: int


redis_client = Redis(
    host="localhost",
    port=6379,
    decode_responses=True,
)
```
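Consistent key naming pays off once multiple workflows share the same Redis instance. A small sketch of a hypothetical naming helper; the `claim:`/`policy:` prefixes are assumptions, chosen to match the `claim:{claim_id}` hash written by the triage node in this guide:

```python
def claim_key(claim_id: str) -> str:
    """Redis hash key for a claim's audit trail."""
    return f"claim:{claim_id}"


def policy_key(policy_number: str) -> str:
    """Redis key for cached policy metadata."""
    return f"policy:{policy_number}"


print(claim_key("CLM-10021"))  # claim:CLM-10021
```

Centralizing key construction in one place keeps audit-trail reads, TTL policies, and cleanup scripts from drifting apart as the system grows.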
Step 3: Build a LangGraph workflow and persist workflow metadata in Redis

This example creates a simple insurance triage graph. The node writes a compact audit trail to Redis so you can inspect decisions later.

```python
from langchain_core.messages import HumanMessage, AIMessage


def assess_claim(state: InsuranceState):
    text = state["messages"][-1].content.lower()
    risk_score = 90 if any(word in text for word in ["injury", "fire", "fraud"]) else 30
    # Compact audit trail, keyed by claim ID, for later inspection.
    redis_client.hset(
        f"claim:{state['claim_id']}",
        mapping={
            "policy_number": state["policy_number"],
            "risk_score": str(risk_score),
            "last_message": text,
        },
    )
    return {
        "risk_score": risk_score,
        "messages": [AIMessage(content=f"Claim triage complete. Risk score={risk_score}")],
    }


graph = StateGraph(InsuranceState)
graph.add_node("assess_claim", assess_claim)
graph.add_edge(START, "assess_claim")
graph.add_edge("assess_claim", END)
app = graph.compile()
```
Step 4: Add durable checkpointing with Redis

This is the part that matters in production. If the agent crashes mid-flow, you want to resume from the last valid state instead of restarting the whole claim conversation.

Use a Redis-backed checkpointer when compiling the graph. The `langgraph.checkpoint.redis` module ships in the separate `langgraph-checkpoint-redis` package; `RedisSaver.from_conn_string` is a context manager, and `setup()` creates the indices it needs on first run.

```python
from langgraph.checkpoint.redis import RedisSaver

with RedisSaver.from_conn_string("redis://localhost:6379/0") as checkpointer:
    checkpointer.setup()  # create the checkpoint indices on first run
    app = graph.compile(checkpointer=checkpointer)

    initial_state = {
        "messages": [HumanMessage(content="Customer reports fire damage in kitchen.")],
        "claim_id": "CLM-10021",
        "policy_number": "POL-77881",
        "risk_score": 0,
    }
    config = {"configurable": {"thread_id": "thread-clm-10021"}}
    result = app.invoke(initial_state, config=config)
    print(result["risk_score"])
```

In a long-lived service, manage the saver's lifecycle at application startup and shutdown rather than in a short `with` block.
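Resuming is just re-invoking with the same `thread_id`. A minimal sketch, assuming the `thread-{claim id}` naming used above (the helper name is hypothetical):

```python
def resume_config(claim_id: str) -> dict:
    # Reusing the same thread_id makes LangGraph load the last saved
    # checkpoint for this claim instead of starting a fresh thread.
    return {"configurable": {"thread_id": f"thread-{claim_id.lower()}"}}


# After a crash or restart, pass only the new message; the checkpointer
# restores the prior messages, claim_id, and risk_score for the thread:
# result = app.invoke(
#     {"messages": [HumanMessage(content="Any update on my claim?")]},
#     config=resume_config("CLM-10021"),
# )
print(resume_config("CLM-10021"))
```

You can also inspect a thread's saved state without invoking, via `app.get_state(config)`.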
Step 5: Use Redis as a fast retrieval layer for policy context

For insurance agents, you often need policy clauses or underwriting rules during the run. Store those documents in Redis and retrieve them before or during graph execution. Note that the vector store needs a real embeddings object; passing `None` would fail as soon as documents are added.

```python
from langchain_redis import RedisVectorStore
from langchain_core.documents import Document

# Assumes an embeddings object is configured elsewhere, e.g.:
# from langchain_openai import OpenAIEmbeddings
# embeddings = OpenAIEmbeddings()

vector_store = RedisVectorStore(
    embeddings,
    redis_url="redis://localhost:6379/0",
    index_name="insurance_policy_index",
)

vector_store.add_documents([
    Document(page_content="Water damage is covered only if sudden and accidental."),
    Document(page_content="Fire damage requires immediate adjuster review."),
])

docs = vector_store.similarity_search("Is fire damage covered?", k=1)
print(docs[0].page_content)
```
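One common pattern is to run the similarity search before the LLM node and fold the hits into the prompt. A sketch of a hypothetical formatter (pure string logic, no Redis required):

```python
def build_policy_context(clauses: list[str]) -> str:
    """Join retrieved policy clauses into a prompt-ready context block."""
    lines = [f"- {clause.strip()}" for clause in clauses if clause.strip()]
    return "Relevant policy clauses:\n" + "\n".join(lines)


# Usage with the vector store above:
# docs = vector_store.similarity_search("Is fire damage covered?", k=3)
# context = build_policy_context([d.page_content for d in docs])
context = build_policy_context(["Fire damage requires immediate adjuster review."])
print(context)
```

Keeping the formatting step separate from retrieval makes it easy to unit-test and to swap the retrieval backend later.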
Testing the Integration
Run a full invoke against the compiled graph and verify both the returned state and the Redis hash.
```python
from langchain_core.messages import HumanMessage

test_input = {
    "messages": [HumanMessage(content="The customer says there was smoke damage after an electrical fire.")],
    "claim_id": "CLM-20001",
    "policy_number": "POL-90011",
    "risk_score": 0,
}
config = {"configurable": {"thread_id": "thread-clm-20001"}}
output = app.invoke(test_input, config=config)

print("Returned risk score:", output["risk_score"])
print("Redis record:", redis_client.hgetall("claim:CLM-20001"))
```
Expected output:
```
Returned risk score: 90
Redis record: {'policy_number': 'POL-90011', 'risk_score': '90', 'last_message': 'the customer says there was smoke damage after an electrical fire.'}
```
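The triage heuristic itself is pure string logic, so you can also unit-test it without a live Redis or a compiled graph. A sketch that mirrors the keyword rule from `assess_claim` (the standalone function name here is hypothetical):

```python
HIGH_RISK_KEYWORDS = ("injury", "fire", "fraud")


def score_claim_text(text: str) -> int:
    # Mirrors assess_claim: any high-risk keyword scores 90, otherwise 30.
    lowered = text.lower()
    return 90 if any(word in lowered for word in HIGH_RISK_KEYWORDS) else 30


assert score_claim_text("smoke damage after an electrical fire") == 90
assert score_claim_text("minor scratch on bumper") == 30
```

Extracting the rule into a pure function like this keeps fast unit tests separate from the slower integration test above.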
Real-World Use Cases
- Claims triage agent: classify severity, store audit trails in Redis, and resume interrupted conversations without losing context.
- Policy servicing assistant: retrieve policy clauses from Redis-backed search while LangGraph orchestrates multi-step answer generation.
- Underwriting pre-checks: cache applicant attributes and decision signals in Redis so repeated evaluations stay fast across sessions.
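For the underwriting pre-check case, a TTL keeps cached signals from going stale. A hedged sketch; the key layout, field names, and default expiry are assumptions, and `client` can be any Redis-compatible client exposing `hset`/`expire` (for example `redis.Redis`):

```python
def cache_applicant_signals(client, applicant_id: str, signals: dict,
                            ttl_seconds: int = 3600) -> str:
    """Cache underwriting decision signals in a per-applicant hash with a TTL."""
    key = f"applicant:{applicant_id}:signals"
    client.hset(key, mapping={k: str(v) for k, v in signals.items()})
    client.expire(key, ttl_seconds)  # evict stale signals automatically
    return key


# Usage with the redis_client defined earlier in this guide:
# cache_applicant_signals(redis_client, "APP-31007",
#                         {"credit_band": "B", "prior_claims": 2}, ttl_seconds=900)
```

The TTL acts as a freshness contract: repeated evaluations within the window hit the cache, while anything older forces a re-fetch of applicant data.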
Keep learning
- The complete AI Agents Roadmap (my full 8-step breakdown)
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me (I build AI for banks and insurance companies)
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit