How to Integrate LangGraph for insurance with Redis for startups

By Cyprian Aarons · Updated 2026-04-21

Tags: langgraph-for-insurance, redis, startups

Combining LangGraph with Redis gives you a practical control plane for agentic insurance workflows. LangGraph handles the stateful orchestration across underwriting, claims, and policy servicing, while Redis gives you fast persistence for checkpoints, session state, and short-lived memory.

For startups, this is the difference between a demo agent and a system that can survive retries, handoffs, and partial failures without losing context.

Prerequisites

  • Python 3.10+
  • A Redis instance running locally or in the cloud
  • An OpenAI-compatible model key or another LLM provider supported by your LangGraph stack (only needed once you add model-backed nodes; the example graph below is deterministic)
  • langgraph, langgraph-checkpoint-redis, langchain, redis, and python-dotenv installed
  • A basic understanding of:
    • graph nodes and edges in LangGraph
    • Redis key/value storage
    • async Python

Install the packages:

pip install langgraph langgraph-checkpoint-redis langchain redis python-dotenv

Set your environment variables:

export REDIS_URL="redis://localhost:6379/0"
export OPENAI_API_KEY="your-key"
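Since python-dotenv is in the prerequisites, here is a minimal sketch of loading the same variables from a .env file instead of shell exports. The import is wrapped in a try/except so the snippet also runs where the package is absent; the fallback default mirrors the export above.

```python
import os

# python-dotenv is optional here: if installed, it loads a .env file from
# the working directory; otherwise the shell environment is used as-is.
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

# Fall back to a sensible local default when REDIS_URL is unset.
redis_url = os.getenv("REDIS_URL", "redis://localhost:6379/0")
print(redis_url)
```

This keeps local runs, CI, and deployed workers reading configuration the same way.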

Integration Steps

1) Connect to Redis and define a checkpoint store

LangGraph can persist graph state through checkpointing. For insurance workflows, that matters because claims and underwriting often span multiple turns and external tool calls.

Use Redis as the backing store for checkpoints so each conversation or case can resume cleanly.

import os
from redis import Redis
from langgraph.checkpoint.redis import RedisSaver

redis_url = os.getenv("REDIS_URL", "redis://localhost:6379/0")

# Raw Redis client for any custom reads/writes you need later
redis_client = Redis.from_url(redis_url, decode_responses=True)

# LangGraph checkpoint saver backed by Redis.
# In recent releases of langgraph-checkpoint-redis, from_conn_string is a
# context manager; if yours is, use:
#   with RedisSaver.from_conn_string(redis_url) as checkpointer:
#       checkpointer.setup()
# and keep graph compilation and invocations inside the block.
checkpointer = RedisSaver.from_conn_string(redis_url)
checkpointer.setup()

2) Define an insurance workflow state

Keep the state explicit. In insurance systems, hidden state is how you lose auditability.

This example tracks a simple claim triage flow: intake, classification, and next action.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class ClaimState(TypedDict):
    messages: Annotated[list, add_messages]
    claim_type: str
    severity: str
    next_action: str
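The Annotated reducer is the one piece of implicit behavior here: updates to messages are merged into the existing list, while plain keys like claim_type are overwritten by whatever a node returns. A plain-dict sketch of that merge rule (a deliberate simplification of LangGraph's channel semantics, not its actual implementation):

```python
# Apply a node's partial update to state: keys with a reducer are merged,
# all other keys are replaced. This mimics how the Annotated[list,
# add_messages] channel accumulates messages across turns.
def apply_update(state, update, reducers):
    merged = dict(state)
    for key, value in update.items():
        reduce = reducers.get(key)
        merged[key] = reduce(state.get(key, []), value) if reduce else value
    return merged

state = {"messages": ["intake note"], "claim_type": ""}
update = {"messages": ["follow-up note"], "claim_type": "property"}

result = apply_update(state, update, {"messages": lambda a, b: a + b})
print(result)  # → {'messages': ['intake note', 'follow-up note'], 'claim_type': 'property'}
```

This is why resuming a thread in step 5 appends the new customer message instead of wiping the transcript.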

3) Build LangGraph nodes for claim triage

Each node should do one job. For startups, that keeps debugging manageable when the workflow grows from claims triage to FNOL, fraud checks, and adjuster routing.

Below is a minimal but production-shaped graph with deterministic outputs.

def classify_claim(state: ClaimState):
    text = " ".join([m.content for m in state["messages"]]).lower()

    if "collision" in text or "accident" in text:
        claim_type = "auto"
    elif "water" in text or "leak" in text:
        claim_type = "property"
    else:
        claim_type = "general"

    return {"claim_type": claim_type}

def assess_severity(state: ClaimState):
    text = " ".join([m.content for m in state["messages"]]).lower()

    if any(word in text for word in ["injury", "hospital", "fire", "flood"]):
        severity = "high"
        next_action = "escalate_to_human_adjuster"
    else:
        severity = "low"
        next_action = "continue_automated_review"

    return {"severity": severity, "next_action": next_action}

graph = StateGraph(ClaimState)
graph.add_node("classify_claim", classify_claim)
graph.add_node("assess_severity", assess_severity)

graph.add_edge(START, "classify_claim")
graph.add_edge("classify_claim", "assess_severity")
graph.add_edge("assess_severity", END)

app = graph.compile(checkpointer=checkpointer)
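Because each node is a plain function over the state dict, the triage logic can be unit-tested without Redis, a model key, or a compiled graph. A sketch, with SimpleNamespace standing in for a message object's .content attribute; the functions mirror the ones defined above:

```python
from types import SimpleNamespace

# Mirrors classify_claim above: keyword-based routing over message text.
def classify_claim(state):
    text = " ".join(m.content for m in state["messages"]).lower()
    if "collision" in text or "accident" in text:
        return {"claim_type": "auto"}
    if "water" in text or "leak" in text:
        return {"claim_type": "property"}
    return {"claim_type": "general"}

# Mirrors assess_severity above: severe keywords trigger human escalation.
def assess_severity(state):
    text = " ".join(m.content for m in state["messages"]).lower()
    if any(word in text for word in ["injury", "hospital", "fire", "flood"]):
        return {"severity": "high", "next_action": "escalate_to_human_adjuster"}
    return {"severity": "low", "next_action": "continue_automated_review"}

state = {"messages": [SimpleNamespace(content="Water leak in the bathroom ceiling.")]}
print(classify_claim(state))   # → {'claim_type': 'property'}
print(assess_severity(state))  # → {'severity': 'low', 'next_action': 'continue_automated_review'}
```

Keeping nodes pure like this is what makes the graph safe to grow later.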

4) Persist session context in Redis and invoke the graph

Use a stable thread_id per policyholder case or claim ID. That gives you resumable conversations across web requests and worker retries.

You can also store lightweight metadata in Redis alongside the graph checkpoint.

from langchain_core.messages import HumanMessage

claim_id = "claim_100245"
thread_id = f"insurance:{claim_id}"

# Optional custom metadata stored directly in Redis
redis_client.hset(
    f"case:{claim_id}",
    mapping={
        "status": "open",
        "assigned_team": "claims-intake",
    },
)

result = app.invoke(
    {
        "messages": [HumanMessage(content="Customer reports a water leak causing ceiling damage.")],
        "claim_type": "",
        "severity": "",
        "next_action": "",
    },
    config={"configurable": {"thread_id": thread_id}},
)

print(result)
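Since the thread ID doubles as a business identifier, it helps to derive it in exactly one place. thread_id_for is a hypothetical helper (not part of LangGraph) so web handlers and background workers compute the same ID for the same claim:

```python
# Hypothetical helper: one naming convention for thread ids across services,
# matching the f"insurance:{claim_id}" pattern used above.
def thread_id_for(claim_id: str) -> str:
    return f"insurance:{claim_id}"

config = {"configurable": {"thread_id": thread_id_for("claim_100245")}}
print(config["configurable"]["thread_id"])  # → insurance:claim_100245
```

A drifting ID scheme is the most common way resumption silently breaks, because a mismatched thread_id starts a fresh checkpoint instead of loading the old one.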

5) Resume the same workflow from Redis-backed state

This is where the integration pays off. If your API crashes after intake but before routing, you can fetch the same thread later and continue from the saved checkpoint.

resumed = app.invoke(
    {
        "messages": [HumanMessage(content="There is visible mold and damaged drywall.")],
        "claim_type": "",
        "severity": "",
        "next_action": "",
    },
    config={"configurable": {"thread_id": thread_id}},
)

print(resumed["claim_type"])
print(resumed["severity"])
print(resumed["next_action"])

Testing the Integration

Run this snippet with the compiled graph and Redis client from the earlier steps in scope to verify that LangGraph checkpointing and Redis persistence work together. The Redis lookup at the end assumes step 4 already ran and wrote the case:claim_100245 hash.

from langchain_core.messages import HumanMessage

test_thread_id = "insurance:test-001"

output = app.invoke(
    {
        "messages": [HumanMessage(content="I had a car accident with minor damage.")],
        "claim_type": "",
        "severity": "",
        "next_action": "",
    },
    config={"configurable": {"thread_id": test_thread_id}},
)

print("Claim type:", output["claim_type"])
print("Severity:", output["severity"])
print("Next action:", output["next_action"])

stored_case = redis_client.hgetall("case:claim_100245")
print("Redis case record:", stored_case)

Expected output:

Claim type: auto
Severity: low
Next action: continue_automated_review
Redis case record: {'status': 'open', 'assigned_team': 'claims-intake'}

Real-World Use Cases

  • Claims intake assistant

    • Triage FNOL submissions into auto/property/life buckets.
    • Store conversation checkpoints so customers can resume after document upload delays.
  • Underwriting pre-screening

    • Collect applicant answers across multiple turns.
    • Cache eligibility flags and risk notes in Redis for downstream pricing services.
  • Adjuster copilot

    • Keep case context across document review, photo analysis, and follow-up questions.
    • Use Redis to share short-lived case summaries between worker processes.

If you are building this for a startup, keep the graph small first. Add human escalation paths early, persist every meaningful step to Redis-backed checkpoints, and treat thread IDs as first-class business identifiers.
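For the escalation path, LangGraph's conditional edges are the natural fit: a routing function inspects the state and names the next node. Here is a sketch of the pure routing logic, with notify_adjuster as a hypothetical human-escalation node; in the graph above you would wire it with graph.add_conditional_edges("assess_severity", route_on_severity). The string "__end__" stands in for LangGraph's END sentinel so the sketch stays dependency-free.

```python
# Routing function for a conditional edge: high-severity claims go to a
# hypothetical notify_adjuster node, everything else terminates the graph.
def route_on_severity(state: dict) -> str:
    if state.get("severity") == "high":
        return "notify_adjuster"
    return "__end__"

print(route_on_severity({"severity": "high"}))  # → notify_adjuster
print(route_on_severity({"severity": "low"}))   # → __end__
```

Because the router only reads state, it stays as testable as the triage nodes themselves.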


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
