How to Integrate LangGraph for Insurance with Redis for Multi-Agent Systems

By Cyprian Aarons · Updated 2026-04-21
Tags: langgraph-for-insurance, redis, multi-agent-systems

Combining LangGraph for insurance with Redis gives you a clean way to run multi-agent insurance workflows with shared state, durable checkpoints, and fast cross-agent coordination. The practical win is simple: one agent can extract policy details, another can validate coverage, and Redis keeps the conversation and task state available across retries, worker restarts, and parallel execution.

Prerequisites

  • Python 3.10+
  • A Redis server running locally or via Redis Cloud
  • A LangGraph-based insurance workflow already defined, or at least a basic graph you can extend
  • langgraph, redis, and your LLM provider SDK installed
  • Environment variables configured:
    • REDIS_URL
    • OPENAI_API_KEY or the key for your model provider

Install the packages:

pip install langgraph redis langchain-openai

Integration Steps

  1. Create a Redis client for shared state and coordination

Use Redis for lightweight coordination between agents and as a persistence layer for checkpoints.

import os
import redis

redis_client = redis.Redis.from_url(
    os.environ["REDIS_URL"],
    decode_responses=True,
)

# Basic connectivity check
assert redis_client.ping() is True

# Shared metadata for an insurance case
redis_client.hset(
    "insurance:case:123",
    mapping={
        "policy_id": "POL-99821",
        "claim_id": "CLM-4412",
        "status": "received",
    },
)
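Any other agent in the process can read that shared record back by key. A minimal sketch; the `case_key` and `read_case` helpers are illustrative names, not part of redis-py, and `main()` assumes a reachable `REDIS_URL`:

```python
import os

def case_key(case_id: str) -> str:
    """Build the Redis hash key for a case, matching the hset call above."""
    return f"insurance:case:{case_id}"

def read_case(client, case_id: str) -> dict:
    """Fetch the shared case metadata written by any agent."""
    return client.hgetall(case_key(case_id))

def main():
    import redis  # requires a running Redis server
    client = redis.Redis.from_url(os.environ["REDIS_URL"], decode_responses=True)
    print(read_case(client, "123"))
```

Keeping key construction in one helper avoids the classic multi-agent bug where two agents write to slightly different key names for the same case.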

  2. Define LangGraph nodes for insurance-specific tasks

A common pattern is one node per responsibility: intake, policy lookup, coverage validation, fraud screening, and final decisioning.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

class InsuranceState(TypedDict):
    claim_text: str
    extracted_fields: dict
    coverage_result: str
    decision: str

def extract_claim(state: InsuranceState):
    prompt = f"Extract structured claim fields from:\n{state['claim_text']}"
    response = llm.invoke(prompt)
    return {"extracted_fields": {"raw": response.content}}

def validate_coverage(state: InsuranceState):
    # Replace with a real policy service lookup using state["extracted_fields"]
    if "accident" in state["claim_text"].lower():
        return {"coverage_result": "covered"}
    return {"coverage_result": "needs_review"}

def decide(state: InsuranceState):
    if state["coverage_result"] == "covered":
        return {"decision": "approve"}
    return {"decision": "route_to_adjuster"}

graph = StateGraph(InsuranceState)
graph.add_node("extract_claim", extract_claim)
graph.add_node("validate_coverage", validate_coverage)
graph.add_node("decide", decide)

graph.add_edge(START, "extract_claim")
graph.add_edge("extract_claim", "validate_coverage")
graph.add_edge("validate_coverage", "decide")
graph.add_edge("decide", END)
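
The linear edges above run every node on every claim. If you later want the graph itself to branch, say, skipping auto-decisioning when coverage needs review, LangGraph's conditional edges handle that. A sketch assuming the same `InsuranceState`; `manual_review` is a hypothetical extra node you would add:

```python
def route_after_coverage(state: dict) -> str:
    """Return the name of the next node based on the coverage result."""
    if state["coverage_result"] == "covered":
        return "decide"
    return "manual_review"  # hypothetical node for adjuster review

# Wire it in instead of the fixed validate_coverage -> decide edge:
# graph.add_conditional_edges("validate_coverage", route_after_coverage)
```

Because the router is a plain function of state, it is trivial to unit-test without invoking any LLM.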

  3. Add Redis-backed checkpointing so agents can resume work

This is the part that makes the system production-friendly. LangGraph supports checkpointing through saver implementations; use Redis so each thread or case can resume after interruption.

from langgraph.checkpoint.redis import RedisSaver

# In recent langgraph-checkpoint-redis releases, from_conn_string is a
# context manager; setup() creates the required indices once.
with RedisSaver.from_conn_string(os.environ["REDIS_URL"]) as checkpointer:
    checkpointer.setup()
    app = graph.compile(checkpointer=checkpointer)

If you’re running multiple agents against the same case, use a stable thread_id so the graph can load prior state from Redis.

config = {
    "configurable": {
        "thread_id": "claim-123"
    }
}

result = app.invoke(
    {"claim_text": "Customer reported an accident with rear-end damage."},
    config=config,
)

print(result)
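
If a worker dies mid-run, invoking again with the same `thread_id` picks up from the last checkpoint. A small sketch of keeping those IDs consistent; the `thread_config` helper is an illustrative convention, not a LangGraph API, while `get_state` is the compiled graph's checkpoint inspector:

```python
def thread_config(case_id: str) -> dict:
    """Stable per-case config so reruns resume from the same checkpoint."""
    return {"configurable": {"thread_id": f"claim-{case_id}"}}

# After any run, the persisted state can be inspected:
# snapshot = app.get_state(thread_config("123"))
# print(snapshot.values)  # the last saved InsuranceState
```

Deriving `thread_id` from the case ID means every agent and retry converges on the same Redis checkpoint for that claim.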

  4. Use Redis for multi-agent handoff and task queues

For multi-agent systems, don’t force every agent to call every other agent directly. Use Redis lists or streams as a coordination layer.

import json

# Agent A writes a task for Agent B
redis_client.xadd(
    "insurance:tasks",
    {
        "type": "coverage_check",
        "case_id": "123",
        "payload": json.dumps({"policy_id": "POL-99821"}),
    },
)

# Agent B reads pending tasks from the start of the stream
# ("$" would only return entries added *after* this call and miss the one above)
messages = redis_client.xread({"insurance:tasks": "0"}, block=1000, count=1)
print(messages)

You can map each stream event to a LangGraph node or even run separate graphs per agent.
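
For production handoffs, Redis consumer groups are a better fit than plain `xread`: each task is delivered to exactly one worker, and unacked entries can be reclaimed after a crash. A sketch under the assumption of a `coverage-agents` group and `worker-1` consumer name (both illustrative); `run_worker()` needs a reachable `REDIS_URL`:

```python
import json

def encode_task(task_type: str, case_id: str, payload: dict) -> dict:
    """Flatten a task into the string fields a stream entry accepts."""
    return {"type": task_type, "case_id": case_id, "payload": json.dumps(payload)}

def run_worker():
    import os
    import redis  # requires a running Redis server
    client = redis.Redis.from_url(os.environ["REDIS_URL"], decode_responses=True)
    try:
        # mkstream avoids an error when the stream does not exist yet
        client.xgroup_create("insurance:tasks", "coverage-agents", id="0", mkstream=True)
    except redis.ResponseError:
        pass  # group already exists
    entries = client.xreadgroup(
        "coverage-agents", "worker-1", {"insurance:tasks": ">"}, count=1, block=1000
    )
    for _stream, messages in entries:
        for msg_id, fields in messages:
            print(fields)
            # Ack so the entry is not redelivered to another consumer
            client.xack("insurance:tasks", "coverage-agents", msg_id)
```

Scaling out is then a matter of starting more workers with distinct consumer names in the same group.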

  5. Persist final outcomes back into Redis

Keep the latest decision in Redis so downstream systems like claims portals or adjuster dashboards can consume it without querying the graph runtime.

redis_client.hset(
    "insurance:case:123",
    mapping={
        "status": result["decision"],
        "coverage_result": result["coverage_result"],
        "updated_by": "langgraph-workflow",
    },
)

print(redis_client.hgetall("insurance:case:123"))
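
If dashboards should react immediately rather than poll the hash, you can also publish a change event. A minimal sketch; the `insurance:case-updates` channel name and `case_event` helper are illustrative, and `notify` expects a connected redis-py client:

```python
import json

def case_event(case_id: str, status: str) -> str:
    """JSON event body announcing a case status change."""
    return json.dumps({"case_id": case_id, "status": status})

def notify(client):
    # Downstream systems can SUBSCRIBE to this channel instead of polling
    client.publish("insurance:case-updates", case_event("123", "approve"))
```

Note that pub/sub is fire-and-forget; the hash written above remains the durable source of truth for late subscribers.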

Testing the Integration

Run a smoke test that exercises both checkpointing and Redis persistence.

test_state = {
    "claim_text": "The insured had an accident on I-95 with visible bumper damage."
}

output = app.invoke(test_state, config={"configurable": {"thread_id": "test-claim-001"}})

redis_client.hset(
    "insurance:test-claim-001",
    mapping={
        "decision": output["decision"],
        "coverage_result": output["coverage_result"],
    },
)

print("LangGraph output:", output)
print("Redis record:", redis_client.hgetall("insurance:test-claim-001"))

Expected output:

LangGraph output: {'claim_text': 'The insured had an accident on I-95 with visible bumper damage.', 'extracted_fields': {...}, 'coverage_result': 'covered', 'decision': 'approve'}
Redis record: {'decision': 'approve', 'coverage_result': 'covered'}

Real-World Use Cases

  • Claims triage pipeline

    • One agent extracts claim facts.
    • Another checks policy coverage.
    • A third routes low-risk claims for auto-approval while writing status updates to Redis.
  • Underwriting assistant

    • Agents split work across document review, risk scoring, and missing-info follow-up.
    • Redis stores per-applicant progress so underwriters can resume interrupted reviews.
  • Fraud investigation workflow

    • One graph flags anomalies.
    • Another agent enriches evidence from internal systems.
    • Redis streams coordinate investigator handoffs and keep an audit trail of task events.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
