How to Integrate LangGraph for insurance with Redis for RAG

By Cyprian Aarons · Updated 2026-04-21
Tags: langgraph-for-insurance · redis · rag

Combining LangGraph with Redis gives you a clean pattern for production insurance RAG agents: LangGraph handles the workflow, branching, and state, while Redis stores fast retrieval data and session context. In an insurance stack, that means you can route claims, policy Q&A, underwriting checks, and document lookup through one controlled graph without turning every request into a one-off prompt chain.

Prerequisites

  • Python 3.10+
  • A running Redis instance
    • Local: redis-server
    • Or managed Redis like AWS ElastiCache / Azure Cache for Redis
  • Installed packages:
    • langgraph
    • langgraph-checkpoint-redis (provides the Redis checkpointer used below)
    • langchain
    • langchain-redis
    • langchain-openai, or another LLM provider integration used by your graph
    • redis
  • An embedding model configured for your RAG pipeline
  • Access to insurance documents:
    • policy PDFs
    • claims manuals
    • underwriting guidelines
    • customer FAQ content

Install the core dependencies:

pip install langgraph langgraph-checkpoint-redis langchain langchain-redis langchain-openai redis

Integration Steps

  1. Connect to Redis and create a vector store

Use Redis as the backing store for embeddings and retrieval. This is the part that turns your insurance corpus into searchable context for the graph.

import os
from redis import Redis
from langchain_redis import RedisVectorStore
from langchain_openai import OpenAIEmbeddings

REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")
INDEX_NAME = "insurance-rag"

client = Redis.from_url(REDIS_URL)
client.ping()  # fail fast if Redis is unreachable or auth is misconfigured

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# langchain-redis takes the embeddings object positionally and accepts
# index settings as keyword arguments.
vectorstore = RedisVectorStore(
    embeddings,
    redis_url=REDIS_URL,
    index_name=INDEX_NAME,
)
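
If you want to filter retrieval by section later (coverage vs. exclusions, say), you can declare the metadata fields when creating the store instead. A sketch assuming langchain-redis's RedisConfig and redisvl filter expressions:

from langchain_redis import RedisConfig, RedisVectorStore
from redisvl.query.filter import Tag

# Index the "section" metadata field as a tag so searches can filter on it.
config = RedisConfig(
    index_name=INDEX_NAME,
    redis_url=REDIS_URL,
    metadata_schema=[{"name": "section", "type": "tag"}],
)
vectorstore = RedisVectorStore(embeddings, config=config)

# Later, restrict a search to coverage clauses only:
hits = vectorstore.similarity_search(
    "water damage limits", k=3, filter=Tag("section") == "coverage"
)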

  2. Load insurance documents into Redis

Chunk the source material before indexing. For insurance use cases, keep chunks aligned to sections like exclusions, coverage limits, or claim requirements.

from langchain_core.documents import Document

docs = [
    Document(
        page_content="Accidental damage is covered up to $5,000 per claim.",
        metadata={"source": "policy_2024.pdf", "section": "coverage"},
    ),
    Document(
        page_content="Claims must be filed within 30 days of the incident.",
        metadata={"source": "claims_manual.pdf", "section": "filing_rules"},
    ),
]

vectorstore.add_documents(docs)
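
In practice you'll load real PDFs rather than hand-write Document objects. A sketch using LangChain's PDF loader and recursive splitter (the file path is illustrative, and pypdf must be installed):

from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Overlapping ~800-character chunks keep clauses intact across boundaries.
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)

pages = PyPDFLoader("policy_2024.pdf").load()  # illustrative path
vectorstore.add_documents(splitter.split_documents(pages))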

  3. Build a LangGraph workflow that retrieves from Redis

LangGraph gives you explicit control over the agent flow. Here we define a retrieval node that queries Redis and passes context into the generation step.

from typing import TypedDict, List
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    context: List[str]
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def retrieve(state: State):
    results = vectorstore.similarity_search(state["question"], k=3)
    return {
        "context": [doc.page_content for doc in results]
    }

def generate(state: State):
    context = "\n".join(state["context"])
    prompt = (
        "Answer the insurance question using only this context:\n"
        f"{context}\n\n"
        f"Question: {state['question']}"
    )
    response = llm.invoke([HumanMessage(content=prompt)])
    return {"answer": response.content}

graph = StateGraph(State)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)

app = graph.compile()
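
You can smoke-test the compiled graph before adding memory; this question targets the sample coverage document indexed in step 2:

result = app.invoke({"question": "What is the per-claim limit for accidental damage?"})
print(result["answer"])  # should cite the $5,000 limit from policy_2024.pdf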

  4. Add conversational memory in Redis

For insurance workflows, session memory matters: a user may ask about coverage first and then follow up with claim steps, and storing state in Redis keeps those interactions consistent across requests. The RedisSaver checkpointer below ships in the separate langgraph-checkpoint-redis package installed earlier.

from langgraph.checkpoint.redis import RedisSaver

# from_conn_string is a context manager; setup() creates the indices
# the checkpointer needs on first use.
with RedisSaver.from_conn_string(REDIS_URL) as cp:
    cp.setup()
    checkpointer = cp

app_with_memory = graph.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "claim-session-123"}}
result = app_with_memory.invoke(
    {"question": "Does my policy cover accidental water damage?"},
    config=config,
)

print(result["answer"])

  5. Expose the graph as a reusable function in your app

At this point you can call the graph from an API route, worker, or agent orchestrator. The important part is that retrieval stays in Redis and orchestration stays in LangGraph.

def answer_insurance_question(question: str, thread_id: str):
    config = {"configurable": {"thread_id": thread_id}}
    output = app_with_memory.invoke({"question": question}, config=config)
    return output["answer"]

print(answer_insurance_question(
    "What is the deadline to file a claim?",
    "claim-session-123"
))
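
From here, exposing the function over HTTP is a thin wrapper. A minimal sketch assuming FastAPI (route and model names are illustrative):

from fastapi import FastAPI
from pydantic import BaseModel

api = FastAPI()

class AskRequest(BaseModel):
    question: str
    thread_id: str

@api.post("/ask")
def ask(req: AskRequest):
    # thread_id keeps per-session memory in Redis across calls
    return {"answer": answer_insurance_question(req.question, req.thread_id)}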

Testing the Integration

Run a basic end-to-end test that confirms retrieval works and that LangGraph reuses checkpointed session state across turns.

test_question = "How long do I have to file a claim after an incident?"
response1 = answer_insurance_question(test_question, "test-thread-001")
response2 = answer_insurance_question("And what if I miss it?", "test-thread-001")

print("First response:", response1)
print("Follow-up response:", response2)

Expected output (exact wording will vary by model):

First response: Claims must be filed within 30 days of the incident.
Follow-up response: Based on the same policy context, missing the deadline may affect claim eligibility.

If you get empty or irrelevant answers:

  • confirm documents were indexed into the same index_name (see the check after this list)
  • verify your embeddings model matches between ingestion and query time
  • check Redis connectivity and auth settings
  • inspect whether your LangGraph checkpointer is using the same thread_id
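
A quick way to check the first two points is to inspect the index directly with redis-py's search commands:

# FT.INFO on the index langchain-redis created; num_docs should match
# the number of chunks you ingested.
info = client.ft(INDEX_NAME).info()
print(info["num_docs"])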

Real-World Use Cases

  • Claims assistant
    • Retrieves policy clauses from Redis and walks users through filing requirements inside a LangGraph flow.
  • Underwriting copilot
    • Pulls risk rules, prior submissions, and product guidelines to support consistent underwriting decisions.
  • Policy Q&A bot
    • Answers coverage questions with traceable source snippets instead of free-form hallucinated responses.
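
All three patterns can live in one graph using conditional edges. A minimal routing sketch, with a placeholder intent check standing in for a real classifier node:

def route(state: State) -> str:
    # Hypothetical router: branch on a naive keyword check
    return "claims" if "claim" in state["question"].lower() else "policy_qa"

flow = StateGraph(State)
flow.add_node("claims", retrieve)      # each branch would get its own nodes
flow.add_node("policy_qa", retrieve)
flow.add_conditional_edges(START, route)
flow.add_edge("claims", END)
flow.add_edge("policy_qa", END)
router_app = flow.compile()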

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
