How to Integrate LangGraph with Redis for RAG in a Pension Fund Assistant
LangGraph gives you the orchestration layer for multi-step agent workflows. Redis gives you low-latency retrieval for embeddings, chat state, and document chunks, which is exactly what you want when a pension assistant needs to answer policy, benefits, and compliance questions with grounded context.
Prerequisites
- Python 3.10+
- A LangGraph project installed and configured
- Redis 7+ running locally or via Redis Cloud
- An embeddings model available through your stack
- Access to pension fund documents in text/PDF form
Install the packages:

pip install langgraph langchain langchain-openai langchain-redis redis

Swap langchain-openai for your embedding provider's package if you use a different one.
Set your environment variables:
export OPENAI_API_KEY="your-key"
export REDIS_URL="redis://localhost:6379"
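Before going further, it helps to confirm Redis is actually reachable at that URL. A stdlib-only sketch (no redis-py required) that just checks the TCP endpoint:

```python
import socket
from urllib.parse import urlparse

def redis_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the Redis host/port succeeds."""
    parsed = urlparse(url)
    host = parsed.hostname or "localhost"
    port = parsed.port or 6379
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(redis_reachable("redis://localhost:6379"))
```

Alternatively, `redis-cli ping` from a shell should answer PONG.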
Integration Steps
1) Connect to Redis and prepare your vector store
Start by creating a Redis client and a vector store for pension fund documents. This is where your chunked policy docs, member handbooks, and benefit rules will live.
import os
from redis import Redis
from langchain_openai import OpenAIEmbeddings
from langchain_redis import RedisVectorStore
redis_url = os.environ["REDIS_URL"]
redis_client = Redis.from_url(redis_url)
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector_store = RedisVectorStore(
    embeddings,
    index_name="pension_docs",
    redis_url=redis_url,
)
If you already have document chunks, add them now:
from langchain_core.documents import Document
docs = [
    Document(
        page_content="Retirement benefits are available at age 60 with 10 years of service.",
        metadata={"source": "benefits_policy.pdf", "section": "retirement"},
    ),
    Document(
        page_content="Early withdrawal penalties apply unless the member qualifies under hardship rules.",
        metadata={"source": "member_handbook.pdf", "section": "withdrawals"},
    ),
]
vector_store.add_documents(docs)
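Real policy documents arrive as long PDFs or text files, not tidy chunks. Before indexing, split them into overlapping windows; LangChain's RecursiveCharacterTextSplitter is the usual tool, but the core idea fits in a few lines of plain Python (the chunk sizes below are illustrative assumptions):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Split text into overlapping character chunks for embedding.

    A simplified stand-in for a recursive text splitter: fixed-size
    windows with overlap, so a policy clause that straddles a chunk
    boundary still appears intact in at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

policy_text = "Retirement benefits are available at age 60 with 10 years of service. " * 30
chunks = chunk_text(policy_text, chunk_size=400, overlap=60)
print(len(chunks), "chunks")
```

Each chunk then becomes one `Document` with source metadata before `add_documents`.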
2) Build a retriever for RAG
LangGraph should not fetch raw data directly from your source system on every turn. Use Redis as the retrieval layer so the graph can pull only the top matches.
retriever = vector_store.as_retriever(search_kwargs={"k": 3})
A direct similarity search is useful for debugging too:
results = vector_store.similarity_search("When can a member retire?", k=2)
for doc in results:
    print(doc.page_content)
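For intuition on what similarity search is doing: Redis ranks stored vectors by distance to the query embedding, typically cosine similarity. A toy stdlib sketch with made-up 3-dimensional "embeddings" (real ones have on the order of 1536 dimensions):

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy index: two chunks with fake embeddings.
index = {
    "retirement chunk": [0.9, 0.1, 0.0],
    "withdrawal chunk": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.05]  # fake embedding of "When can a member retire?"

ranked = sorted(index, key=lambda k: cosine(index[k], query), reverse=True)
print(ranked)  # retirement chunk ranks first
```

The `k` in `search_kwargs` just caps how many of these ranked chunks come back.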
3) Define the LangGraph state and nodes
Use LangGraph to manage the flow: receive question, retrieve context from Redis, generate answer, return result. For this pattern, StateGraph is enough.
from typing import TypedDict, List
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
class GraphState(TypedDict):
    question: str
    context: List[str]
    answer: str
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
Create the retrieval node:
def retrieve_context(state: GraphState):
    docs = retriever.invoke(state["question"])
    return {
        "context": [doc.page_content for doc in docs]
    }
Create the generation node:
def generate_answer(state: GraphState):
    prompt = f"""
You are a pension fund assistant.
Use only the context below to answer the question.
Question: {state['question']}
Context:
{chr(10).join(state['context'])}
Answer clearly and cite policy language where relevant.
"""
    response = llm.invoke(prompt)
    return {"answer": response.content}
4) Wire the graph together
Now connect both nodes with LangGraph’s add_node, add_edge, set_entry_point, and compile methods.
workflow = StateGraph(GraphState)
workflow.add_node("retrieve_context", retrieve_context)
workflow.add_node("generate_answer", generate_answer)
workflow.set_entry_point("retrieve_context")
workflow.add_edge("retrieve_context", "generate_answer")
workflow.add_edge("generate_answer", END)
app = workflow.compile()
Run it with a real question:
result = app.invoke({"question": "At what age can a member retire?"})
print(result["answer"])
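Under the hood, this linear graph just runs each node in order and merges its partial state update into the shared state. A stdlib sketch of that merge semantics, with stub nodes standing in for the real ones (no LLM or Redis needed):

```python
from typing import Callable, Dict, List

def run_linear_graph(nodes: List[Callable[[dict], dict]], state: dict) -> dict:
    """Mimic the linear StateGraph above: each node returns a partial
    state update that is merged into the shared state dict."""
    state = dict(state)
    for node in nodes:
        state.update(node(state))
    return state

# Stub nodes mirroring retrieve_context and generate_answer.
def retrieve_stub(state: dict) -> dict:
    return {"context": ["Retirement benefits are available at age 60."]}

def generate_stub(state: dict) -> dict:
    return {"answer": f"Based on policy: {state['context'][0]}"}

result = run_linear_graph(
    [retrieve_stub, generate_stub],
    {"question": "When can a member retire?"},
)
print(result["answer"])
```

Seeing it stripped down like this makes it clearer why each node only returns the keys it changed: LangGraph merges partial updates, it does not replace the whole state.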
5) Add session memory in Redis for repeat users
For pension assistants, conversation history matters. Store per-user chat state in Redis so members do not lose context between turns.
import json
def save_turn(session_id: str, question: str, answer: str):
    key = f"pension_chat:{session_id}"
    payload = {"question": question, "answer": answer}
    redis_client.rpush(key, json.dumps(payload))
You can call this after each graph execution:
session_id = "member_123"
question = "What happens if I leave before retirement?"
result = app.invoke({"question": question})
save_turn(session_id, question, result["answer"])
print(result["answer"])
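To close the loop, the assistant also needs to read history back, and stale sessions should eventually expire. A sketch of the read side with a session TTL; the client is passed as a parameter here (and exercised with a tiny in-memory stub) so the snippet is self-contained — in production you would pass the redis_client from step 1. The 24-hour TTL is an assumption, not a policy requirement.

```python
import json

SESSION_TTL_SECONDS = 60 * 60 * 24  # assumed retention: 24 hours

def save_turn(client, session_id: str, question: str, answer: str) -> None:
    """Append one Q/A turn and refresh the session's expiry."""
    key = f"pension_chat:{session_id}"
    client.rpush(key, json.dumps({"question": question, "answer": answer}))
    client.expire(key, SESSION_TTL_SECONDS)

def load_history(client, session_id: str, last_n: int = 10) -> list:
    """Return up to the last_n turns for a session, oldest first."""
    key = f"pension_chat:{session_id}"
    return [json.loads(item) for item in client.lrange(key, -last_n, -1)]

class FakeRedis:
    """Tiny in-memory stand-in for the redis-py list commands used above."""
    def __init__(self):
        self.lists, self.ttls = {}, {}
    def rpush(self, key, value):
        self.lists.setdefault(key, []).append(value)
    def expire(self, key, seconds):
        self.ttls[key] = seconds
    def lrange(self, key, start, stop):
        items = self.lists.get(key, [])
        stop = len(items) if stop == -1 else stop + 1
        return items[start:stop]

client = FakeRedis()
save_turn(client, "member_123", "When can I retire?", "At age 60 with 10 years of service.")
print(load_history(client, "member_123"))
```

Prepending the loaded history to the prompt in generate_answer is what turns the graph from single-shot Q&A into a conversation.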
Testing the Integration
Use one known policy statement and verify that retrieval pulls it from Redis before generation.
test_question = "What is the retirement age?"
result = app.invoke({"question": test_question})
print("ANSWER:")
print(result["answer"])
print("\nRETRIEVED DOCS:")
for doc in retriever.invoke(test_question):
    print("-", doc.page_content)
Expected output:
ANSWER:
Members are eligible for retirement at age 60 with 10 years of service...
RETRIEVED DOCS:
- Retirement benefits are available at age 60 with 10 years of service.
If you get an answer without relevant context, check these first:
- The document was actually indexed into Redis
- Your embedding model matches between indexing and querying
- The retriever's k value is high enough to surface the right chunk
- Your prompt restricts the model to retrieved context only
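On the embedding-model point: a cheap guard is to compare vector dimensionality before querying, because different models usually emit different sizes and a mismatch silently degrades retrieval. A stdlib sketch with stubbed vectors (1536 is text-embedding-3-small's output size):

```python
def check_dims(index_dim: int, query_vec: list) -> None:
    """Raise early if a query vector cannot match the indexed vectors.

    If index-time and query-time embeddings come from different models
    (e.g. 1536 dims vs 768), similarity scores are meaningless and the
    right chunks never surface.
    """
    if len(query_vec) != index_dim:
        raise ValueError(
            f"query embedding has {len(query_vec)} dims, index expects {index_dim}"
        )

check_dims(1536, [0.0] * 1536)  # same model: passes silently
try:
    check_dims(1536, [0.0] * 768)  # mismatched model
except ValueError as e:
    print("mismatch caught:", e)
```

Running this kind of check once at startup is cheaper than debugging empty retrieval results in production.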
Real-World Use Cases
- Pension member self-service agent that answers retirement eligibility, contribution rules, and withdrawal conditions from approved policy docs.
- Internal compliance assistant that helps ops teams trace policy wording across handbook versions stored in Redis.
- Claims or benefits triage bot that uses LangGraph to route questions through retrieval, validation, and escalation steps before responding.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit