How to Build a Customer Support Agent Using LangGraph in Python for Pension Funds
A customer support agent for pension funds answers member questions, routes sensitive requests, and escalates anything that touches regulated advice or account changes. It matters because pension support is not generic chat support: you need compliance, auditability, data residency controls, and a clear line between informational guidance and financial advice.
Architecture
- Input layer
  - Accepts chat messages from web, mobile, or internal advisor tools.
  - Normalizes metadata like member ID, jurisdiction, language, and consent flags.
- Policy gate
  - Classifies whether the request is safe to answer directly.
  - Blocks regulated advice, withdrawals, transfers, beneficiary changes, and identity-sensitive actions.
- Knowledge retrieval
  - Pulls from approved pension plan documents, FAQs, contribution rules, vesting schedules, and service SLAs.
  - Uses only curated sources with versioning for audit.
- Support workflow graph
  - Orchestrates steps like classify → retrieve → answer → escalate.
  - Keeps state explicit so every decision can be logged and replayed.
- Human handoff
  - Escalates to a case queue when confidence is low or the request is restricted.
  - Passes structured context so an agent can continue without re-asking basic questions.
- Audit logging
  - Stores prompts, tool calls, policy decisions, retrieved document IDs, and final responses.
  - Required for dispute handling and compliance review.
Implementation
1. Define the state and the routing logic
For pension funds, the state should carry more than chat history. You need jurisdiction, risk flags, and an audit trail so every response can be explained later.
```python
from typing import TypedDict, Annotated

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage, AIMessage


class SupportState(TypedDict):
    messages: Annotated[list, add_messages]
    jurisdiction: str
    requires_handoff: bool
    audit_log: list[str]
    retrieved_docs: list[str]


def classify_request(state: SupportState) -> str:
    text = state["messages"][-1].content.lower()
    restricted_terms = [
        "withdraw", "transfer", "beneficiary",
        "change my bank", "investment advice",
    ]
    if any(term in text for term in restricted_terms):
        return "handoff"
    if "contribution" in text or "vesting" in text or "retirement age" in text:
        return "retrieve"
    return "answer"
```
This pattern keeps policy decisions deterministic. In regulated support flows, you want a simple classifier that is easy to test and version.
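Because the router is a plain function with no model call, you can unit test it directly. Here is a minimal sketch, assuming pytest and a small hypothetical helper (`make_state`) that builds a one-message state:

```python
import pytest
from langchain_core.messages import HumanMessage


# Hypothetical helper: builds the minimal state the router needs.
def make_state(text: str) -> SupportState:
    return {
        "messages": [HumanMessage(content=text)],
        "jurisdiction": "UK",
        "requires_handoff": False,
        "audit_log": [],
        "retrieved_docs": [],
    }


@pytest.mark.parametrize("text,route", [
    ("I want to withdraw my pension", "handoff"),
    ("Can I change my beneficiary?", "handoff"),
    ("What is the vesting schedule?", "retrieve"),
    ("Hello, what can you help with?", "answer"),
])
def test_classify_request(text, route):
    assert classify_request(make_state(text)) == route
```

Every new restricted term should land in this test file before it ships, which gives you the versioned, reviewable policy history regulators expect.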
2. Add retrieval and response nodes
Use retrieval only against approved pension content. If your source of truth lives in a vector store or document index, keep it behind a tool boundary so you can log exactly what was used; a sketch of such a wrapper follows the node definitions below.
```python
def retrieve_docs(state: SupportState) -> dict:
    query = state["messages"][-1].content.lower()
    # Replace this with your approved retriever
    docs = []
    if "vesting" in query:
        docs = ["Vesting schedule: members vest after 3 years of service."]
    elif "contribution" in query:
        docs = ["Employee contributions are capped at plan-defined limits."]
    # Return a partial update rather than mutating the state in place.
    return {
        "retrieved_docs": docs,
        "audit_log": state["audit_log"] + [f"retrieved_docs={docs}"],
    }


def answer_member(state: SupportState) -> dict:
    docs = state.get("retrieved_docs", [])
    if not docs:
        reply = (
            "I can help with general plan information. "
            "For account-specific actions or regulated guidance, "
            "I'll connect you to a human agent."
        )
    else:
        reply = f"Based on the plan documents: {docs[0]}"
    # The add_messages reducer appends the new reply to the history.
    return {
        "messages": [AIMessage(content=reply)],
        "audit_log": state["audit_log"] + ["answered_from_approved_sources"],
    }


def handoff_to_human(state: SupportState) -> dict:
    reply = (
        "This request needs human review because it may involve account changes "
        "or regulated guidance. A support specialist will follow up."
    )
    return {
        "requires_handoff": True,
        "messages": [AIMessage(content=reply)],
        "audit_log": state["audit_log"] + ["handoff_triggered"],
    }
```
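The keyword matching above is a stand-in. A production wrapper would query the approved index and log document identity, not full text. A minimal sketch, assuming a LangChain-style retriever (`approved_retriever`, built elsewhere over the approved corpus) whose documents carry hypothetical `source_id` and `version` metadata fields:

```python
def retrieve_docs_prod(state: SupportState) -> dict:
    """Query the approved index and log exactly which versions were used."""
    query = state["messages"][-1].content
    # `approved_retriever` is an assumed LangChain retriever built over
    # the approved, versioned pension plan corpus.
    results = approved_retriever.invoke(query)
    docs = [d.page_content for d in results]
    # Log document IDs and versions, not full text, for the audit trail.
    doc_refs = [
        f"{d.metadata.get('source_id')}@{d.metadata.get('version')}"
        for d in results
    ]
    return {
        "retrieved_docs": docs,
        "audit_log": state["audit_log"] + [f"retrieved={doc_refs}"],
    }
```

Keeping the retriever behind this one function means the audit log always reflects what the member actually saw, and swapping vector stores later touches a single node.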
3. Wire the graph with StateGraph, add_conditional_edges, and compile
This is the actual LangGraph pattern you want in production. The graph keeps policy checks explicit instead of hiding them inside one big prompt.
```python
graph = StateGraph(SupportState)
graph.add_node("retrieve", retrieve_docs)
graph.add_node("answer", answer_member)
graph.add_node("handoff", handoff_to_human)

graph.add_conditional_edges(
    START,
    classify_request,
    {
        "retrieve": "retrieve",
        "answer": "answer",
        "handoff": "handoff",
    },
)
graph.add_edge("retrieve", "answer")
graph.add_edge("answer", END)
graph.add_edge("handoff", END)

app = graph.compile()

initial_state: SupportState = {
    "messages": [HumanMessage(content="What is my vesting schedule?")],
    "jurisdiction": "UK",
    "requires_handoff": False,
    "audit_log": [],
    "retrieved_docs": [],
}

result = app.invoke(initial_state)
print(result["messages"][-1].content)
print(result["audit_log"])
That gives you a reproducible workflow with clear branching. In pension support, reproducibility matters because compliance teams will ask why one user got an automated answer and another got escalated.
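To see the other branch, send a restricted request through the same compiled graph. The route, the refusal, and the audit entries are all deterministic:

```python
restricted_state: SupportState = {
    "messages": [HumanMessage(content="I want to withdraw my pension early")],
    "jurisdiction": "UK",
    "requires_handoff": False,
    "audit_log": [],
    "retrieved_docs": [],
}

result = app.invoke(restricted_state)
print(result["requires_handoff"])      # True
print(result["audit_log"])             # ['handoff_triggered']
print(result["messages"][-1].content)  # the human-review message
```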
4. Add checkpoints for traceability
If you need session persistence across multiple turns, use a checkpointer when compiling the graph. That lets you resume conversations without losing compliance context.
```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app_with_memory = graph.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "member-123"}}
app_with_memory.invoke(initial_state, config=config)
```
In production you would replace MemorySaver with a durable store. For pension funds, that usually means encrypted storage in the required region with retention policies aligned to your regulatory obligations.
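One option is the Postgres checkpointer from the langgraph-checkpoint-postgres package. A minimal sketch, assuming a Postgres instance in your approved region (the connection string below is a placeholder):

```python
from langgraph.checkpoint.postgres import PostgresSaver

# Placeholder DSN; point this at encrypted, in-region storage.
DB_URI = "postgresql://user:pass@db.internal:5432/support?sslmode=require"

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # creates the checkpoint tables on first run
    durable_app = graph.compile(checkpointer=checkpointer)
    durable_app.invoke(
        initial_state,
        config={"configurable": {"thread_id": "member-123"}},
    )
```

Retention policies then become database policies, which is usually easier to evidence in a compliance review than application-level cleanup jobs.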
Production Considerations
- Enforce data residency
  - Keep member data and conversation logs in-region if your fund operates under UK/EU/other residency constraints.
  - Do not send personal data to external services unless contracts and controls are already approved.
- Log every policy decision
  - Store route decisions from classify_request, retrieved document IDs, escalation reasons, and final outputs.
  - Make logs tamper-evident so audit teams can reconstruct the full path of a response (see the sketch after this list).
- Put hard guardrails around advice
  - The agent should explain plan rules but never recommend investment allocation changes or withdrawal timing.
  - When the user asks for anything interpretive or account-specific, force human handoff.
- Monitor drift by intent class
  - Track how often requests fall into “answer,” “retrieve,” or “handoff.”
  - A spike in handoffs usually means your policy rules are too strict or your retrieval coverage is weak.
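One common way to make logs tamper-evident is a hash chain: each entry embeds a hash of the previous entry, so editing any record breaks every hash after it. A minimal sketch in plain Python (hypothetical helpers, not a library API; payloads must be JSON-serializable):

```python
import hashlib
import json
from datetime import datetime, timezone


def append_audit_entry(chain: list[dict], event: str, payload: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "payload": payload,
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

In practice you would anchor the latest hash somewhere the application cannot rewrite (a WORM bucket, a signing service), but the chaining idea is the core of it.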
Common Pitfalls
- Treating pension support like generic chatbot support
  - Mistake: letting the model answer everything from one prompt.
  - Fix: separate classification, retrieval, answering, and escalation into distinct LangGraph nodes.
- Skipping audit metadata
  - Mistake: storing only the final answer.
  - Fix: persist jurisdiction, retrieved source IDs, route decisions, timestamps, and handoff reasons.
- Using broad retrieval over unapproved documents
  - Mistake: indexing emails, old PDFs, or internal notes that were never meant for member-facing answers.
  - Fix: restrict retrieval to approved plan documents with version control and legal sign-off before indexing.
- Ignoring regional compliance rules
  - Mistake: deploying one global agent with no locality checks.
  - Fix: branch by jurisdiction early in the graph so UK pension rules do not mix with EU or US retirement plan logic (see the sketch after this list).
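A minimal sketch of that jurisdiction branch, reusing the state and nodes from earlier (the per-region answer nodes are hypothetical; here answer_member stands in for both):

```python
def route_by_jurisdiction(state: SupportState) -> str:
    # Branch before any answering logic so regional rules never mix.
    return "uk" if state["jurisdiction"] == "UK" else "eu"


regional_graph = StateGraph(SupportState)
regional_graph.add_node("answer_uk", answer_member)  # swap in a UK-specific node
regional_graph.add_node("answer_eu", answer_member)  # swap in an EU-specific node
regional_graph.add_conditional_edges(
    START,
    route_by_jurisdiction,
    {"uk": "answer_uk", "eu": "answer_eu"},
)
regional_graph.add_edge("answer_uk", END)
regional_graph.add_edge("answer_eu", END)
regional_app = regional_graph.compile()
```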
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.