How to Build a Customer Support Agent for Insurance Using LangGraph in Python
A customer support agent for insurance handles policy questions, claim status checks, document requests, and handoff to a human when the request gets sensitive or ambiguous. It matters because insurance support sits inside a compliance-heavy workflow: you need traceability, controlled responses, and a clean boundary between automated help and regulated advice.
## Architecture

- **User intake node**
  - Normalizes the customer message.
  - Captures metadata such as policy number, region, and channel.
- **Intent router**
  - Classifies the request into categories such as `policy_info`, `claim_status`, `billing`, `document_request`, or `human_handoff`.
  - Keeps the flow deterministic before any external calls happen.
- **Insurance tools layer**
  - Calls approved internal systems only: policy lookup, the claims system, document generation, and CRM notes.
  - Never lets the model invent policy details.
- **Response composer**
  - Turns structured data into a customer-facing answer.
  - Applies compliance language and avoids unsupported advice.
- **Escalation node**
  - Routes to a human agent when confidence is low, the customer disputes coverage, or legal/regulatory language appears.
- **Audit and state store**
  - Persists every decision path, tool call, and final response.
  - Needed for QA, compliance review, and incident investigation.
## Implementation

### 1) Define the graph state and supporting types

Use a typed state so your workflow is explicit. For insurance support, keep the raw message, detected intent, tool results, and final answer in state.
```python
from typing import TypedDict, Literal, Optional, Dict, Any

from langgraph.graph import StateGraph, START, END


class SupportState(TypedDict):
    user_message: str
    policy_number: Optional[str]
    intent: Optional[
        Literal["policy_info", "claim_status", "billing", "document_request", "human_handoff"]
    ]
    tool_result: Optional[Dict[str, Any]]
    response: Optional[str]
    escalate: bool


def detect_intent(message: str) -> str:
    """Keyword-based intent classifier; checks run in priority order."""
    msg = message.lower()
    if "claim" in msg:
        return "claim_status"
    if "bill" in msg or "premium" in msg:
        return "billing"
    if "document" in msg or "proof of insurance" in msg:
        return "document_request"
    if "coverage" in msg or "policy" in msg:
        return "policy_info"
    return "human_handoff"
```
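Keyword routing like this is order-sensitive: because `"claim"` is checked first, a message that mentions both a claim and a policy still routes to `claim_status`. A quick standalone check (repeating `detect_intent` so the snippet runs on its own):

```python
def detect_intent(message: str) -> str:
    # Same keyword router as above; checks run in priority order.
    msg = message.lower()
    if "claim" in msg:
        return "claim_status"
    if "bill" in msg or "premium" in msg:
        return "billing"
    if "document" in msg or "proof of insurance" in msg:
        return "document_request"
    if "coverage" in msg or "policy" in msg:
        return "policy_info"
    return "human_handoff"


# "claim" wins over "policy" because it is checked first.
print(detect_intent("Is my claim covered under my policy?"))  # claim_status
print(detect_intent("I need proof of insurance"))             # document_request
print(detect_intent("Cancel everything"))                     # human_handoff
```

If two categories share keywords, order the checks from most specific to least specific, and keep `human_handoff` as the default so unrecognized requests always reach a person.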
### 2) Add nodes for routing and tool execution

Keep tool access behind explicit nodes. In insurance workflows, this is where you connect approved systems only.
```python
def route_intent(state: SupportState) -> dict:
    """Classify the message; returns a partial state update."""
    intent = detect_intent(state["user_message"])
    return {"intent": intent, "escalate": intent == "human_handoff"}


def lookup_claim(state: SupportState) -> dict:
    """Stub for the approved claims system; replace with a real client call."""
    # `or` handles the case where policy_number is present but None.
    claim_id = state.get("policy_number") or "UNKNOWN"
    result = {
        "claim_id": claim_id,
        "status": "Under Review",
        "next_step": "Adjuster will update within 2 business days",
    }
    return {"tool_result": result}


def compose_response(state: SupportState) -> dict:
    """Turn structured tool output into a customer-facing answer."""
    intent = state["intent"]
    tool_result = state.get("tool_result") or {}
    if intent == "claim_status":
        response = (
            f"Your claim {tool_result.get('claim_id', '')} is currently "
            f"{tool_result.get('status', 'not available')}. "
            f"{tool_result.get('next_step', '')}"
        )
    elif intent == "billing":
        response = (
            "I can help with billing questions. For payment disputes or premium changes "
            "that affect coverage terms, I'll connect you to a licensed specialist."
        )
    elif intent == "document_request":
        response = (
            "I can generate your requested insurance document "
            "after verifying your policy details."
        )
    else:
        response = "I'm transferring this to a human agent so they can review your request safely."
    return {"response": response}
```
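In production, the stubbed `lookup_claim` would wrap a real claims-system client. One defensive pattern, sketched here with a hypothetical `ClaimsClient` interface (not part of LangGraph), is to catch tool failures and flip the `escalate` flag rather than let the model guess at claim data:

```python
from typing import Any, Dict


class ClaimsClientError(Exception):
    """Raised when the approved claims system cannot answer."""


def safe_lookup_claim(state: Dict[str, Any], client) -> Dict[str, Any]:
    # `client` is any object with a .get_claim(claim_id) method (hypothetical interface).
    try:
        claim = client.get_claim(state.get("policy_number", ""))
        return {"tool_result": claim, "escalate": False}
    except ClaimsClientError:
        # Never fabricate claim data; hand off to a human instead.
        return {"tool_result": None, "escalate": True}


class FakeClient:
    """Test double standing in for the real claims system."""

    def get_claim(self, claim_id: str) -> Dict[str, Any]:
        if not claim_id:
            raise ClaimsClientError("no claim id")
        return {"claim_id": claim_id, "status": "Under Review"}


print(safe_lookup_claim({"policy_number": "CLM-10492"}, FakeClient()))
print(safe_lookup_claim({}, FakeClient()))  # no policy number -> escalate
```

The key property is that every failure path produces a state the graph can route on, instead of an exception that leaves the customer with no answer.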
### 3) Build the LangGraph workflow with conditional routing

This is the part most teams get wrong: use graph edges for control flow instead of burying routing inside one giant prompt. Explicit edges make every decision path visible and auditable.
```python
def should_escalate(state: SupportState) -> str:
    """Return the name of the next routing branch."""
    if state.get("escalate"):
        return "escalate"
    if state["intent"] == "claim_status":
        return "lookup_claim"
    return "compose_response"


def escalate_to_human(state: SupportState) -> dict:
    return {
        "response": (
            "I'm connecting you to a human agent. "
            "Please stay on the line while we review your account."
        )
    }


graph = StateGraph(SupportState)
graph.add_node("route_intent", route_intent)
graph.add_node("lookup_claim", lookup_claim)
graph.add_node("compose_response", compose_response)
graph.add_node("escalate_to_human", escalate_to_human)

graph.add_edge(START, "route_intent")
graph.add_conditional_edges(
    "route_intent",
    should_escalate,
    {
        "lookup_claim": "lookup_claim",
        "compose_response": "compose_response",
        "escalate": "escalate_to_human",
    },
)
graph.add_edge("lookup_claim", "compose_response")
graph.add_edge("compose_response", END)
graph.add_edge("escalate_to_human", END)

app = graph.compile()
```
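A side benefit of keeping `should_escalate` as a plain function is that the routing logic can be unit-tested without compiling the graph at all. Repeating the function over a plain dict so this check runs standalone:

```python
def should_escalate(state: dict) -> str:
    # Same routing logic as above, exercised over a plain dict.
    if state.get("escalate"):
        return "escalate"
    if state["intent"] == "claim_status":
        return "lookup_claim"
    return "compose_response"


# Escalation takes priority over any detected intent.
assert should_escalate({"escalate": True, "intent": "claim_status"}) == "escalate"
# Claim questions go through the claims tool first.
assert should_escalate({"escalate": False, "intent": "claim_status"}) == "lookup_claim"
# Everything else composes a reply directly.
assert should_escalate({"escalate": False, "intent": "billing"}) == "compose_response"
print("routing checks passed")
```

Tests like these catch routing regressions before they reach a compiled graph, which matters when routing decisions have compliance consequences.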
### 4) Invoke it with real input and inspect the output

You want deterministic execution that can be logged per request. In production you would attach request IDs and persist the full state-transition history.
```python
result = app.invoke(
    {
        "user_message": "Can you check my claim status?",
        "policy_number": "#CLM-10492",
        "intent": None,
        "tool_result": None,
        "response": None,
        "escalate": False,
    }
)
print(result["response"])
```
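A minimal sketch of that per-request logging, using only the standard library (the `audit_record` name and field layout are illustrative, not a LangGraph API):

```python
import json
import uuid
from datetime import datetime, timezone


def audit_record(state: dict) -> str:
    """Serialize one request's final state as a JSON line with a request ID and timestamp."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "intent": state.get("intent"),
        "escalated": state.get("escalate", False),
        "response": state.get("response"),
    }
    return json.dumps(record)


# Append one line per invocation to your log sink of choice.
line = audit_record({"intent": "claim_status", "escalate": False, "response": "Your claim ..."})
print(line)
```

In a real deployment you would also capture each intermediate node's input and output, not just the final state, so the full decision path can be replayed during a compliance review.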
## Production Considerations

- **Compliance controls**
  - Add policy-aware guardrails so the agent never gives legal advice or coverage determinations beyond approved templates.
  - If a user asks whether something is covered under their policy terms, route to human review unless the answer comes from an approved retrieval source.
- **Audit logging**
  - Store every input message, detected intent, tool call payload, tool result, and final response.
  - Keep immutable logs for regulatory review and dispute resolution.
- **Data residency**
  - Insurance data often has jurisdictional constraints.
  - Keep PII and claims data in-region; do not send sensitive fields to external services unless your architecture explicitly allows it.
- **Monitoring**
  - Track escalation rate, hallucination rate on policy answers, average resolution time, and tool failure rate.
  - Alert on spikes in `human_handoff`, because that often signals classifier drift or missing knowledge-base coverage.
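One way to make audit logs tamper-evident, sketched with the standard library only: chain each entry's hash to the previous entry's hash, so editing any earlier record invalidates everything after it. (A real system would persist this to append-only storage; the in-memory list here is just for illustration.)

```python
import hashlib
import json


def append_entry(log: list, payload: dict) -> dict:
    """Append an entry whose hash covers the previous entry's hash, making edits detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(payload, sort_keys=True)
    entry = {
        "payload": payload,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    }
    log.append(entry)
    return entry


def verify(log: list) -> bool:
    """Recompute the chain; any modified payload or broken link returns False."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["payload"], sort_keys=True)
        if entry["prev_hash"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True


log = []
append_entry(log, {"intent": "claim_status", "tool": "lookup_claim"})
append_entry(log, {"intent": "billing"})
print(verify(log))  # True

# Tampering with an earlier payload breaks verification.
log[0]["payload"]["intent"] = "policy_info"
print(verify(log))  # False
```

This does not replace proper write-once storage, but it gives reviewers a cheap integrity check over exported logs.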
## Common Pitfalls

- **Letting the model answer coverage questions from memory**
  - Avoid this by forcing coverage lookups through retrieval or rules-based systems.
  - If there is no approved source answer, escalate.
- **Mixing customer chat logic with claims back-office logic**
  - Keep support routing separate from claims adjudication.
  - The support agent should explain status and collect context; it should not decide claim outcomes.
- **Skipping structured state**
  - If you pass free-form strings between nodes, debugging becomes painful fast.
  - Use typed state fields for `intent`, `tool_result`, and `response` so every transition is visible.
- **No human fallback path**
  - Insurance customers will ask about denials, exclusions, cancellations, and complaints.
  - Those must have a direct escalation path with clear handoff context attached.
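The first pitfall can be enforced mechanically: answer coverage questions only from an approved source and escalate otherwise. In this sketch, `APPROVED_ANSWERS` is an illustrative stand-in for your real retrieval system or rules engine:

```python
from typing import Optional, Tuple

# Stand-in for an approved retrieval source (policy documents, rules engine).
APPROVED_ANSWERS = {
    "water damage": "Water damage from burst pipes is covered under section 4.2.",
}


def answer_coverage_question(topic: str) -> Tuple[Optional[str], bool]:
    """Return (answer, escalate). Escalate whenever no approved answer exists."""
    answer = APPROVED_ANSWERS.get(topic.lower())
    if answer is None:
        return None, True  # no approved source -> human review
    return answer, False


print(answer_coverage_question("Water Damage"))  # approved answer, no escalation
print(answer_coverage_question("earthquake"))    # (None, True) -> escalate
```

The model never fills the gap: a missing entry in the approved source always produces an escalation, not a generated answer.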
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap? Grab the free AI Agent Starter Kit: architecture templates, compliance checklists, and a 7-email deep-dive course.