How to Build a Customer Support Agent Using LangGraph in Python for Retail Banking
A retail banking support agent handles routine customer requests like balance questions, card disputes, fee explanations, branch hours, and escalation routing. It matters because these are high-volume interactions where speed, accuracy, compliance, and auditability directly affect customer trust and operational cost.
Architecture
- User interface layer
  - Chat widget, mobile banking inbox, or authenticated web portal.
  - Passes customer context like locale, account type, and session ID.
- Intent router
  - Classifies requests into support flows such as card issues, payments, account access, or complaints.
  - Keeps the agent from answering everything with one generic prompt.
- Policy and compliance gate
  - Checks whether the request involves regulated actions like disputes, disclosures, or identity verification.
  - Blocks unsupported actions and routes them to a human agent when needed.
- Knowledge retrieval layer
  - Pulls from approved sources: product FAQs, fee schedules, branch policies, and contact center scripts.
  - Avoids free-form answers for anything that must match bank policy exactly.
- Conversation state
  - Stores message history, detected intent, user metadata, and escalation status.
  - LangGraph’s state model is a good fit here because banking conversations are multi-step and conditional.
- Human handoff path
  - Escalates to a live agent with a structured summary.
  - Preserves the audit trail and reduces back-and-forth for the customer.
Implementation
1. Define the graph state and nodes
Use StateGraph to model the conversation as a controlled workflow. In retail banking, this is better than a single chain because you need explicit branches for compliance checks and escalation.
```python
from typing import TypedDict, Annotated

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage, AIMessage


class SupportState(TypedDict):
    messages: Annotated[list, add_messages]
    intent: str
    needs_handoff: bool
    response: str


def classify_intent(state: SupportState):
    """Rule-based stub; replace with a vetted classifier in production."""
    last_msg = state["messages"][-1].content.lower()
    if "card" in last_msg or "dispute" in last_msg:
        return {"intent": "card_support", "needs_handoff": False}
    if "fee" in last_msg or "charge" in last_msg:
        return {"intent": "fees", "needs_handoff": False}
    if "transfer" in last_msg or "wire" in last_msg:
        return {"intent": "payments", "needs_handoff": True}
    return {"intent": "general", "needs_handoff": False}


def answer_general(state: SupportState):
    return {
        "response": (
            "I can help with general account questions. "
            "For anything involving transfers or account changes, "
            "I’ll route you to a specialist."
        ),
        "messages": [AIMessage(content="I can help with general account questions.")],
    }


def handoff_to_human(state: SupportState):
    summary = f"Intent={state['intent']}; latest={state['messages'][-1].content}"
    return {
        "response": f"I’m connecting you to a human agent. Summary: {summary}",
        "messages": [AIMessage(content="I’m connecting you to a human agent.")],
    }
```
2. Add conditional routing with add_conditional_edges
This is where LangGraph earns its keep. You define explicit transitions so regulated flows do not accidentally continue into an LLM response.
```python
def route(state: SupportState):
    # Regulated or unsupported flows never continue into an LLM answer.
    if state["needs_handoff"]:
        return "handoff"
    return "general"


graph = StateGraph(SupportState)
graph.add_node("classify", classify_intent)
graph.add_node("general", answer_general)
graph.add_node("handoff", handoff_to_human)

graph.add_edge(START, "classify")
graph.add_conditional_edges(
    "classify",
    route,
    {
        "general": "general",
        "handoff": "handoff",
    },
)
graph.add_edge("general", END)
graph.add_edge("handoff", END)

app = graph.compile()
```
3. Invoke the graph with customer messages
In production you’ll usually attach authenticated customer context before calling invoke. Keep PII out of logs unless your retention policy explicitly allows it.
```python
result = app.invoke(
    {
        "messages": [
            HumanMessage(content="What is the fee for an international wire transfer?")
        ],
        "intent": "",
        "needs_handoff": False,
        "response": "",
    }
)
print(result["response"])
```
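On the note above about keeping PII out of logs: a minimal redaction pass before anything is written to logs or traces can go a long way. This is a sketch only, and the regex patterns are illustrative placeholders, not a vetted PII policy:

```python
import re

# Illustrative patterns only; a real deployment needs a reviewed PII policy.
ACCOUNT_RE = re.compile(r"\b\d{8,16}\b")          # bare account/card numbers
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")


def redact(text: str) -> str:
    """Mask obvious PII before a message reaches logs or traces."""
    text = ACCOUNT_RE.sub("[REDACTED_NUMBER]", text)
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)


print(redact("My card 4111111111111111 was charged, reply to a@b.com"))
```

Run the redaction in the logging path only, so the graph itself still sees the full message it needs to classify and answer.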
4. Replace the stubbed logic with approved tools
For real retail banking support, wire the graph to approved internal systems only. That usually means:
- FAQ retrieval from a vetted knowledge base
- Case creation in CRM
- Identity verification checks before sensitive actions
- Human escalation when policy requires it
A common pattern is to keep the LLM out of decision-making for restricted actions and use it only for summarization or drafting responses from retrieved policy text.
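That pattern can be sketched in a few lines. Everything here is hypothetical naming, not a LangGraph API: a rule table decides what an action node may do, and the model is only ever asked to draft text afterward.

```python
# Hypothetical sketch: hard rules decide the outcome of restricted
# actions; the model never makes the call.
RESTRICTED_ACTIONS = {"initiate_transfer", "close_account", "file_dispute"}
SENSITIVE_READS = {"show_balance", "show_transactions"}


def policy_gate(action: str, identity_verified: bool) -> str:
    """Return a routing decision using deterministic rules only."""
    if action in RESTRICTED_ACTIONS:
        return "handoff"           # policy: restricted actions go to a human
    if action in SENSITIVE_READS and not identity_verified:
        return "verify_identity"   # never show account data unverified
    return "allow"


print(policy_gate("file_dispute", identity_verified=True))
print(policy_gate("show_balance", identity_verified=False))
```

Only when the gate returns "allow" would the flow continue to a node that lets the LLM summarize retrieved policy text into a reply.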
Production Considerations
- Compliance controls
  - Put hard rules around identity verification before showing balances, transaction details, or case-specific information.
  - Log every decision point for auditability: intent classification, handoff reason, retrieved source IDs.
- Data residency
  - Keep prompts, embeddings, vector stores, and conversation logs inside approved regions.
  - If your bank operates across jurisdictions, separate deployments by region instead of centralizing customer data.
- Monitoring
  - Track handoff rate, containment rate, false routing rate, and policy violations.
  - Add traces for each node execution so compliance teams can reconstruct how an answer was produced.
- Guardrails
  - Block anything that reads as a financial recommendation unless your legal team has signed off.
  - Use deterministic rules for disputes, fraud claims, overdraft complaints, and account closure requests.
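One lightweight way to get per-node audit records is to wrap each node function so every execution logs its name, timestamp, and output before the result flows on. A sketch, with field names as assumptions rather than any bank's audit standard:

```python
import time
from typing import Callable

AUDIT_LOG: list[dict] = []  # stand-in for an append-only audit store


def audited(node_name: str, fn: Callable) -> Callable:
    """Wrap a graph node so each execution leaves an audit record."""
    def wrapper(state: dict) -> dict:
        result = fn(state)
        AUDIT_LOG.append({
            "node": node_name,              # which node ran
            "ts": time.time(),              # when it ran
            "output_keys": sorted(result),  # what it changed in state
            "handoff": result.get("needs_handoff"),
        })
        return result
    return wrapper


# With the graph from earlier this would look like:
#   graph.add_node("classify", audited("classify", classify_intent))
def classify_stub(state: dict) -> dict:
    return {"intent": "fees", "needs_handoff": False}


audited("classify", classify_stub)({"messages": []})
print(AUDIT_LOG[0]["node"], AUDIT_LOG[0]["output_keys"])
```

In production the list would be an append-only store with retention rules, but the shape of the record is the point: every hop is reconstructable.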
Common Pitfalls
- Using one giant prompt for every request
  - This makes it hard to enforce policy boundaries.
  - Fix it by splitting the flow into router → policy gate → answer/handoff nodes.
- Letting the model decide regulated actions
  - A model should not decide whether a transfer can proceed or whether identity checks are sufficient.
  - Fix it with explicit rule-based checks before any action node runs.
- Ignoring audit requirements
  - If you cannot explain why the agent escalated or answered a question a certain way, you will fail internal review fast.
  - Fix it by storing node outputs, timestamps, source document IDs, and handoff summaries per session.
Retail banking support agents work best when they are narrow by design. LangGraph gives you the control plane you need: explicit states, conditional routing, human fallback paths, and enough structure to satisfy compliance without turning every conversation into a brittle workflow engine.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit