How to Build a Banking Customer Support Agent with LangGraph in Python
A banking customer support agent handles routine questions like balance explanations, card status, fee disputes, branch hours, and password reset flows without forcing a human to read every ticket. The point is not just deflection; it is consistent policy enforcement, faster resolution times, and an auditable path for every answer the agent gives.
Architecture
- Ingress layer
  - Accepts chat or ticket text from web, mobile, or CRM.
  - Normalizes user identity, locale, and channel metadata.
- Intent router
  - Classifies the request into support categories such as card issue, payment dispute, account access, or general FAQ.
  - Decides whether the graph should answer directly or escalate.
- Policy and compliance gate
  - Blocks sensitive actions unless the user is authenticated and authorized.
  - Enforces rules around KYC data, account details, and regulated advice.
- Tool layer
  - Wraps banking systems: account lookup, card status, ticket creation, knowledge base search.
  - Keeps external calls explicit so every action is traceable.
- Conversation state
  - Stores the minimal state needed across steps: intent, user tier, risk flags, retrieved facts.
  - Avoids dumping raw PII into memory.
- Human handoff node
  - Escalates high-risk or ambiguous cases to a live agent with full context.
  - Preserves the audit trail and the reason for escalation.
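The policy and compliance gate works best as a plain, deterministic function that runs before any tool call, because its decisions can then be unit-tested and logged verbatim. A minimal sketch; the auth levels and action names below are illustrative assumptions, not part of any banking standard:

```python
# Minimal policy gate sketch: deterministic checks that run before any
# tool call. Auth levels and action names are illustrative assumptions.

SENSITIVE_ACTIONS = {"cancel_card", "initiate_dispute", "account_lookup"}


def policy_gate(action: str, auth_level: int, kyc_verified: bool) -> tuple[bool, str]:
    """Return (allowed, reason). Deny by default for sensitive actions."""
    if action not in SENSITIVE_ACTIONS:
        return True, "non-sensitive action"
    if not kyc_verified:
        return False, "KYC not verified"
    if auth_level < 2:  # e.g. 0 = anonymous, 1 = password, 2 = password + OTP
        return False, "step-up authentication required"
    return True, "authenticated and authorized"
```

Keeping the gate as pure Python, outside the model, is what makes every allow/deny decision reproducible for an auditor.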
Implementation
1) Define state and tools
Use `StateGraph` with a typed state object. Keep the state small and explicit; in banking, that matters for auditability and data minimization.
```python
from typing import TypedDict, Annotated
import operator

from langgraph.graph import StateGraph, START, END
from langchain_core.tools import tool


class SupportState(TypedDict):
    messages: Annotated[list[str], operator.add]
    intent: str
    risk_flag: bool
    response: str


@tool
def lookup_faq(query: str) -> str:
    """Search approved banking FAQ content."""
    if "card" in query.lower():
        return "Card replacements take 3-5 business days. Expedited shipping may apply."
    return "I can help with common banking questions."


@tool
def create_ticket(summary: str) -> str:
    """Create a support ticket for human follow-up."""
    return f"TICKET-12345 created for: {summary}"
```
2) Build graph nodes with deterministic routing
For production banking flows, keep routing rules simple. You can use an LLM later for classification if needed, but the control points should remain explicit.
```python
def classify_intent(state: SupportState) -> SupportState:
    text = " ".join(state["messages"]).lower()
    if any(k in text for k in ["chargeback", "fraud", "dispute"]):
        intent = "high_risk"
        risk_flag = True
    elif any(k in text for k in ["card", "fee", "balance"]):
        intent = "faq"
        risk_flag = False
    else:
        intent = "handoff"
        risk_flag = True
    return {
        **state,
        "intent": intent,
        "risk_flag": risk_flag,
    }


def answer_faq(state: SupportState) -> SupportState:
    query = state["messages"][-1]
    result = lookup_faq.invoke({"query": query})
    return {**state, "response": result}


def escalate(state: SupportState) -> SupportState:
    summary = f"Intent={state['intent']}; latest={state['messages'][-1]}"
    ticket = create_ticket.invoke({"summary": summary})
    return {
        **state,
        "response": f"I’ve handed this to a specialist. {ticket}",
    }
```
3) Add conditional edges and compile the graph
This is the core LangGraph pattern: route based on state. `add_conditional_edges` keeps the flow readable and easy to audit.
```python
def route(state: SupportState) -> str:
    if state["risk_flag"]:
        return "escalate"
    if state["intent"] == "faq":
        return "answer_faq"
    return "escalate"


graph = StateGraph(SupportState)
graph.add_node("classify_intent", classify_intent)
graph.add_node("answer_faq", answer_faq)
graph.add_node("escalate", escalate)

graph.add_edge(START, "classify_intent")
graph.add_conditional_edges(
    "classify_intent",
    route,
    {
        "answer_faq": "answer_faq",
        "escalate": "escalate",
    },
)
graph.add_edge("answer_faq", END)
graph.add_edge("escalate", END)

app = graph.compile()
```
4) Run the agent with a banking-safe input shape
In real deployments you would pass authenticated session metadata separately from user text. Do not mix raw account data into prompts unless you have a clear retention and residency policy.
```python
result = app.invoke({
    "messages": ["My debit card was stolen and I see a suspicious charge."],
    "intent": "",
    "risk_flag": False,
    "response": "",
})
print(result["response"])
```
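One way to keep session metadata beside the prompt rather than inside it is to build a small context object whose user identifier is hashed before it ever touches logs or model inputs. A stdlib-only sketch; the field names and hashing scheme are assumptions for illustration:

```python
import hashlib


def session_context(user_id: str, auth_level: int, region: str) -> dict:
    """Build channel metadata that travels beside the user text, not inside it.
    The user ID is hashed so traces and logs never hold the raw identifier."""
    return {
        "user_id_hash": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        "auth_level": auth_level,
        "region": region,
    }


ctx = session_context("cust-8841", auth_level=2, region="eu-west-1")
```

In LangGraph this kind of metadata would typically ride along as the second argument to `invoke`, e.g. `app.invoke(input_state, config={"configurable": ctx})`, so nodes can read it without it ever being concatenated into a prompt.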
Production Considerations
- Compliance controls
  - Never let the agent reveal account balances, transaction history, or personal identifiers without verified auth.
  - Log every tool call with request ID, user ID hash, decision path, and policy outcome.
- Data residency
  - Keep conversation state and vector stores in-region if your bank has jurisdictional requirements.
  - Avoid sending regulated customer data to external models unless your legal team has approved that processing path.
- Monitoring
  - Track escalation rate, tool-call error rate, false positive risk flags, and average handle time.
  - Sample transcripts for policy violations like unauthorized disclosure or unsupported financial advice.
- Guardrails
  - Use allowlisted tools only.
  - Put hard stops on actions such as card cancellation or dispute initiation unless the authentication level is sufficient.
  - Add deterministic fallback responses when classification confidence is low.
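The per-tool-call logging requirement can be met with one structured record per call, written as a JSON line to an append-only store. A stdlib-only sketch; the field names are illustrative assumptions:

```python
import hashlib
import json
import time
import uuid


def audit_record(user_id: str, tool_name: str, decision_path: str,
                 policy_outcome: str) -> str:
    """One JSON line per tool call: request ID, hashed user ID,
    decision path, and policy outcome."""
    record = {
        "request_id": str(uuid.uuid4()),
        "user_id_hash": hashlib.sha256(user_id.encode()).hexdigest(),
        "tool": tool_name,
        "decision_path": decision_path,
        "policy_outcome": policy_outcome,
        "ts": time.time(),
    }
    return json.dumps(record, sort_keys=True)


line = audit_record("cust-8841", "create_ticket",
                    "classify_intent->escalate", "allowed")
```

Because the record captures the route taken and the policy outcome, not just the final answer, a compliance team can reconstruct each decision after the fact.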
Common Pitfalls
- Storing too much PII in graph state
  - Bad pattern: keeping full account numbers or SSNs in messages.
  - Fix: store references or masked tokens only; fetch sensitive data from secure systems on demand.
- Letting the LLM decide compliance boundaries
  - Bad pattern: “If the model thinks it’s okay.”
  - Fix: enforce policy before tool execution using explicit conditional edges or guard nodes.
- Skipping human handoff design
  - Bad pattern: forcing every edge case through automation.
  - Fix: escalate fraud claims, complaints about unauthorized transactions, legal threats, and ambiguous identity cases with a structured summary.
- No audit trail for decisions
  - Bad pattern: logging only final answers.
  - Fix: persist intent classification, route choice, tool inputs/outputs, and escalation reason so compliance teams can reconstruct the flow later.
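The masked-token fix can start as a regex pass over inbound text before it enters graph state. A sketch; the patterns here are deliberately simplified assumptions, and a real deployment should use a vetted PII-detection service instead:

```python
import re

# Simplified illustrative patterns; a production system should rely on a
# vetted PII-detection service rather than hand-rolled regexes.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
ACCOUNT_RE = re.compile(r"\b\d{10,16}\b")


def mask_pii(text: str) -> str:
    """Replace SSNs and account numbers with masked tokens before the
    text is stored in graph state or sent to a model."""
    text = SSN_RE.sub("[SSN]", text)
    text = ACCOUNT_RE.sub(lambda m: "[ACCT-" + m.group()[-4:] + "]", text)
    return text
```

Keeping the last four digits in the account token preserves enough context for a human agent to confirm which account is meant, without the full number ever reaching the model.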
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist + starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.