How to Build a Customer Support Agent Using LangGraph in TypeScript for Retail Banking
A retail banking support agent handles routine customer questions, triages sensitive requests, and routes anything risky to the right human or system. It matters because most banking support volume is repetitive, but the failure modes are expensive: bad advice, leaked PII, broken audit trails, and compliance issues.
Architecture
- Chat state
  - Holds the customer message, conversation history, detected intent, risk flags, and final response.
  - In LangGraph this is your shared state object passed between nodes.
- Intent router
  - Classifies the request into supported banking flows like card dispute, balance inquiry, statement request, fee explanation, or branch escalation.
  - Keep this deterministic where possible; don’t let the model freewheel on routing.
- Policy and compliance gate
  - Checks whether the request contains regulated actions or sensitive data.
  - Blocks actions like changing contact details, disputing transactions above a threshold, or exposing account data without verification.
- Knowledge retrieval
  - Pulls bank policy snippets, product FAQs, and support scripts from approved sources.
  - For retail banking, this must be region-aware for data residency and legal wording.
- Action executor
  - Calls internal systems for safe operations such as case creation or ticket lookup.
  - Never let the LLM directly hit core banking APIs without validation; a minimal sketch of that guard follows this list.
- Human handoff
  - Escalates when identity is unverified, confidence is low, or the request is out of policy.
  - This is not optional in banking; it’s part of the workflow.
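The action executor is the piece most teams get wrong, so here is a minimal sketch of the validation guard it should sit behind. The DisputeCaseRequest shape, the DISPUTE_AUTO_LIMIT threshold, and the caseApi client are hypothetical placeholders; the point is that typed validation and hard limits sit between the model and any internal system.

// Hypothetical action executor: validate inputs and enforce limits before calling internal APIs.
interface DisputeCaseRequest {
  customerId: string;
  transactionId: string;
  amount: number; // assumed to be in minor units, e.g. cents
  reason: string;
}

// Assumed threshold; disputes above it require human approval.
const DISPUTE_AUTO_LIMIT = 50_000;

// Stand-in for your internal case-management client.
declare const caseApi: {
  createCase(input: { type: "card_dispute" } & DisputeCaseRequest): Promise<{ caseId: string }>;
};

export async function createDisputeCase(
  req: DisputeCaseRequest
): Promise<{ caseId: string } | { escalate: true }> {
  // Reject malformed input outright instead of letting the model "repair" it.
  if (!req.customerId || !req.transactionId || !Number.isFinite(req.amount) || req.amount <= 0) {
    throw new Error("Invalid dispute case request");
  }
  // Large disputes go to a human approver rather than being auto-filed.
  if (req.amount > DISPUTE_AUTO_LIMIT) {
    return { escalate: true };
  }
  return caseApi.createCase({ type: "card_dispute", ...req });
}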
Implementation
1) Define the graph state and routing types
Use a typed state so each node has a clear contract. In LangGraph for TypeScript, Annotation.Root gives you a structured state schema that works well for production code.
import { Annotation } from "@langchain/langgraph";
import type { BaseMessage } from "@langchain/core/messages";

export const SupportState = Annotation.Root({
  // Full conversation history; new messages are appended by each node.
  messages: Annotation<BaseMessage[]>({
    reducer: (left = [], right = []) => left.concat(right),
    default: () => [],
  }),
  // Classified banking intent, e.g. "card_dispute" or "fee_question".
  intent: Annotation<string>({
    default: () => "unknown",
    reducer: (_, next) => next,
  }),
  // Set by the policy gate when the request needs extra controls.
  riskFlag: Annotation<boolean>({
    default: () => false,
    reducer: (_, next) => next,
  }),
  // Whether the customer's identity has been verified upstream.
  verified: Annotation<boolean>({
    default: () => false,
    reducer: (_, next) => next,
  }),
  // Final text returned to the customer.
  response: Annotation<string>({
    default: () => "",
    reducer: (_, next) => next,
  }),
});
2) Build nodes for classification, policy checks, and response generation
This pattern keeps routing separate from generation. That separation matters in banking because you want explainable control flow and audit-friendly decisions.
import { ChatOpenAI } from "@langchain/openai";
import { AIMessage } from "@langchain/core/messages";
import { StateGraph, START, END } from "@langchain/langgraph";
import { SupportState } from "./state";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const SUPPORTED_INTENTS = [
  "balance_inquiry",
  "card_dispute",
  "fee_question",
  "statement_request",
  "human_handoff",
] as const;

const classifyIntent = async (state: typeof SupportState.State) => {
  const lastUserMessage = state.messages.at(-1)?.content ?? "";
  const prompt = [
    {
      role: "system",
      content:
        "Classify retail banking support intent into one of: balance_inquiry, card_dispute, fee_question, statement_request, human_handoff. Reply with the label only.",
    },
    { role: "user", content: String(lastUserMessage) },
  ];
  const result = await llm.invoke(prompt);
  const raw = String(result.content).trim().toLowerCase();
  // If the model returns anything outside the supported set, fall back to human handoff.
  const intent = (SUPPORTED_INTENTS as readonly string[]).includes(raw) ? raw : "human_handoff";
  return { intent };
};

const policyCheck = async (state: typeof SupportState.State) => {
  // Flag intents that require verification, plus any message that mentions credentials.
  const risky =
    state.intent === "statement_request" ||
    state.intent === "card_dispute" ||
    /ssn|password|pin|otp/i.test(String(state.messages.at(-1)?.content ?? ""));
  // verified is set by the caller (for example after channel authentication) and is not changed here.
  return { riskFlag: risky };
};

const generateAnswer = async (state: typeof SupportState.State) => {
  const userText = String(state.messages.at(-1)?.content ?? "");
  const prompt = [
    {
      role: "system",
      content:
        "You are a retail banking support agent. Do not request full card numbers, PINs, passwords, or OTPs. If verification is required or policy blocks the action, tell the user you are transferring to a specialist.",
    },
    { role: "user", content: userText },
  ];
  const result = await llm.invoke(prompt);
  return { response: String(result.content), messages: [new AIMessage(String(result.content))] };
};
3) Wire conditional routing with StateGraph
This is where LangGraph earns its keep. You can route based on intent and policy instead of forcing every message through one giant prompt.
const routeAfterPolicy = (state: typeof SupportState.State) => {
  // Anything unverified, risky, or explicitly asking for a human goes to handoff.
  if (!state.verified || state.riskFlag || state.intent === "human_handoff") return "handoff";
  return "answer";
};

const HANDOFF_MESSAGE =
  "I’m transferring this to a banking specialist so we can complete verification and handle your request safely.";

const graph = new StateGraph(SupportState)
  .addNode("classify", classifyIntent)
  .addNode("policy", policyCheck)
  .addNode("answer", generateAnswer)
  .addNode("handoff", async () => ({
    response: HANDOFF_MESSAGE,
    messages: [new AIMessage(HANDOFF_MESSAGE)],
  }))
  .addEdge(START, "classify")
  .addEdge("classify", "policy")
  .addConditionalEdges("policy", routeAfterPolicy, {
    answer: "answer",
    handoff: "handoff",
  })
  .addEdge("answer", END)
  .addEdge("handoff", END);

export const app = graph.compile();
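If you need multi-turn conversations and a per-conversation record of state, you can also compile the graph with a checkpointer. Below is a minimal sketch using the in-memory MemorySaver that ships with @langchain/langgraph; persistentApp and the thread_id value are illustrative, and a production system would use a durable checkpointer instead.

import { MemorySaver } from "@langchain/langgraph";

// In-memory checkpointer; swap for a durable store in production.
const checkpointer = new MemorySaver();
export const persistentApp = graph.compile({ checkpointer });

// Each customer conversation gets its own thread, so message history accumulates per thread:
// await persistentApp.invoke(
//   { messages: [new HumanMessage("Where is my latest statement?")] },
//   { configurable: { thread_id: "conversation-123" } }
// );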
4) Invoke it with an auditable input payload
For retail banking you should pass metadata alongside the message stream so you can log jurisdiction and channel context. That helps with residency rules and downstream audit requirements.
import { HumanMessage } from "@langchain/core/messages";
import { app } from "./graph";

async function main() {
  const result = await app.invoke(
    {
      messages: [new HumanMessage("Can you explain why I was charged a monthly fee?")],
      verified: true,
      intent: "unknown",
      riskFlag: false,
      response: "",
    },
    {
      // Example metadata keys; these surface in tracing and callbacks so audit and residency checks can use them.
      metadata: { jurisdiction: "EU", channel: "web_chat" },
    }
  );
  console.log(result.response);
}

main().catch(console.error);
Production Considerations
- Deployment
  - Keep model calls behind your own service boundary.
  - Route EU customer traffic to EU-hosted infrastructure if your residency policy requires it.
- Monitoring
  - Log every node transition with intent, riskFlag, verified, and final route; a sketch of this follows the list.
  - Store immutable audit records for disputed conversations and compliance review.
- Guardrails
  - Redact PII before sending text to the model.
  - Block prompts that ask for PINs, OTPs, CVV codes, full PANs, or online banking passwords.
  - Add hard thresholds for disputes and payment reversals so humans approve them.
- Fallbacks
  - If classification confidence is low or upstream tools fail, force human handoff.
  - Banking support should degrade to safe escalation rather than guessing.
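For the node-transition logging mentioned under Monitoring, one option is to stream state updates instead of only calling invoke. The sketch below uses LangGraph's "updates" stream mode, which yields the partial state returned by each node as it runs; writeAuditRecord is a placeholder for your own append-only audit store.

import { HumanMessage } from "@langchain/core/messages";
import { app } from "./graph";

// Placeholder for your append-only audit store (database, object storage, SIEM, etc.).
declare function writeAuditRecord(record: Record<string, unknown>): Promise<void>;

export async function runWithAudit(question: string, conversationId: string) {
  const stream = await app.stream(
    { messages: [new HumanMessage(question)], verified: true },
    { streamMode: "updates" }
  );

  for await (const update of stream) {
    // Each update is keyed by the node that just ran, with the partial state it returned.
    for (const [node, partialState] of Object.entries(update)) {
      await writeAuditRecord({
        conversationId,
        node,
        partialState,
        at: new Date().toISOString(),
      });
    }
  }
}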
Common Pitfalls
- Letting the LLM decide everything
  - Mistake: using one prompt to classify, verify identity, answer policy questions, and execute actions.
  - Fix: split routing, compliance checks, and generation into separate nodes with explicit edges.
- Ignoring auditability
  - Mistake: only storing the final answer.
  - Fix: persist node-level decisions, timestamps, model version, input hashes, and handoff reasons.
- Treating PII like normal chat text
  - Mistake: sending raw account numbers or identifiers into prompts and logs.
  - Fix: redact sensitive fields before inference and keep secrets out of conversation history unless absolutely required by workflow controls. A minimal redaction sketch follows this list.
- Skipping region controls
  - Mistake: deploying one global agent endpoint for all customers.
  - Fix: enforce jurisdiction-aware routing so customer data stays in approved regions and local regulatory wording is used where required.
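For the PII fix above, a pre-inference redaction pass can start as simple pattern matching on the highest-risk fields before text reaches the model or your logs. The patterns below are illustrative rather than exhaustive; most banks pair regex rules with a dedicated PII detection service.

// Illustrative redaction rules; examples only, not a complete PII catalogue.
const REDACTION_RULES: Array<{ label: string; pattern: RegExp }> = [
  { label: "CARD_NUMBER", pattern: /\b\d{13,19}\b/g }, // full PANs
  { label: "SSN", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: "PIN_OR_OTP", pattern: /\b(?:pin|otp|one-time code)\D{0,10}\d{4,8}\b/gi },
];

export function redactPII(text: string): string {
  return REDACTION_RULES.reduce(
    (redacted, rule) => redacted.replace(rule.pattern, `[REDACTED_${rule.label}]`),
    text
  );
}

// Example: redactPII("My card 4111111111111111 was charged twice")
// => "My card [REDACTED_CARD_NUMBER] was charged twice"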
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.