How to Build a Customer Support Agent Using LangGraph in TypeScript for Fintech
A customer support agent for fintech handles account questions, transaction disputes, KYC status checks, card issues, and policy explanations without exposing sensitive data or hallucinating policy answers. It matters because support is where compliance, trust, and customer experience collide; one bad response can create regulatory risk or a chargeback mess.
Architecture
- User input layer
  - Accepts chat messages from web, mobile, or internal support tools.
  - Normalizes metadata like customer ID, locale, channel, and request type (see the sketch after this list).
- Policy and compliance router
  - Classifies the request before any model call.
  - Routes sensitive cases like disputes, PII requests, fraud claims, and account closures to stricter paths.
- LangGraph state machine
  - Orchestrates the flow between classification, retrieval, tool use, and response generation.
  - Keeps the conversation state explicit so every decision is traceable.
- Knowledge retrieval layer
  - Pulls from approved sources: product FAQs, fee schedules, dispute policies, KYC docs.
  - Avoids free-form answers when policy text must be exact.
- Tooling layer
  - Calls internal services for account status, transaction lookup, ticket creation, and escalation.
  - Uses narrow tools with strict schemas instead of giving the model direct system access.
- Audit and observability
  - Logs graph transitions, tool calls, citations, and final responses.
  - Stores enough context for compliance review without persisting raw PII unnecessarily.
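To make the user input layer concrete, here is a minimal sketch of a normalized request envelope. The type name and fields are illustrative assumptions, not part of any LangGraph API; adapt them to your own channels.

// Hypothetical shape for a normalized inbound request.
interface SupportRequest {
  customerId: string; // internal identifier, never a raw account number
  locale: string; // e.g. "en-GB", used later for jurisdiction-aware routing
  channel: "web" | "mobile" | "internal";
  requestType?: string; // optional hint from the source system
  message: string; // the customer's text, passed into the graph as a message
}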
Implementation
- Define the graph state and the core nodes
Use a typed state object so your agent carries only what it needs. For fintech support, keep the user message, intent, retrieved policy snippets, tool results, and final answer separate.
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// messages accumulates turns; every other channel keeps only the latest node output.
const SupportState = Annotation.Root({
  messages: Annotation<string[]>({
    reducer: (state = [], update) => state.concat(update),
    default: () => [],
  }),
  intent: Annotation<string>({
    reducer: (_, update) => update,
    default: () => "unknown",
  }),
  policyContext: Annotation<string>({
    reducer: (_, update) => update,
    default: () => "",
  }),
  toolResult: Annotation<string>({
    reducer: (_, update) => update,
    default: () => "",
  }),
  finalAnswer: Annotation<string>({
    reducer: (_, update) => update,
    default: () => "",
  }),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Classify the request into a fixed set of intents before any drafting happens.
async function classifyIntent(state: typeof SupportState.State) {
  const lastMessage = state.messages[state.messages.length - 1] ?? "";
  const schema = z.object({
    intent: z.enum(["balance", "card_issue", "dispute", "kyc", "fees", "fraud", "other"]),
  });
  const result = await llm.withStructuredOutput(schema).invoke([
    { role: "system", content: "Classify fintech support intent." },
    { role: "user", content: lastMessage },
  ]);
  return { intent: result.intent };
}
- Add retrieval and escalation paths
For fintech support, you do not want every request going through the same prompt. Disputes and fraud should route differently from simple FAQ questions. Use a conditional edge to decide whether to retrieve policy text or escalate (a minimal escalation node is sketched after the code below).
// Hard-coded snippets stand in for a real retrieval layer over approved policy docs.
async function retrievePolicy(state: typeof SupportState.State) {
  const policyMap: Record<string, string> = {
    fees: "Fee refunds are only allowed within 30 days of posting unless local regulation requires otherwise.",
    dispute:
      "Card disputes must be filed within the allowed window. Do not promise chargeback outcomes.",
    kyc: "KYC review timelines vary by jurisdiction. Never expose verification vendor details.",
    fraud: "For suspected fraud, advise immediate card freeze and route to a human agent.",
    balance: "Balance inquiries may be answered from account data if authenticated.",
    card_issue: "Card replacement requires identity verification before address confirmation.",
    other: "Use approved help center content only.",
  };
  return { policyContext: policyMap[state.intent] ?? policyMap.other };
}
async function draftAnswer(state: typeof SupportState.State) {
  const prompt = [
    {
      role: "system",
      content:
        "You are a fintech support agent. Follow policy exactly. Do not invent policies. Do not expose PII.",
    },
    {
      role: "user",
      content:
        `Customer message: ${state.messages[state.messages.length - 1]}\n` +
        `Intent: ${state.intent}\n` +
        `Policy context: ${state.policyContext}\n` +
        `Tool result: ${state.toolResult}`,
    },
  ];
  const response = await llm.invoke(prompt);
  return { finalAnswer: response.content as string };
}
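The escalation path mentioned above can be its own node. Here is a minimal sketch; `createSupportTicket` is a placeholder for your own ticketing integration, not an existing API.

// Sketch only: hands sensitive conversations to a human queue instead of answering.
async function escalateToHuman(state: typeof SupportState.State) {
  // createSupportTicket is a hypothetical internal service call:
  // await createSupportTicket({ intent: state.intent, transcript: state.messages });
  return {
    finalAnswer:
      "I've passed this to a specialist on our team. You'll hear back shortly with the next steps.",
  };
}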
- Wire in tools for authenticated account actions
Keep tools narrow. A balance lookup tool should return only what the agent needs to answer the question; do not hand back full account records.
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";

const getBalance = tool(
  async ({ customerId }: { customerId: string }) => {
    // Replace with real service call
    return `Available balance for customer ${customerId}: USD 1240.55`;
  },
  {
    name: "get_balance",
    description: "Fetch current available balance for an authenticated customer",
    schema: z.object({
      customerId: z.string(),
    }),
  }
);

const tools = new ToolNode([getBalance]);
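The ToolNode above is one way to wire tools in via tool-calling messages. For a simple deterministic flow you can also call the tool directly from a dedicated node. A minimal sketch, assuming the customer ID comes from an authenticated session upstream; the hard-coded ID is a placeholder only:

// Sketch: deterministic account lookup for authenticated balance requests.
async function lookupBalance(state: typeof SupportState.State) {
  // In practice the customer ID comes from your authenticated session, never from the model.
  const balance = await getBalance.invoke({ customerId: "cus_123" });
  return { toolResult: String(balance) };
}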
- Build the graph and execute it
Use StateGraph to connect classification, retrieval, optional tool use, and response generation. This gives you deterministic control over sensitive branches.
function routeByIntent(state: typeof SupportState.State) {
  // Sensitive intents always pass through approved policy text before drafting.
  if (state.intent === "fraud" || state.intent === "dispute" || state.intent === "kyc") {
    return "retrievePolicy";
  }
  return "draftAnswer";
}

const graph = new StateGraph(SupportState)
  .addNode("classifyIntent", classifyIntent)
  .addNode("retrievePolicy", retrievePolicy)
  .addNode("draftAnswer", draftAnswer)
  .addEdge(START, "classifyIntent")
  .addConditionalEdges("classifyIntent", routeByIntent)
  .addEdge("retrievePolicy", "draftAnswer")
  .addEdge("draftAnswer", END);

export async function runSupportAgent(messageText: string) {
  const app = graph.compile();
  const result = await app.invoke({ messages: [messageText] });
  return result.finalAnswer;
}
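For example, the compiled agent can be exercised end to end like this; the message text is just an illustration.

// Example invocation; wrap in your server handler or a test harness.
runSupportAgent("I was charged a $35 fee I don't recognize. Can I get it refunded?")
  .then((answer) => console.log(answer));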
Production Considerations
- Auditability
  - Persist graph node transitions, tool inputs/outputs, and final responses.
  - Keep redacted logs for PII-sensitive flows.
- Data residency
  - Route requests to region-specific deployments when policies require customer data to stay in-country.
  - Do not send raw account identifiers to third-party model endpoints unless your legal team has signed off on that path.
- Guardrails
  - Block unsupported actions like password resets or wire transfers unless they are behind authenticated internal tools.
  - Add hard rules for fraud language so the model never promises refunds or chargeback outcomes (a minimal output check is sketched after this list).
- Monitoring
  - Track escalation rate by intent category.
  - Watch for hallucinated policy citations and repeated fallback responses; both usually mean your retrieval layer is weak or stale.
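One way to enforce the fraud-language rule is a deterministic check over the drafted answer before it is returned. This is a sketch; the banned phrases are placeholders you would tune with your compliance team.

// Sketch: reject drafts that promise refund or chargeback outcomes.
const BANNED_PROMISES = [
  /guarantee(d)? (a )?refund/i,
  /will win (the )?chargeback/i,
  /promise .* refund/i,
];

function violatesFraudLanguageRules(answer: string): boolean {
  return BANNED_PROMISES.some((pattern) => pattern.test(answer));
}

// Usage inside a final node: if the draft violates a rule, fall back to an approved template or escalate.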
Common Pitfalls
- Letting the model answer from memory
  - Fintech policies change often.
  - Avoid this by routing fee disputes, KYC questions, and fraud cases through approved retrieval content every time.
- Using broad tools with too much access
  - A single “account service” tool that returns everything is a data leakage risk.
  - Split tools by task and return only minimal fields needed for the response.
- Skipping human handoff logic
  - Some cases should never be fully automated.
  - Escalate immediately for suspected fraud attribution errors, legal complaints, sanctions-related issues, or identity verification failures.
- Ignoring jurisdiction-specific behavior
  - Refund rules in one region may not apply in another.
  - Carry locale and residency metadata through the graph so your routing logic can enforce country-specific policy before generating an answer (see the state sketch after this list).
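One way to carry that metadata is to add a locale channel to the graph state and branch on it during routing. A minimal sketch extending the SupportState defined earlier; the `locale` field and the example rule are assumptions, not actual policy.

// Sketch: carry locale through the graph so routing can be jurisdiction-aware.
const SupportStateWithLocale = Annotation.Root({
  ...SupportState.spec,
  locale: Annotation<string>({
    reducer: (_, update) => update,
    default: () => "en-US",
  }),
});

// Illustrative rule inside a routing function:
// if (state.intent === "fees" && state.locale.endsWith("-DE")) return "retrievePolicy";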
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.