How to Build a Customer Support Agent Using LangGraph in TypeScript for Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: customer-support, langgraph, typescript, banking

A banking customer support agent built with LangGraph handles account questions, card disputes, fee explanations, and routing to human agents when the request crosses policy boundaries. The point is not just automation; it is controlled automation with state, auditability, and deterministic handoffs, which matters when you are dealing with regulated customer data and compliance obligations.

Architecture

  • State model

    • Holds the conversation, authenticated customer context, risk flags, and final response.
    • Keep it explicit so every node can read and update the same contract.
  • Classifier node

    • Detects intent: balance inquiry, card replacement, fee dispute, fraud concern, loan question, or escalation.
    • In banking, this decides whether the agent can answer or must route out.
  • Policy/guardrail node

    • Checks whether the request contains sensitive actions like changing contact details, disputing transactions, or revealing account data.
    • Enforces compliance rules before any LLM-generated response goes out.
  • Tool layer

    • Calls internal systems for customer profile lookup, product FAQs, case creation, or ticket routing.
    • Never let the model invent account data; fetch it from systems of record.
  • Response composer

    • Turns structured tool output into a customer-facing answer.
    • Keeps tone consistent and ensures disclaimers are included where required.
  • Escalation path

    • Routes to a human agent when confidence is low or policy requires it.
    • Preserve transcript and state for audit and handoff.

Implementation

1) Define the graph state and nodes

Use a typed state so your graph is explicit about what flows between nodes. For banking support, that means message history plus flags for compliance review and escalation.

import { StateGraph, START, END } from "@langchain/langgraph";
import { AIMessage, HumanMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

type SupportState = {
  messages: (HumanMessage | AIMessage)[];
  intent?: "balance" | "card_issue" | "fee_dispute" | "fraud" | "other";
  needsEscalation?: boolean;
  complianceHold?: boolean;
  answer?: string;
};

// Deterministic model settings; used once you swap the keyword rules below
// for LLM-backed classification or response generation.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function classifyIntent(state: SupportState): Promise<Partial<SupportState>> {
  const last = state.messages[state.messages.length - 1]?.content ?? "";
  const text = String(last).toLowerCase();

  if (text.includes("fraud") || text.includes("unauthorized")) return { intent: "fraud", needsEscalation: true };
  if (text.includes("fee") || text.includes("charge")) return { intent: "fee_dispute", complianceHold: true };
  if (text.includes("card") || text.includes("replace")) return { intent: "card_issue" };
  if (text.includes("balance")) return { intent: "balance" };
  return { intent: "other" };
}

async function policyCheck(state: SupportState): Promise<Partial<SupportState>> {
  if (state.intent === "fraud" || state.intent === "fee_dispute") {
    return { complianceHold: true };
  }
  return {};
}

2) Add tool-backed retrieval for approved answers

For banking support you want narrow tools. A FAQ lookup is fine; direct account access should be separate and strongly authenticated.

const BANK_FAQ: Record<string, string> = {
  balance: "You can check your balance in the mobile app or online banking under Accounts.",
  card_issue: "To replace a lost or damaged card, we can start a replacement request after verification.",
};

async function fetchApprovedAnswer(state: SupportState): Promise<Partial<SupportState>> {
  const key = state.intent ?? "other";
  const answer = BANK_FAQ[key] ?? null;

  if (!answer) {
    return { needsEscalation: true };
  }

  return {
    answer,
    needsEscalation: false,
    complianceHold: false,
    messages: [...state.messages, new AIMessage(answer)],
  };
}
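
The FAQ lookup above needs no authentication, but an account-data tool should sit behind a verified session. A minimal sketch of that separation follows; the `VerifiedSession` shape and `lookupBalance` are illustrative placeholders, not a real core-banking API:

```typescript
// Hypothetical sketch: a separate, strongly authenticated account tool.
// Unlike the FAQ lookup, this refuses to run without a verified session.
type VerifiedSession = { customerId: string; verifiedAt: Date };

async function lookupBalance(session: VerifiedSession | null): Promise<string> {
  if (!session) {
    // Refuse rather than guess: account data never comes from the model.
    throw new Error("Identity verification required before account access");
  }
  // Placeholder for a system-of-record call,
  // e.g. coreBanking.getBalance(session.customerId)
  return `Balance lookup authorized for customer ${session.customerId}`;
}
```

Keeping this tool out of the FAQ path means a prompt-injection attempt against the FAQ node has no route to account data.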

3) Compose an escalation-safe response

If policy blocks the answer, do not improvise. Return a controlled message that explains the next step and preserves auditability.

async function composeResponse(state: SupportState): Promise<Partial<SupportState>> {
  if (state.complianceHold) {
    return {
      answer:
        "I can help route this securely. For this request, I need to connect you with a specialist who can verify your identity and review the case.",
      needsEscalation: true,
    };
  }

  if (state.answer) {
    return { answer: state.answer };
  }

  return {
    answer:
      "I’m connecting you with support so we can handle this correctly.",
    needsEscalation: true,
  };
}

4) Wire the graph with conditional routing

This is where LangGraph pays off. You are not building a linear chain; you are building a controlled workflow with branches based on bank policy.

const graph = new StateGraph<SupportState>({
  // Channels tell LangGraph how to merge each state field between nodes;
  // a null reducer means "last value wins", which matches how our nodes
  // return full replacement values.
  channels: {
    messages: { value: null, default: () => [] },
    intent: null,
    needsEscalation: null,
    complianceHold: null,
    answer: null,
  },
})
  .addNode("classifyIntent", classifyIntent)
  .addNode("policyCheck", policyCheck)
  .addNode("fetchApprovedAnswer", fetchApprovedAnswer)
  .addNode("composeResponse", composeResponse)
  .addEdge(START, "classifyIntent")
  .addEdge("classifyIntent", "policyCheck")
  .addConditionalEdges("policyCheck", (state) => {
    if (state.needsEscalation || state.complianceHold) return "composeResponse";
    return "fetchApprovedAnswer";
  })
  .addEdge("fetchApprovedAnswer", "composeResponse")
  .addEdge("composeResponse", END);

const app = graph.compile();

async function run() {
  const result = await app.invoke({
    messages: [new HumanMessage("I need help replacing my debit card")],
  });

  console.log(result.answer);
}

run();

Production Considerations

  • Audit logging

Every node transition should be logged with input state hash, output state hash, timestamp, and decision reason. In banking reviews, you need to show why a user was escalated or why an answer was returned.
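
One way to sketch that record shape in TypeScript (the field names and the SHA-256 hashing choice are assumptions, not a prescribed format):

```typescript
import { createHash } from "node:crypto";

// Assumed audit-record shape; adapt field names to your logging pipeline.
type AuditRecord = {
  node: string;
  inputHash: string;
  outputHash: string;
  timestamp: string;
  reason: string;
};

// Hash the serialized state so the log proves what the node saw and produced
// without storing raw customer data in the audit trail.
function hashState(state: unknown): string {
  return createHash("sha256").update(JSON.stringify(state)).digest("hex");
}

function auditTransition(
  node: string,
  input: unknown,
  output: unknown,
  reason: string
): AuditRecord {
  return {
    node,
    inputHash: hashState(input),
    outputHash: hashState(output),
    timestamp: new Date().toISOString(),
    reason,
  };
}
```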

  • Data residency

Keep customer transcripts and any retrieved account data inside approved regions. If your model endpoint is outside your residency boundary, do not send raw PII there; tokenize or redact first.
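
A minimal redaction pass might look like the following; the regexes are illustrative only, and real PII detection should use a vetted library rather than hand-rolled patterns:

```typescript
// Illustrative redaction rules applied before any text crosses the
// residency boundary. These patterns are examples, not a complete PII model.
const REDACTION_RULES: [RegExp, string][] = [
  [/\b\d{16}\b/g, "[CARD_NUMBER]"],            // 16-digit card numbers
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],         // US SSN format
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"], // email addresses
];

function redactPII(text: string): string {
  return REDACTION_RULES.reduce(
    (acc, [pattern, token]) => acc.replace(pattern, token),
    text
  );
}
```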

  • Guardrails

Put policy checks before any LLM-generated customer response. Block unsupported actions like changing addresses or revealing full account numbers unless identity verification has been completed in an approved channel.

  • Monitoring

Track escalation rate, false deflection rate, hallucination incidents, and average handoff time to humans. For regulated environments, also monitor how often sensitive intents hit compliance hold paths.
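
These rates can be derived from a few counters. The sketch below uses illustrative in-process counters; in production you would export them to your metrics backend:

```typescript
// Assumed metric names; wire these to your real monitoring system.
type SupportMetrics = {
  total: number;
  escalated: number;
  complianceHolds: number;
};

const metrics: SupportMetrics = { total: 0, escalated: 0, complianceHolds: 0 };

// Call once per completed conversation with the final state flags.
function recordOutcome(needsEscalation: boolean, complianceHold: boolean): void {
  metrics.total += 1;
  if (needsEscalation) metrics.escalated += 1;
  if (complianceHold) metrics.complianceHolds += 1;
}

function escalationRate(): number {
  return metrics.total === 0 ? 0 : metrics.escalated / metrics.total;
}
```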

Common Pitfalls

  • Using one giant prompt instead of graph nodes

This makes policy enforcement brittle. Split classification, policy checks, retrieval, and response into separate nodes so each control point is testable.

  • Letting the model answer from memory

Banking answers must come from approved sources. Use tools or static knowledge bases for product rules; never trust free-form generation for balances, fees, or eligibility.

  • Skipping explicit escalation logic

If you do not define when to hand off to a human agent, the model will try to continue talking. Set hard conditions for fraud reports, disputes, identity changes, and anything involving regulated decisions.
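
Those conditions can be encoded as an explicit predicate rather than left to the model's judgment; the intent names and the 0.7 confidence threshold below are illustrative assumptions:

```typescript
// Hard escalation conditions: these intents always go to a human,
// regardless of model confidence.
const HARD_ESCALATION_INTENTS = new Set([
  "fraud",
  "fee_dispute",
  "identity_change",
]);

// Escalate on any hard intent, or whenever classification confidence
// falls below the (illustrative) 0.7 threshold.
function mustEscalate(intent: string, confidence: number): boolean {
  return HARD_ESCALATION_INTENTS.has(intent) || confidence < 0.7;
}
```

Because the rule is plain code, it can be unit-tested and reviewed by compliance independently of any prompt changes.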



By Cyprian Aarons, AI Consultant at Topiax.
