How to Build a Customer Support Agent for Payments Using LangGraph in TypeScript

By Cyprian Aarons · Updated 2026-04-21

Tags: customer-support, langgraph, typescript, payments

A payment support agent handles the repetitive but high-risk work in customer operations: failed card charges, refund status checks, chargeback questions, and “where is my money?” tickets. It matters because payments support sits at the intersection of customer trust, compliance, and operational cost, so the agent has to be accurate, auditable, and strict about what it can and cannot do.

Architecture

A production payments support agent built with LangGraph in TypeScript usually needs these components:

  • Intent router
    • Classifies the user message into payment support flows like refund_status, failed_payment, chargeback_info, or handoff_to_human.
  • Policy and compliance gate
    • Blocks sensitive actions, redacts PCI data, and prevents the model from exposing card details, full PANs, or internal risk signals.
  • Payments tool layer
    • Calls internal services for transaction lookup, refund state, dispute metadata, and account status.
  • Conversation state
    • Keeps track of ticket ID, masked payment reference, customer identity confidence, and whether a human handoff is required.
  • Response composer
    • Generates a customer-safe answer with clear next steps and no unsupported claims.
  • Audit logger
    • Stores every decision, tool call, and final response for compliance review and incident analysis.
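
The audit logger is worth shaping explicitly before any graph code exists. A minimal sketch of what each record might carry (the field names here, like correlationId, are assumptions rather than a fixed schema):

```typescript
// Sketch of an audit record emitted once per node-level decision.
type AuditRecord = {
  correlationId: string; // ties all records for one conversation together
  node: string;          // which graph node produced this record
  decision: string;      // e.g. "intent=refund_status" or "handoff"
  timestamp: string;     // ISO-8601, for incident timelines
};

// In-memory sink for illustration; production would write to durable storage.
const auditLog: AuditRecord[] = [];

function recordDecision(correlationId: string, node: string, decision: string): void {
  auditLog.push({ correlationId, node, decision, timestamp: new Date().toISOString() });
}

recordDecision("conv-001", "classify", "intent=refund_status");
recordDecision("conv-001", "policy", "pass");
```

Keeping the record flat and string-valued makes it trivial to ship to any log pipeline and to grep during an incident review.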

Implementation

1) Define the graph state and typed messages

Use a typed state so your graph stays explicit. For payments support, I keep the state small and auditable.

import { Annotation, END, START, StateGraph } from "@langchain/langgraph";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

type Intent = "refund_status" | "failed_payment" | "chargeback_info" | "handoff_to_human";

const SupportAnnotation = Annotation.Root({
  messages: Annotation<Array<HumanMessage | AIMessage>>({
    default: () => [],
    reducer: (left, right) => left.concat(right),
  }),
  intent: Annotation<Intent | undefined>(),
  ticketId: Annotation<string | undefined>(),
  maskedPaymentRef: Annotation<string | undefined>(),
  requiresHandoff: Annotation<boolean>({
    default: () => false,
    reducer: (_, right) => right,
  }),
  finalResponse: Annotation<string | undefined>(),
});

type GraphState = typeof SupportAnnotation.State;
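
The reducers decide how each node's partial return merges into state, so they deserve a moment of attention. Their two styles can be illustrated without the library at all, in plain TypeScript mirroring the channels above:

```typescript
// Mirror of the two reducer styles in SupportAnnotation:
// messages appends (concat), requiresHandoff is last-write-wins.
const concatReducer = <T>(left: T[], right: T[]): T[] => left.concat(right);
const lastWriteWins = <T>(_left: T, right: T): T => right;

let messages: string[] = [];
let requiresHandoff = false;

// Simulate two node updates flowing through the reducers.
messages = concatReducer(messages, ["Why was my refund delayed?"]);
messages = concatReducer(messages, ["Your refund is pending."]);
requiresHandoff = lastWriteWins(requiresHandoff, true);

console.log(messages.length, requiresHandoff); // → 2 true
```

Last-write-wins is the right choice for requiresHandoff: once any node raises the flag, a later node should only be able to change it deliberately, never by accidental concatenation.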

2) Add nodes for classification, lookup, policy checks, and response generation

This example uses real LangGraph primitives: StateGraph, addNode, addEdge, addConditionalEdges, compile, invoke. The tool calls are placeholders around your internal payments APIs.

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function classifyIntent(state: GraphState): Promise<Partial<GraphState>> {
  const last = state.messages[state.messages.length - 1];
  const text = last?.content?.toString().toLowerCase() ?? "";

  if (text.includes("refund")) return { intent: "refund_status" };
  if (text.includes("declined") || text.includes("failed")) return { intent: "failed_payment" };
  if (text.includes("chargeback") || text.includes("dispute")) return { intent: "chargeback_info" };

  return { intent: "handoff_to_human", requiresHandoff: true };
}

async function policyCheck(state: GraphState): Promise<Partial<GraphState>> {
  const rawText = state.messages[state.messages.length - 1]?.content?.toString() ?? "";
  const hasSensitiveData = /\b\d{12,19}\b/.test(rawText); // crude PAN check example

  if (hasSensitiveData) {
    return {
      requiresHandoff: true,
      finalResponse:
        "I can help with payment issues, but I can’t process or repeat full card numbers here. I’m handing this to a specialist.",
    };
  }

  return {};
}
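
The \b\d{12,19}\b check above is deliberately crude; runs of digits that long also match order IDs and tracking numbers. A slightly stronger sketch adds a Luhn checksum before treating a match as a PAN, and redacts the match rather than merely flagging it:

```typescript
// Luhn checksum: true for digit strings that validate as card numbers.
function luhnValid(digits: string): boolean {
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48;
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

// Replace Luhn-valid 12-19 digit runs with a masked form, keeping the last 4.
function redactPans(text: string): string {
  return text.replace(/\b\d{12,19}\b/g, (run) =>
    luhnValid(run) ? `****${run.slice(-4)}` : run
  );
}
```

For example, redactPans("Card 4242424242424242 was declined") returns "Card ****4242 was declined", while a non-Luhn digit run like an order number passes through untouched. This is still a heuristic, not PCI tooling; it just lowers the false-positive rate before you hand off.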

async function lookupPaymentStatus(state: GraphState): Promise<Partial<GraphState>> {
  // Replace with your internal payments service call
  return {
    maskedPaymentRef: "pay_****_4821",
    ticketId: "SUP-10482",
    finalResponse:
      state.intent === "refund_status"
        ? "Your refund is still pending at the processor. Typical settlement time is 3–5 business days."
        : "This payment was declined by the card issuer. Please retry, or use a different card if the decline repeats.",
  };
}
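
In production, the placeholder lookup becomes a network call that can fail transiently. A generic retry wrapper keeps that concern out of the node body; this is a sketch, and the attempt count and backoff values are assumptions to tune for your services:

```typescript
// Retry an async call up to `attempts` times with linearly growing backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  backoffMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait before the next attempt; delay grows with each retry.
      await new Promise((r) => setTimeout(r, backoffMs * (i + 1)));
    }
  }
  throw lastError;
}
```

A node would then call something like await withRetry(() => paymentsApi.getRefund(ref)), where paymentsApi is your internal client (a hypothetical name here).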

async function composeAnswer(state: GraphState): Promise<Partial<GraphState>> {
  if (state.finalResponse) return {};

  const prompt = [
    {
      role: "system",
      content:
        "You are a payments support assistant. Never reveal PCI data. Be concise. If uncertain or risky, hand off to a human.",
    },
    ...state.messages.map((m) => ({
      role: m instanceof HumanMessage ? ("user" as const) : ("assistant" as const),
      content: m.content.toString(),
    })),
    {
      role: "user",
      content:
        `Draft a customer-safe response for intent=${state.intent}. Include next step only if supported by evidence.`,
    },
  ];

  const res = await model.invoke(prompt);
  return { finalResponse: res.content.toString() };
}

3) Wire routing with conditional edges

The key pattern is to route early on risk. If the policy gate flags sensitive data or low confidence, stop automation and hand off.

function routeAfterPolicy(state: GraphState) {
  if (state.requiresHandoff) return "handoff";
  if (state.intent === "refund_status" || state.intent === "failed_payment") return "lookup";
  return "compose";
}
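
Because the router is a pure function, it can be exercised without building the graph at all. A self-contained copy (with the state type trimmed to the fields the router actually reads) shows the precedence:

```typescript
// Minimal slice of the graph state the router reads.
type RouteState = { requiresHandoff?: boolean; intent?: string };

function routeAfterPolicy(state: RouteState): "handoff" | "lookup" | "compose" {
  if (state.requiresHandoff) return "handoff";
  if (state.intent === "refund_status" || state.intent === "failed_payment") return "lookup";
  return "compose";
}

routeAfterPolicy({ requiresHandoff: true, intent: "refund_status" }); // "handoff" wins over intent
routeAfterPolicy({ intent: "chargeback_info" });                      // falls through to "compose"
```

The ordering matters: the handoff check comes first so that no intent classification can override a policy flag.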

const graph = new StateGraph(SupportAnnotation)
  .addNode("classify", classifyIntent)
  .addNode("policy", policyCheck)
  .addNode("lookup", lookupPaymentStatus)
  .addNode("compose", composeAnswer)
  .addNode("handoff", async () => ({
    finalResponse:
      "I’m transferring this to a payments specialist so they can review your case.",
    requiresHandoff: true,
  }))
  .addEdge(START, "classify")
  .addEdge("classify", "policy")
  .addConditionalEdges("policy", routeAfterPolicy, {
    lookup: "lookup",
    compose: "compose",
    handoff: "handoff",
  })
  .addEdge("lookup", "compose")
  .addEdge("compose", END)
  .addEdge("handoff", END);

const app = graph.compile();

4) Invoke it with real conversation input

In production you’ll pass session metadata separately in your app layer. Keep the graph focused on decisioning.

async function run() {
  const result = await app.invoke({
    messages: [new HumanMessage("Why was my refund delayed?")],
  });
  console.log(result.finalResponse);
}

run().catch(console.error);

Production Considerations

  • Deployment
    • Run the agent behind an API that enforces authentication before any payment lookup happens.
  • Monitoring
    • Log node-level outcomes (classify, policy, lookup, handoff) with correlation IDs so audit teams can reconstruct every decision.
  • Guardrails
    • Block full PANs, CVVs, bank account numbers, and raw dispute evidence from ever reaching the LLM prompt.
  • Data residency
    • Keep transaction lookups and message storage in-region. If you operate across the EU/US/UK, don’t send regulated customer data to a model endpoint outside approved jurisdictions.
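
The monitoring point above works best as a higher-order wrapper applied to every node, so logging never depends on node authors remembering to call it. A sketch, with an assumed log shape:

```typescript
// A graph node: takes state, returns a partial update.
type NodeFn<S> = (state: S) => Promise<Partial<S>>;

// Collected entries; production would ship these to your log pipeline.
const nodeLog: Array<{ correlationId: string; node: string; keys: string[] }> = [];

// Wrap a node so every invocation records which state keys it returned.
function withAudit<S>(name: string, correlationId: string, fn: NodeFn<S>): NodeFn<S> {
  return async (state: S) => {
    const update = await fn(state);
    nodeLog.push({ correlationId, node: name, keys: Object.keys(update) });
    return update;
  };
}
```

You would then register withAudit("classify", correlationId, classifyIntent) instead of the bare function. Logging the returned keys, not the values, keeps payment data out of the log stream while still letting auditors reconstruct which node changed what.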

Common Pitfalls

  • Letting the model decide policy
    • Don’t ask the LLM whether it’s safe to answer. Make policy checks deterministic in code before generation.
  • Overloading graph state with raw payment data
    • Store only masked references and the minimal facts needed for response generation. Full transaction payloads belong in secure backend services, not LangGraph state.
  • Skipping human handoff paths
    • Payments support always needs an escape hatch for disputes, identity mismatches, fraud signals, and ambiguous refund states. Build that path first, not last.


By Cyprian Aarons, AI Consultant at Topiax.