How to Build a Customer Support Agent Using LangGraph in TypeScript for Wealth Management

By Cyprian Aarons. Updated 2026-04-21.
Tags: customer-support, langgraph, typescript, wealth-management

A customer support agent for wealth management handles the first line of client requests: account questions, statement explanations, product eligibility, transfer status, and policy-driven routing to a human advisor when needed. It matters because every response sits inside a compliance boundary: you need accurate answers, auditable decisions, and strict control over what the model can say about portfolios, performance, and regulated advice.

Architecture

  • Input classifier

    • Detects intent like balance inquiry, fee question, KYC update, trade status, or complaint.
    • Routes low-risk requests to self-service and high-risk requests to escalation.
  • Policy and compliance gate

    • Checks whether the request touches regulated advice, suitability, or restricted disclosures.
    • Blocks or rewrites responses that would violate internal policy.
  • Knowledge retrieval layer

    • Pulls approved content from product docs, fee schedules, onboarding guides, and service playbooks.
    • Uses only sanctioned sources with versioning for auditability.
  • Conversation state

    • Stores user context such as client tier, jurisdiction, language preference, and prior handoff state.
    • Keeps the graph deterministic across turns.
  • Response generator

    • Produces customer-facing replies using only retrieved facts plus safe templates.
    • Formats answers for support use cases, not investment recommendations.
  • Escalation node

    • Creates a handoff summary for advisors or operations staff.
    • Includes reason codes so compliance can review why escalation happened.

Implementation

1) Define the graph state and node contracts

For wealth management support, keep state explicit. Don’t hide compliance flags inside prompts; put them in typed state so every node can inspect them.

import { Annotation, START, END, StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import type { BaseMessage } from "@langchain/core/messages";

const SupportStateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    default: () => [],
    reducer: (left, right) => left.concat(right),
  }),
  intent: Annotation<string>({
    default: () => "unknown",
    reducer: (_, next) => next,
  }),
  riskLevel: Annotation<"low" | "medium" | "high">({
    default: () => "low",
    reducer: (_, next) => next,
  }),
  retrievedContext: Annotation<string>({
    default: () => "",
    reducer: (_, next) => next,
  }),
  answer: Annotation<string>({
    default: () => "",
    reducer: (_, next) => next,
  }),
});

type SupportState = typeof SupportStateAnnotation.State;

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

This pattern gives you a typed state object that every node can read and update. In regulated environments, that’s better than passing unstructured blobs between prompts.
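As a minimal sketch of what "every node can inspect them" looks like in practice, here is a hypothetical policy-gate check written as a pure function over the same state fields. The blocking rules and reason-code format are illustrative, not a real compliance policy:

```typescript
// Hypothetical policy-gate helper: reads typed state fields directly,
// so compliance decisions never depend on parsing prompt text.
interface GateInput {
  intent: string;
  riskLevel: "low" | "medium" | "high";
}

function policyGate(state: GateInput): { blocked: boolean; reasonCode: string } {
  // Fail closed: anything high risk or advice-shaped is blocked
  // before a generation node can run.
  if (state.riskLevel === "high" || state.intent === "advice_request") {
    return { blocked: true, reasonCode: `policy_block:${state.intent}` };
  }
  return { blocked: false, reasonCode: "none" };
}
```

Because the gate is a pure function of typed state, it can be unit-tested and audited independently of any model behavior.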

2) Add routing logic for support vs escalation

Use a classifier node to identify whether the request is informational or potentially regulated. For wealth management, “Can you tell me if I should rebalance?” is not the same as “What is my current fee schedule?”

async function classifyIntent(state: SupportState) {
  const lastMessage = state.messages[state.messages.length - 1]?.content ?? "";

  const prompt = [
    {
      role: "system",
      content:
        "Classify wealth management support requests. Return JSON with intent and riskLevel.",
    },
    {
      role: "user",
      content: lastMessage,
    },
  ];

  const response = await llm.invoke(prompt);
  const text = String(response.content);

  if (text.includes("advice") || text.includes("recommendation")) {
    return { intent: "advice_request", riskLevel: "high" as const };
  }

  if (text.includes("fee") || text.includes("statement") || text.includes("transfer")) {
    return { intent: "service_request", riskLevel: "low" as const };
  }

  return { intent: "unknown", riskLevel: "medium" as const };
}

async function escalate(_state: SupportState) {
  return {
    answer:
      "I’m escalating this to a licensed advisor or service specialist for review.",
  };
}

In production you should parse structured JSON from the model instead of string matching. The point here is the graph pattern: classify first, then route based on policy.
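One way to do that parsing defensively is a small validator that fails closed: if the model's output is not valid JSON with the expected fields, the request defaults to medium risk and therefore routes to escalation. This is a sketch; the field names mirror the state above, but the schema itself is an assumption:

```typescript
// Defensive parsing of the classifier's JSON output.
// Any parse or validation failure falls back to medium risk,
// which the router treats as "escalate", never "self-service".
type Risk = "low" | "medium" | "high";

interface Classification {
  intent: string;
  riskLevel: Risk;
}

function parseClassification(raw: string): Classification {
  const fallback: Classification = { intent: "unknown", riskLevel: "medium" };
  try {
    const parsed = JSON.parse(raw);
    const risks: Risk[] = ["low", "medium", "high"];
    if (typeof parsed.intent === "string" && risks.includes(parsed.riskLevel)) {
      return { intent: parsed.intent, riskLevel: parsed.riskLevel };
    }
    return fallback;
  } catch {
    return fallback;
  }
}
```

If you want the model itself to guarantee the shape, `ChatOpenAI` also supports structured output via `withStructuredOutput`, but a fail-closed parser is still worth keeping as a second line of defense.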

3) Retrieve approved knowledge and generate a constrained reply

Only answer from approved sources. If retrieval returns nothing useful or the topic is high risk, stop and escalate.

async function retrieveApprovedContext(state: SupportState) {
  const intent = state.intent;

  // Replace with your internal vector store / document service.
  const contextByIntent: Record<string, string> = {
    service_request:
      "Approved policy excerpt: Transfer requests submitted before cutoff process on the next business day. Fee schedules are published in client portal section Fees v3.2.",
    unknown:
      "Approved policy excerpt: For account-specific questions outside standard service topics, route to an advisor.",
  };

  return {
    retrievedContext: contextByIntent[intent] ?? contextByIntent["unknown"],
  };
}

async function generateAnswer(state: SupportState) {
  const prompt = [
    {
      role: "system",
      content:
        "You are a wealth management support agent. Use only provided context. Do not give investment advice.",
    },
    {
      role: "user",
      content:
        `Customer message: ${state.messages[state.messages.length - 1]?.content}\n\nContext:\n${state.retrievedContext}`,
    },
  ];

  const response = await llm.invoke(prompt);

  return { answer: String(response.content) };
}

The hard rule is simple: if the model needs facts about fees, transfers, or account servicing, those facts must come from controlled content. That keeps your audit trail clean and reduces hallucinations.
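To make that audit trail concrete, retrieval can return document IDs and versions alongside the excerpts, so every generated answer cites the exact approved source it used. The catalog below is a hypothetical stand-in for a real document service; the IDs and versions are illustrative:

```typescript
// Illustrative provenance-aware retrieval: each result carries the ID
// and version of the approved document it came from, for audit logging.
interface ApprovedDoc {
  id: string;
  version: string;
  excerpt: string;
}

const approvedCatalog: Record<string, ApprovedDoc> = {
  service_request: {
    id: "fees-v3.2",
    version: "3.2",
    excerpt: "Transfer requests submitted before cutoff process next business day.",
  },
  unknown: {
    id: "routing-policy",
    version: "1.0",
    excerpt: "For questions outside standard service topics, route to an advisor.",
  },
};

function retrieveWithProvenance(intent: string): ApprovedDoc {
  // Unknown intents fall back to the routing policy, never to model memory.
  return approvedCatalog[intent] ?? approvedCatalog["unknown"];
}
```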

4) Wire the graph with conditional edges

This is where LangGraph earns its keep. Route low-risk service work through retrieval and generation; send everything else to escalation.

const graph = new StateGraph(SupportStateAnnotation)
  .addNode("classify", classifyIntent)
  .addNode("retrieve", retrieveApprovedContext)
  .addNode("generate", generateAnswer)
  .addNode("escalate", escalate)
  .addEdge(START, "classify")
  .addConditionalEdges("classify", (state) =>
    state.riskLevel === "low" && state.intent === "service_request"
      ? "retrieve"
      : "escalate"
  )
  .addEdge("retrieve", "generate")
  .addEdge("generate", END)
  .addEdge("escalate", END);

export const supportAgent = graph.compile();

At runtime you pass in conversation messages and get back a deterministic path through the workflow. That makes it easier to explain behavior to compliance teams and easier to test than a single giant prompt.
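The testability claim is easiest to see if the conditional-edge logic is extracted as a pure router function, which can then be unit-tested with no model call at all. A sketch, mirroring the condition used in the graph above:

```typescript
// Pure routing function: same condition as the conditional edge,
// but testable in isolation without invoking any model.
interface RouteState {
  intent: string;
  riskLevel: "low" | "medium" | "high";
}

function routeAfterClassify(state: RouteState): "retrieve" | "escalate" {
  return state.riskLevel === "low" && state.intent === "service_request"
    ? "retrieve"
    : "escalate";
}
```

In the graph definition, `.addConditionalEdges("classify", routeAfterClassify)` would then replace the inline arrow function.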

Production Considerations

  • Audit every decision

    • Log intent classification, risk level, retrieved document IDs, final response text, and escalation reason.
    • Store immutable traces for supervisory review and dispute handling.
  • Data residency

    • Keep client messages and retrieval indexes inside approved regions.
    • If your firm operates across jurisdictions, partition state by region so EU data never leaves EU infrastructure.
  • Guardrails for regulated content

    • Block any attempt to produce performance forecasts, suitability guidance, or portfolio recommendations.
    • Add a second-pass policy checker before sending any customer-facing response.
  • Operational monitoring

    • Track escalation rate by intent category.
    • Alert on spikes in “advice_request” or repeated fallback-to-human events; those usually indicate bad routing or weak knowledge coverage.

Common Pitfalls

  • Using one prompt for everything

    This breaks down fast in wealth management because service questions and advice questions have different compliance rules. Split classification, retrieval, generation, and escalation into separate nodes.

  • Letting the model answer from memory

    If the assistant says “fees are usually…” without a source of truth, you’ve created an audit problem. Force all factual answers through approved documents or APIs.

  • Ignoring jurisdiction and client permissions

    A response allowed for one client segment may be forbidden for another. Put residency, product eligibility, advisory status, and region into state before any generation step.
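A sketch of that pre-generation check, assuming jurisdiction and advisory status live in typed state. The field names and eligibility rules here are illustrative, not a real product policy:

```typescript
// Hypothetical eligibility check run before any generation step.
interface ClientContext {
  jurisdiction: string; // e.g. "EU", "US"
  tier: "retail" | "advisory";
}

interface ProductPolicy {
  allowedJurisdictions: string[];
  advisoryOnly: boolean;
}

function canDiscuss(ctx: ClientContext, product: ProductPolicy): boolean {
  // Jurisdiction gate: never mention a product outside its approved regions.
  if (!product.allowedJurisdictions.includes(ctx.jurisdiction)) return false;
  // Segment gate: advisory-only products are hidden from retail clients.
  if (product.advisoryOnly && ctx.tier !== "advisory") return false;
  return true;
}
```

If `canDiscuss` returns false, the graph should route to escalation rather than letting the generation node see the product content at all.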

A good wealth management support agent is not just “chat with tools.” It is a routed system with explicit policy checks at every hop. LangGraph fits this problem because it gives you control flow you can inspect instead of hoping a single prompt behaves like an operations workflow.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
