How to Build a Customer Support Agent Using LangGraph in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
Tags: customer-support, langgraph, typescript, pension-funds

A customer support agent for pension funds handles member questions about contributions, retirement age, benefit statements, transfers, and payout rules without forcing every request through a human queue. It matters because the answers are regulated, auditable, and often time-sensitive; a bad response can create compliance risk, member confusion, or downstream operational work.

Architecture

  • Channel adapter

    • Receives chat/email/web form input and normalizes it into a single request shape.
    • Adds metadata like member ID, jurisdiction, language, and request source.
  • Intent router

    • Classifies the request into support categories like:
      • contribution history
      • retirement eligibility
      • statement requests
      • transfer-out questions
      • beneficiary updates
    • Sends low-risk FAQs to retrieval and high-risk cases to human review.
  • Policy and eligibility checker

    • Applies pension-fund-specific rules before any answer is returned.
    • Checks whether the agent is allowed to answer based on jurisdiction, account status, and request type.
  • Retrieval layer

    • Pulls from approved sources only:
      • fund handbook
      • member FAQ
      • contribution rules
      • benefit calculation policy
      • service-level scripts
    • Keeps the agent grounded in current policy instead of model memory.
  • Response composer

    • Produces the final answer with citations or source references.
    • Uses a strict tone for regulated content and avoids making unsupported promises.
  • Audit logger

    • Stores the input, routed path, retrieved documents, final response, and escalation reason.
    • This is non-negotiable for pension operations.
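The channel adapter's "single request shape" might look like the following sketch. The `SupportRequest` type and `normalizeRequest` helper are illustrative names, not part of LangGraph or any library:

```typescript
// Hypothetical normalized request shape produced by the channel adapter.
type SupportRequest = {
  memberId: string;
  jurisdiction: string; // e.g. "ZA", "EU"
  language: string;     // BCP 47 tag, e.g. "en"
  source: "chat" | "email" | "web-form";
  message: string;
};

// Normalize a raw web-form payload into the single request shape,
// with conservative fallbacks for missing fields.
function normalizeRequest(raw: Record<string, unknown>): SupportRequest {
  return {
    memberId: String(raw.member_id ?? "unknown"),
    jurisdiction: String(raw.region ?? "ZA").toUpperCase(),
    language: String(raw.lang ?? "en"),
    source: "web-form",
    message: String(raw.body ?? "").trim(),
  };
}
```

Every downstream node then works against one shape regardless of whether the request arrived by chat, email, or web form.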

Implementation

1) Define the graph state and dependencies

For this use case, keep state explicit. You want a traceable path from user message to decision to response, especially when an auditor asks why the agent answered a transfer question one way and not another.

import { Annotation, END, StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

type SupportState = {
  message: string;
  jurisdiction: string;
  intent?: "faq" | "benefit" | "transfer" | "complaint" | "human";
  risk?: "low" | "medium" | "high";
  retrievedContext?: string;
  answer?: string;
};

const State = Annotation.Root({
  message: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  intent: Annotation<SupportState["intent"]>(),
  risk: Annotation<SupportState["risk"]>(),
  retrievedContext: Annotation<string>(),
  answer: Annotation<string>(),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

2) Add routing nodes for intent and risk

Use small nodes that do one job each. In pension support, that makes it easier to test policy changes without rewriting the whole agent.

async function classifyIntent(state: typeof State.State) {
  const text = state.message.toLowerCase();

  if (text.includes("transfer") || text.includes("cash out")) {
    return { intent: "transfer", risk: "high" };
  }

  if (text.includes("retire") || text.includes("benefit")) {
    return { intent: "benefit", risk: "medium" };
  }

  if (text.includes("complaint") || text.includes("fraud")) {
    return { intent: "complaint", risk: "high" };
  }

  return { intent: "faq", risk: "low" };
}

async function retrievePolicy(state: typeof State.State) {
  const contextByIntent: Record<string, string> = {
    faq: "Approved FAQ: contribution dates, statement access, contact channels.",
    benefit:
      "Benefit policy: retirement estimates are indicative only; final values require verified service history.",
    transfer:
      "Transfer policy: transfers require identity verification and may be restricted by jurisdiction.",
    complaint:
      "Complaint policy: log case ID and escalate to human support within SLA.",
    human: "",
  };

  return {
    retrievedContext:
      contextByIntent[state.intent ?? "faq"] ?? contextByIntent.faq,
  };
}

3) Generate a constrained answer or escalate

This is where most teams get sloppy. For pension funds you should not let the model freestyle on regulated topics. If the request is high risk or needs account-specific data, route to human support.

async function respond(state: typeof State.State) {
  if (state.risk === "high") {
    return {
      answer:
        "I’ve routed this to a human specialist because this request may require identity verification or regulated handling.",
      intent: "human" as const,
    };
  }

  const prompt = `
You are a pension fund customer support assistant.
Use only the provided policy context.
Do not invent account-specific details.
If information is missing, say what is needed next.

User message:
${state.message}

Policy context:
${state.retrievedContext}
`;

  const result = await llm.invoke(prompt);

  return { answer: result.content as string };
}

4) Assemble the LangGraph workflow

The workflow wires the nodes together with StateGraph, addNode, addEdge, addConditionalEdges, compile, and invoke. Keeping the routing explicit in the graph, rather than buried in prompt logic, is the pattern you want in production.

const workflow = new StateGraph(State)
  .addNode("classifyIntent", classifyIntent)
  .addNode("retrievePolicy", retrievePolicy)
  .addNode("respond", respond);

workflow.addEdge("__start__", "classifyIntent");

workflow.addConditionalEdges("classifyIntent", (state) => {
  if (state.intent === "complaint" || state.intent === "transfer") return "respond";
  return "retrievePolicy";
});

workflow.addEdge("retrievePolicy", "respond");
workflow.addEdge("respond", END);

const app = workflow.compile();

const result = await app.invoke({
  message: "Can I transfer my pension to another provider?",
  jurisdiction: "ZA",
});

console.log(result.answer);
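Because each node is a plain async function, the routing rules can be unit-tested without compiling the graph or calling a model. A minimal sketch that reproduces the same keyword rules as the classifyIntent node above:

```typescript
// Same keyword rules as the classifyIntent node, reproduced as a pure
// function so routing can be exercised without a graph or model call.
type Intent = "faq" | "benefit" | "transfer" | "complaint" | "human";
type Risk = "low" | "medium" | "high";

function routeMessage(message: string): { intent: Intent; risk: Risk } {
  const text = message.toLowerCase();
  if (text.includes("transfer") || text.includes("cash out")) {
    return { intent: "transfer", risk: "high" };
  }
  if (text.includes("retire") || text.includes("benefit")) {
    return { intent: "benefit", risk: "medium" };
  }
  if (text.includes("complaint") || text.includes("fraud")) {
    return { intent: "complaint", risk: "high" };
  }
  return { intent: "faq", risk: "low" };
}
```

A transfer question routes to `{ intent: "transfer", risk: "high" }`, which the respond node then short-circuits to human escalation. When policy changes, these rules get a test case before the graph ever changes.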

Production Considerations

  • Compliance logging

    • Store every prompt, retrieved policy snippet, decision branch, and final response.
    • Keep immutable audit records with timestamps and correlation IDs.
  • Data residency

    • Keep member data in-region.
    • If your pension fund operates in South Africa or the EU, do not send personal data to a model endpoint outside approved jurisdictions unless your legal team has signed off on transfer controls.
  • Guardrails

    • Block answers that include:
      • guaranteed returns
      • tax advice outside approved scripts
      • account balances without authenticated access
      • benefit calculations without verified service history
  • Monitoring

    • Track:
      • escalation rate by intent
      • hallucination reports from QA sampling
      • average handling time
      • unresolved transfer/retirement queries
      • retrieval hit rate against approved documents
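One way to enforce the guardrail list is a final output filter applied before any answer leaves the response composer. The blocklist below is illustrative; in practice the patterns would come from compliance review, not hard-coded regexes:

```typescript
// Illustrative blocklist; a real deployment would load reviewed patterns
// from a compliance-owned config rather than hard-code them here.
const BLOCKED_PATTERNS: RegExp[] = [
  /guaranteed return/i,
  /your balance is/i,
  /you will receive exactly/i,
  /tax advice/i,
];

// Returns the answer unchanged if clean, or an escalation message if any
// blocked phrase appears, plus a flag for the audit logger.
function applyGuardrails(answer: string): { answer: string; blocked: boolean } {
  const blocked = BLOCKED_PATTERNS.some((p) => p.test(answer));
  return blocked
    ? { answer: "This request needs a human specialist to answer accurately.", blocked: true }
    : { answer, blocked: false };
}
```

The `blocked` flag should be written to the audit log alongside the original model output so QA can sample what the filter caught.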

Common Pitfalls

  1. Letting the model answer account-specific questions without verification

    • Fix it by routing anything involving balances, beneficiaries, transfers, or payouts to authenticated flows first.
  2. Using one giant prompt instead of graph nodes

    • Fix it by splitting classification, retrieval, policy checks, and response generation into separate nodes.
    • That gives you testable failure points and cleaner audit trails.
  3. Retrieving from unapproved sources

    • Fix it by indexing only fund-approved documents.
    • Do not mix public web content with internal pension policies unless you have explicit review controls.
  4. Skipping jurisdiction logic

    • Fix it by passing region into state and branching on local rules.
    • Pension obligations differ across countries, and the agent should reflect that before it speaks.
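Jurisdiction branching can start as a rules map keyed on the region already carried in state, with a restrictive fallback for unknown regions. The rule values below are placeholders, not real regulatory thresholds:

```typescript
// Placeholder per-region rules; real values come from the fund's legal team.
type RegionRules = { minRetirementAge: number; transfersAllowed: boolean };

const RULES_BY_REGION: Record<string, RegionRules> = {
  ZA: { minRetirementAge: 55, transfersAllowed: true },
  EU: { minRetirementAge: 60, transfersAllowed: true },
};

// Fall back to the most restrictive behavior when the region is unknown,
// so a missing jurisdiction never widens what the agent may say.
function rulesFor(jurisdiction: string): RegionRules {
  return (
    RULES_BY_REGION[jurisdiction.toUpperCase()] ?? {
      minRetirementAge: 65,
      transfersAllowed: false,
    }
  );
}
```

A graph node can call `rulesFor(state.jurisdiction)` before composing any benefit or transfer answer and escalate when `transfersAllowed` is false.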

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
