How to Build a Customer Support Agent Using LangChain in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
customer-support · langchain · typescript · pension-funds

A customer support agent for pension funds answers member questions, retrieves policy-specific information, and drafts compliant responses without exposing sensitive data. It matters because pension operations are full of repetitive but high-stakes requests: contribution status, retirement eligibility, beneficiary changes, transfer rules, and statement explanations. One bad answer can create regulatory risk, so the agent needs retrieval, guardrails, and auditability from day one.

Architecture

  • Chat interface

    • Receives member questions from web chat, email triage, or internal support tooling.
    • Keeps the interaction scoped to support use cases, not open-ended advice.
  • Retriever over approved pension knowledge

    • Pulls from policy docs, FAQs, contribution rules, benefit guides, and process manuals.
    • Uses VectorStoreRetriever or a retriever returned by your vector store integration.
  • LLM response chain

    • Uses ChatOpenAI with a constrained prompt.
    • Produces answers grounded in retrieved context instead of free-form speculation.
  • Policy and compliance guardrail layer

    • Blocks financial advice, legal advice, and unsupported claims.
    • Enforces phrasing like “please contact your plan administrator” when needed.
  • Audit logging

    • Stores prompt, retrieved sources, model output, timestamps, and user/session IDs.
    • Required for complaint handling and regulatory review.
  • Member data access layer

    • Fetches account-specific facts only after authentication and authorization.
    • Keeps PII out of the model unless strictly necessary.

Implementation

1) Install the core packages

Use LangChain JS with a chat model and a vector store retriever. For production you’ll usually add your own auth and document pipeline on top.

npm install langchain @langchain/openai @langchain/community zod

Set your environment variables:

export OPENAI_API_KEY="your-key"

2) Build a retrieval-based support chain

This pattern uses RunnableSequence, RunnablePassthrough, and ChatPromptTemplate to ground answers in approved pension content. The example assumes you already indexed pension policy documents into a retriever.

import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";

// Replace this with your real retriever:
// e.g. vectorStore.asRetriever()
const retriever = {
  async invoke(query: string): Promise<Document[]> {
    return [
      {
        pageContent:
          "Members can request a transfer quote after completing identity verification.",
        metadata: { source: "transfer-policy.pdf", page: 4 },
      },
    ] as Document[];
  },
};

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a customer support agent for a pension fund.
Answer only using the provided context.
If the answer is not in context, say you cannot confirm it and direct the user to the plan administrator.
Do not give financial or legal advice.`,
  ],
  ["human", "Question: {question}\n\nContext:\n{context}"],
]);

const formatDocs = (docs: Document[]) =>
  docs
    .map(
      (doc) =>
        `[source=${doc.metadata.source}, page=${doc.metadata.page}] ${doc.pageContent}`
    )
    .join("\n\n");

const chain = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    context: async (question: string) => formatDocs(await retriever.invoke(question)),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);

async function main() {
  const answer = await chain.invoke(
    "Can I request a transfer quote without visiting the office?"
  );
  console.log(answer);
}

main().catch(console.error);

This is the core pattern you want in pension support:

  • retrieve approved content first
  • constrain the model with a strict system message
  • keep temperature at zero
  • return plain text or structured output depending on your channel

3) Add a compliance filter before responding

For pension funds, you need to catch requests that drift into regulated advice. A simple classifier can route those cases to human review or a safe fallback response.

function isHighRiskQuery(text: string): boolean {
  const riskyPatterns = [
    /should i/i,
    /best investment/i,
    /move my pension/i,
    /withdraw.*early/i,
    /guarantee/i,
    /legal/i,
    /tax advice/i,
  ];

  return riskyPatterns.some((pattern) => pattern.test(text));
}

async function handleSupportQuery(question: string) {
  if (isHighRiskQuery(question)) {
    return {
      answer:
        "I can help with plan information and service requests. For advice about withdrawals, transfers, or tax treatment, please speak with an authorized pension consultant or plan administrator.",
      escalationRequired: true,
    };
  }

  const answer = await chain.invoke(question);
  return { answer, escalationRequired: false };
}

In production you’d usually replace this with a more robust classifier using structured output from an LLM or your internal rules engine. The important part is that risky queries never go straight to a generative answer.

4) Log every interaction for audit

Pension support teams need traceability. Log the question, retrieved sources, model version, decision path, and final response in immutable storage.

type AuditRecord = {
  timestamp: string;
  userId: string;
  question: string;
  answer: string;
  modelVersion?: string;
  decisionPath?: string;
  sources?: Array<{ source: string; page?: number }>;
};

async function writeAudit(record: AuditRecord) {
  console.log(JSON.stringify(record));
}

async function handleAndAudit(userId: string, question: string) {
  const result = await handleSupportQuery(question);

  await writeAudit({
    timestamp: new Date().toISOString(),
    userId,
    question,
    answer: result.answer,
  });

  return result;
}
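The sketch above logs only the question and answer; to make every answer reconstructible you also want the retrieved sources in the record. One way is to call the retriever directly so its documents are in scope when you write the audit record. This version is self-contained with a stubbed retriever and answer function so the flow runs without a live model; swap in your real retriever and chain:

```typescript
// Local stand-ins for LangChain Documents so this sketch is runnable.
type Doc = { pageContent: string; metadata: { source: string; page?: number } };
type SourceRef = { source: string; page?: number };

const stubRetriever = {
  async invoke(_query: string): Promise<Doc[]> {
    return [
      {
        pageContent:
          "Members can request a transfer quote after completing identity verification.",
        metadata: { source: "transfer-policy.pdf", page: 4 },
      },
    ];
  },
};

async function answerFromDocs(question: string, docs: Doc[]): Promise<string> {
  // In production this is the prompt + llm + parser chain from step 2.
  return `Stubbed answer to "${question}" grounded in ${docs.length} document(s).`;
}

async function handleAndAuditWithSources(userId: string, question: string) {
  const docs = await stubRetriever.invoke(question);
  const answer = await answerFromDocs(question, docs);

  // Capture source references so the answer can be reconstructed later.
  const sources: SourceRef[] = docs.map((doc) => ({
    source: doc.metadata.source,
    page: doc.metadata.page,
  }));

  console.log(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      userId,
      question,
      answer,
      sources,
    })
  );

  return { answer, sources };
}
```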

Production Considerations

  • Data residency

    • Keep member data and document indexes in-region if your regulator requires it.
    • If the fund operates across jurisdictions, separate indexes by region instead of mixing content.
  • Monitoring

    • Track fallback rate, escalation rate, hallucination reports, retrieval hit rate, and average resolution time.
    • Alert when the agent answers without citations or when sensitive-query volume spikes.
  • Guardrails

    • Block account actions unless the user is authenticated and authorized.
    • Redact PII before sending text to the model where possible.
    • Force human handoff for withdrawals, transfers between schemes, disputes, complaints, and tax questions.
  • Auditability

    • Store prompt templates by version.
    • Persist retrieved document IDs and timestamps so every answer can be reconstructed later.
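The PII-redaction guardrail can start as simple pattern replacement applied before any text reaches the model. A sketch; the patterns below are illustrative, and a real deployment should use a vetted PII library tuned to local ID and account-number formats:

```typescript
// Illustrative patterns only: US-style SSN, email address, and a bare
// 8-16 digit run treated as a possible account number.
const piiPatterns: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]"],
  [/\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, "[REDACTED_EMAIL]"],
  [/\b\d{8,16}\b/g, "[REDACTED_ACCOUNT]"],
];

function redactPii(text: string): string {
  return piiPatterns.reduce(
    (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
    text
  );
}
```

Applied to every question before `chain.invoke`, this keeps the most obvious identifiers out of prompts and logs.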

Common Pitfalls

  1. Letting the model answer from memory

    • Pension support must be retrieval-first.
    • Fix it by grounding every response in approved documents and refusing unsupported answers.
  2. Mixing advice with support

    • Members will ask “what should I do?” even when they mean “what does the policy say?”
    • Fix it by hard-blocking advisory language and routing those cases to licensed staff.
  3. Ignoring document freshness

    • Pension rules change often through circulars, benefit updates, or regulatory notices.
    • Fix it by versioning documents and rebuilding embeddings whenever policy content changes.
  4. Skipping authorization checks

    • A support agent that reveals balances or beneficiary details without auth is a breach waiting to happen.
    • Fix it by placing identity verification before any account-specific tool call.
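The last fix can be enforced structurally: wrap every account-specific lookup in a gate that fails closed. A minimal sketch; the session shape and lookup function are illustrative:

```typescript
type Session = { userId: string; verified: boolean };

class AuthorizationError extends Error {}

// Hypothetical account lookup; in production this calls the member
// data access layer from the architecture section.
async function fetchContributionStatus(userId: string): Promise<string> {
  return `Contributions up to date for ${userId}`;
}

// Every account-specific tool call goes through this gate, so an
// unverified session can never reach member data.
async function withAuth<T>(
  session: Session,
  action: (userId: string) => Promise<T>
): Promise<T> {
  if (!session.verified) {
    throw new AuthorizationError("Identity verification required");
  }
  return action(session.userId);
}
```

Usage looks like `withAuth(session, fetchContributionStatus)`; an agent tool that forgets the gate simply has no path to the data.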

A pension fund support agent is useful only if it is boringly reliable. Keep it grounded in approved content, conservative on risky questions, and fully auditable end to end.


By Cyprian Aarons, AI Consultant at Topiax.