How to Build a Customer Support Agent Using AutoGen in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
Tags: customer-support, autogen, typescript, wealth-management

A customer support agent for wealth management handles client questions about portfolio performance, account access, fees, statements, and basic product guidance without exposing sensitive data or drifting into regulated advice. It matters because support in this domain is not just about deflecting tickets; it has to preserve compliance, create an audit trail, and route anything advisory or high-risk to a licensed human.

Architecture

  • Client-facing support agent

    • The entry point for chat, email triage, or authenticated portal requests.
    • Handles intent detection, summarization, and safe responses.
  • Policy/guardrail layer

    • Enforces what the agent can and cannot answer.
    • Blocks personalized investment recommendations, tax advice, and unsupported account actions.
  • Knowledge retrieval layer

    • Pulls from approved sources like FAQs, product docs, fee schedules, and operational runbooks.
    • Keeps responses grounded in firm-approved content.
  • Human escalation path

    • Routes cases that involve complaints, trading errors, suitability questions, or exceptions.
    • Preserves context so a relationship manager or service rep can continue the thread.
  • Audit and logging layer

    • Stores prompts, tool calls, retrieved documents, and final outputs.
    • Needed for supervision reviews, incident analysis, and regulatory evidence.
  • Data controls

    • Redacts PII where possible and restricts data movement by region.
    • Important for residency requirements and internal security policy.

Implementation

  1. Install AutoGen for TypeScript and define the support agents

For a wealth management support workflow, keep the assistant narrow. Use one assistant for client-facing responses and one user proxy to drive the conversation or test it in code.

npm install @autogenai/autogen openai

import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";
import { OpenAI } from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const supportAgent = new AssistantAgent({
  name: "wealth_support_agent",
  modelClient: client,
  systemMessage: `
You are a customer support agent for a wealth management firm.
You may answer only operational questions about accounts, statements, fees,
platform navigation, document status, and general firm-approved information.
Do not provide personalized investment advice, tax advice, legal advice,
or instructions that could change account holdings.
If the request is advisory or high risk, escalate to a human advisor.
`,
});

const user = new UserProxyAgent({
  name: "client_proxy",
});
  2. Add a guardrail function before the agent responds

Do not rely on the model alone to decide what is safe. In wealth management you want deterministic checks for regulated topics before the LLM sees the request.

function classifyRequest(message: string) {
  const lower = message.toLowerCase();

  const blockedTopics = [
    "should i buy",
    "should i sell",
    "best stock",
    "tax loss harvesting",
    "avoid taxes",
    "guaranteed return",
    "beat the market",
    "portfolio allocation",
    "reallocate my assets",
  ];

  const isBlocked = blockedTopics.some((phrase) => lower.includes(phrase));

  return {
    isBlocked,
    reason: isBlocked ? "Potentially regulated advisory request" : null,
  };
}
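Plain substring checks are easy to reason about, but they can fire inside longer, unrelated text and inflate false positives. A word-boundary regex variant of the same filter is one way to tighten it; the patterns below are illustrative, not a vetted compliance list.

```typescript
// Word-boundary variant of the substring filter above. The patterns
// are illustrative; a production list would be maintained by compliance.
const blockedPatterns: RegExp[] = [
  /\bshould i (buy|sell)\b/i,
  /\bbest (stock|fund)s?\b/i,
  /\btax[- ]loss harvesting\b/i,
  /\bguaranteed returns?\b/i,
  /\breallocate my (assets|portfolio)\b/i,
];

function isAdvisory(message: string): boolean {
  return blockedPatterns.some((pattern) => pattern.test(message));
}
```

The trade-off is maintainability: regexes catch simple inflections ("guaranteed return" vs "guaranteed returns") but are harder for non-engineers to review, so keep the list short and version-controlled.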
  3. Run the AutoGen chat loop with escalation logic

This pattern keeps the assistant useful while preventing it from crossing into advice. If the message is blocked, return a short escalation response instead of asking the model to improvise.

async function handleSupportRequest(message: string) {
  const classification = classifyRequest(message);

  if (classification.isBlocked) {
    return {
      route: "human_escalation",
      response:
        "I can help with account service questions, but this request needs review by a licensed advisor or service specialist.",
      reason: classification.reason,
    };
  }

  const result = await user.initiateChat(supportAgent, message);

  return {
    route: "assistant_response",
    response: result.chatHistory.at(-1)?.content ?? "",
  };
}

async function main() {
  const ticket1 = await handleSupportRequest(
    "Where can I download my quarterly statement?"
  );

  console.log(ticket1);

  const ticket2 = await handleSupportRequest(
    "Should I move more money into tech stocks?"
  );

  console.log(ticket2);
}

main().catch(console.error);
  4. Ground responses in approved content

For production support flows you should connect retrieval before generation. AutoGen works best when you feed it only approved snippets from internal sources such as fee schedules or onboarding docs.

A practical pattern is:

  • retrieve top-k passages from an indexed knowledge base
  • attach them as context to the assistant
  • instruct the agent to answer only from those passages
  • escalate if no passage supports the request

That keeps answers consistent with policy and reduces hallucinations on things like custody timelines, statement availability windows, or wire transfer cutoffs.
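One minimal way to implement "answer only from passages, escalate otherwise" is to build the prompt yourself and return null when retrieval comes back empty. The retrieval call itself is elided here; buildGroundedPrompt and the Passage shape are illustrative, not part of AutoGen.

```typescript
// Illustrative passage shape; docId lets the agent cite internal sources.
interface Passage {
  docId: string;
  text: string;
}

// Returns null when no approved passage exists, which the caller should
// treat as an escalation, never as a license to let the model improvise.
function buildGroundedPrompt(question: string, passages: Passage[]): string | null {
  if (passages.length === 0) return null;

  const context = passages
    .map((p) => `[${p.docId}] ${p.text}`)
    .join("\n");

  return [
    "Answer only from the passages below and cite the [docId] you used.",
    "If the passages do not answer the question, reply exactly: ESCALATE.",
    "",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```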

Production Considerations

  • Deployment

    • Keep inference in-region if your firm has residency constraints.
    • Separate environments by business unit so retail support data does not mix with private wealth data.
  • Monitoring

    • Log every escalation trigger, refusal reason, retrieved document ID, and final answer.
    • Track false positives on guardrails; too many will frustrate clients and overload humans.
  • Guardrails

    • Block personalized recommendations unless a licensed workflow explicitly approves them.
    • Add explicit handling for complaints involving suitability, trading errors, wire fraud suspicion, or missing assets.
  • Auditability

    • Persist prompt versions and system messages alongside each response.
    • When compliance asks why an answer was given, you need the exact policy state that produced it.

Common Pitfalls

  • Letting the model answer advisory questions directly

    If you ask an LLM “what should I do with my portfolio,” it will often produce confident but non-compliant guidance. Fix this with deterministic topic filters plus escalation to a human advisor.

  • Skipping source grounding

    A generic chatbot will invent fee details or statement timelines when it lacks context. Fix this by retrieving only approved documents and forcing answers to cite internal source IDs or refuse when no source exists.

  • Ignoring audit requirements

    If you do not store prompts, tool calls, and final outputs together, compliance reviews become painful fast. Fix this by writing structured logs per ticket with timestamps, user identity references, region tags, and model version metadata.
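The structured log described above can be as simple as one JSON line per ticket. The field names below are illustrative, not a standard schema:

```typescript
// One JSON line per ticket keeps logs greppable and easy to ship
// to whatever log store or SIEM the firm already runs.
interface TicketLogEntry {
  ts: string;             // ISO-8601 timestamp
  ticketId: string;
  userRef: string;        // opaque identity reference, never raw PII
  region: string;         // for residency filtering
  modelVersion: string;
  promptVersion: string;  // ties the answer to the policy state that produced it
  route: "assistant_response" | "human_escalation";
  refusalReason: string | null;
}

function toLogLine(entry: TicketLogEntry): string {
  return JSON.stringify(entry);
}
```

Recording promptVersion and modelVersion per ticket is what makes the "exact policy state" question from compliance answerable months later.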


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

