How to Build a Customer Support Agent Using AutoGen in TypeScript for Fintech

By Cyprian Aarons · Updated 2026-04-21
Tags: customer-support · autogen · typescript · fintech

A customer support agent for fintech handles account questions, transaction disputes, payment failures, card issues, and policy explanations without exposing sensitive data or making unsafe promises. It matters because support is where trust breaks first: if the agent gives wrong balance info, leaks PII, or invents a policy, you own the compliance incident.

Architecture

Build this agent with a small set of components that are easy to audit and replace:

  • User-facing support agent
    • Handles conversation flow, intent detection, and response generation.
  • Tool layer
    • Calls internal services for account status, transaction lookup, dispute creation, and ticket creation.
  • Policy/guardrail layer
    • Blocks PII leakage, disallowed advice, and unsupported actions like changing KYC status.
  • Conversation memory
    • Stores only what you need for the current session; avoid persisting raw sensitive data.
  • Audit logger
    • Writes every tool call, model response, and policy decision for compliance review.
  • Human handoff path
    • Escalates cases involving fraud, chargebacks, legal complaints, or identity verification failures.
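
The layers above can be sketched as a per-turn pipeline: guardrails run before and after generation, and every decision is audited. This is a minimal illustration with invented names (`GuardrailLayer`, `AuditLogger`, `handleTurn`), not part of AutoGen's API:

```typescript
// Minimal sketch of how the layers compose per turn. All names here are
// illustrative, not from any specific library.
type PolicyDecision = { allowed: boolean; reason?: string };

interface GuardrailLayer {
  checkInput(text: string): PolicyDecision;
  checkOutput(text: string): PolicyDecision;
}

interface AuditLogger {
  log(event: { type: string; detail: unknown; at: Date }): void;
}

// Each turn passes through guardrails before and after generation,
// and every policy decision is written to the audit log.
async function handleTurn(
  guardrails: GuardrailLayer,
  audit: AuditLogger,
  generate: (text: string) => Promise<string>,
  userText: string
): Promise<string> {
  const inCheck = guardrails.checkInput(userText);
  audit.log({ type: "input_check", detail: inCheck, at: new Date() });
  if (!inCheck.allowed) return "I can't help with that request.";

  const draft = await generate(userText);
  const outCheck = guardrails.checkOutput(draft);
  audit.log({ type: "output_check", detail: outCheck, at: new Date() });
  return outCheck.allowed ? draft : "Let me connect you with a specialist.";
}
```

Because each layer is an interface, you can swap in stricter guardrails or a different log sink without touching the agent itself.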

Implementation

1) Install AutoGen and define your support tools

Use AutoGen’s TypeScript package and keep the tools thin. The model should never talk directly to your core banking systems.

npm install @autogenai/autogen openai zod

import { AssistantAgent } from "@autogenai/autogen";
import { z } from "zod";

type SupportContext = {
  customerId: string;
  locale: string;
};

const lookupAccountSchema = z.object({
  customerId: z.string(),
});

const createDisputeSchema = z.object({
  customerId: z.string(),
  transactionId: z.string(),
  reason: z.string(),
});

async function lookupAccount({ customerId }: { customerId: string }) {
  // Replace with a real internal API call
  return {
    customerId,
    status: "active",
    tier: "premium",
    last4: "4821",
    availableBalance: "1250.44",
    currency: "USD",
  };
}

async function createDispute(input: {
  customerId: string;
  transactionId: string;
  reason: string;
}) {
  // Replace with a real internal API call
  return {
    disputeId: `disp_${Date.now()}`,
    status: "received",
    etaDays: 5,
    transactionId: input.transactionId,
  };
}

2) Create an AssistantAgent with strict instructions

Keep the system message narrow. In fintech support, vague instructions lead to hallucinated policy answers.

const supportAgent = new AssistantAgent({
  name: "fintech_support_agent",
  modelClientConfig: {
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY!,
  },
  systemMessage: `
You are a fintech customer support agent.

Rules:
- Never request full card numbers, CVV, passwords, or OTPs.
- Never claim to have executed an action unless a tool returned success.
- For disputes, refunds, chargebacks, AML/KYC questions, or fraud reports, escalate to a human specialist rather than answering from general knowledge.
- Keep responses concise and factual.
- If data is missing or ambiguous, ask one targeted follow-up question.
`,
});

3) Register tools and wire them into the conversation

AutoGen agents can call registered functions through their tool interface. The pattern below keeps validation outside the model and gives you a clean audit boundary.

supportAgent.registerTool(
  {
    name: "lookup_account",
    description: "Fetch basic account summary for authenticated customers.",
    parametersSchema: lookupAccountSchema,
  },
  async (args) => {
    const parsed = lookupAccountSchema.parse(args);
    return await lookupAccount(parsed);
  }
);

supportAgent.registerTool(
  {
    name: "create_dispute",
    description:
      "Create a card or transaction dispute after the customer provides transaction details.",
    parametersSchema: createDisputeSchema,
  },
  async (args) => {
    const parsed = createDisputeSchema.parse(args);
    return await createDispute(parsed);
  }
);

Then run a single turn or a multi-turn session depending on your channel.

async function handleSupportMessage(context: SupportContext, message: string) {
  const result = await supportAgent.run([
    {
      role: "user",
      content:
        `Customer context:\ncustomerId=${context.customerId}\nlocale=${context.locale}\n\nMessage:\n${message}`,
    },
  ]);

  return result.messages.at(-1)?.content ?? "";
}

const reply = await handleSupportMessage(
  { customerId: "cus_123", locale: "en-US" },
  "Can you check my balance and open a dispute for transaction tx_7788?"
);
console.log(reply);

4) Add an explicit handoff rule for risky cases

Do not let the model freestyle on fraud or regulatory topics. Route those to humans early.

function needsHandoff(text: string) {
  const riskySignals = [
    "fraud",
    "chargeback",
    "aml",
    "kyc",
    "sanctions",
    "legal complaint",
    "unauthorized transfer",
  ];
  
  return riskySignals.some((signal) => text.toLowerCase().includes(signal));
}

async function routeMessage(context: SupportContext, message: string) {
  if (needsHandoff(message)) {
    return `I’m routing this to a specialist because it involves sensitive case handling.`;
  }
  return await handleSupportMessage(context, message);
}

Production Considerations

  • Deploy in-region

Store prompts, logs, and vector data in the same region as your regulated workload. For EU customers, keep processing inside EU boundaries unless your legal team has approved cross-border transfer terms.

  • Log everything relevant

Capture user input hashes, tool calls, tool outputs, model version, prompt versioning, and final responses. That gives you auditability when compliance asks why the agent answered a dispute question a certain way.
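
One way to shape such a record, sketched with assumed field names (nothing here is a standard schema):

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record; the field names are assumptions for this sketch.
type AuditRecord = {
  sessionId: string;
  inputHash: string; // hash, not raw text, so logs stay PII-light
  modelVersion: string;
  promptVersion: string;
  toolCalls: { name: string; args: unknown; result: unknown }[];
  finalResponse: string;
  at: string;
};

function hashInput(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

function buildAuditRecord(
  sessionId: string,
  input: string,
  response: string
): AuditRecord {
  return {
    sessionId,
    inputHash: hashInput(input),
    modelVersion: "gpt-4o-mini",
    promptVersion: "support-v3", // version your prompts explicitly
    toolCalls: [],
    finalResponse: response,
    at: new Date().toISOString(),
  };
}
```

Hashing the input lets you prove which message produced a response without persisting the raw text.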

  • Add hard guardrails before generation

Run input filters for PANs, CVVs, OTPs, SSNs/NINs depending on market. Also block outputs that contain unredacted sensitive values or unsupported commitments like “your refund will arrive today.”
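
A deliberately simple pre-generation filter might look like the following; the regexes are rough illustrations, and real deployments need market-specific, well-tested patterns:

```typescript
// Rough illustrative patterns only; production filters need far more care.
const PAN_RE = /\b(?:\d[ -]?){13,19}\b/; // card-number-like digit runs
const CVV_RE = /\bcvv\s*:?\s*\d{3,4}\b/i;
const OTP_RE = /\b(?:otp|one[- ]time code)\s*:?\s*\d{4,8}\b/i;

function containsSensitiveInput(text: string): boolean {
  return PAN_RE.test(text) || CVV_RE.test(text) || OTP_RE.test(text);
}

function redactSensitive(text: string): string {
  return text
    .replace(new RegExp(PAN_RE.source, "g"), "[REDACTED_PAN]")
    .replace(new RegExp(CVV_RE.source, "gi"), "[REDACTED_CVV]")
    .replace(new RegExp(OTP_RE.source, "gi"), "[REDACTED_OTP]");
}
```

Run `containsSensitiveInput` before the message reaches the model, and `redactSensitive` on anything you log or store.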

  • Separate support from decisioning

The agent can explain policies and gather facts. It should not approve chargebacks, override KYC outcomes, or change account risk flags without deterministic service authorization.
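
A small allow-list authorizer illustrates the split: the agent may propose actions, but a deterministic check decides what actually executes. The action names here are invented for the example:

```typescript
// Sketch: the agent proposes actions; a deterministic service decides.
type ProposedAction =
  | { kind: "create_dispute"; customerId: string; transactionId: string }
  | { kind: "override_kyc"; customerId: string };

// Allow-list of actions the support agent may trigger on its own.
const AGENT_ALLOWED: ReadonlySet<ProposedAction["kind"]> = new Set([
  "create_dispute",
]);

function authorize(
  action: ProposedAction
): { allowed: boolean; route: "execute" | "human_review" } {
  return AGENT_ALLOWED.has(action.kind)
    ? { allowed: true, route: "execute" }
    : { allowed: false, route: "human_review" };
}
```

Anything outside the allow-list is routed to human review by construction, no matter what the model says.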

Common Pitfalls

  1. Letting the model access raw core banking APIs

    This creates uncontrolled behavior and weak audit trails. Put a tool facade in front of every backend system and validate inputs with schemas before execution.

  2. Storing full conversation history with sensitive data

    Support chats often contain card fragments, addresses, and identity details. Redact at ingestion and store only what you need for QA and compliance review.

  3. Treating every issue as answerable by the model

    Fraud claims, sanctions screening questions, legal disputes, and identity verification failures need human review. Build an escalation path into the first response instead of trying to be clever.

  4. Skipping policy versioning

    Fintech support policies change often across products and jurisdictions. Version your system prompts and tool contracts so you can reproduce exactly what the agent saw when it answered a customer.
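
A minimal versioned prompt registry (with illustrative placeholder strings, not real policy text) is one way to make each answer reproducible:

```typescript
// Illustrative registry; store this in a database or config service in practice.
const PROMPTS: Record<string, string> = {
  "support-v1": "You are a fintech customer support agent. Be concise.",
  "support-v2": "You are a fintech customer support agent. Follow the dispute rules.",
};

function getPrompt(version: string): string {
  const prompt = PROMPTS[version];
  if (!prompt) throw new Error(`Unknown prompt version: ${version}`);
  return prompt;
}
```

Log the version string alongside each response so audits can replay the exact prompt the agent saw.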


By Cyprian Aarons, AI Consultant at Topiax.
