How to Build a Fraud Detection Agent Using AutoGen in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21

A fraud detection agent for wealth management watches client activity, flags suspicious patterns, and escalates cases with enough context for a compliance analyst to act fast. It matters because the cost of a false negative is regulatory exposure and client loss, while the cost of a false positive is broken advisor workflows and unnecessary friction on high-value accounts.

Architecture

  • Event ingestion layer

    • Pulls trade events, wire requests, profile changes, login anomalies, and advisor actions from your internal systems.
    • Normalizes them into a single case payload before the agent sees them.
  • Policy and rules engine

    • Applies hard controls first: sanctions hits, unusual beneficiary changes, large outbound transfers, and restricted jurisdiction checks.
    • Keeps deterministic decisions out of the LLM path.
  • AutoGen agent group

    • Uses AssistantAgent for investigation reasoning.
    • Uses UserProxyAgent or an internal orchestrator to execute approved tool calls.
    • Optionally adds a second assistant for compliance review.
  • Evidence retrieval layer

    • Fetches account history, KYC status, transaction velocity, device fingerprint, and prior alerts.
    • Returns only minimum necessary data to satisfy privacy and residency constraints.
  • Case management output

    • Produces a structured fraud assessment: risk score, rationale, evidence references, and next action.
    • Pushes results into your SIEM or case management system with an immutable audit trail.
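The structured fraud assessment described above is worth typing before any agent code is written, since downstream systems depend on its shape. A minimal sketch with illustrative field names (not a fixed AutoGen schema):

```typescript
// Sketch of the structured assessment the agent emits to case management.
// Field names and enum values are illustrative, not prescribed by AutoGen.
type RiskLevel = "low" | "medium" | "high" | "critical";

interface FraudAssessment {
  caseId: string;
  riskLevel: RiskLevel;
  riskScore: number;        // 0-100, higher means more suspicious
  rationale: string;        // analyst-readable explanation for the score
  evidenceRefs: string[];   // identifiers of evidence records consulted
  nextAction: "auto_close" | "manual_review" | "escalate";
}

// Example assessment as it might be pushed to case management.
const example: FraudAssessment = {
  caseId: "FC-10422",
  riskLevel: "high",
  riskScore: 82,
  rationale: "Large wire to new counterparty shortly after a profile change.",
  evidenceRefs: ["transfers:last30d", "kyc:status"],
  nextAction: "escalate",
};
```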

Implementation

1) Install AutoGen and define the case schema

For TypeScript projects, use the AutoGen core package and keep your case payload strict. Wealth management systems need predictable inputs because you will eventually map these fields into audit logs and regulator-facing reports.

npm install @autogen/core zod

import { z } from "zod";

export const FraudCaseSchema = z.object({
  caseId: z.string(),
  clientId: z.string(),
  accountId: z.string(),
  jurisdiction: z.string(),
  eventType: z.enum([
    "wire_transfer",
    "trade_order",
    "beneficiary_change",
    "login_anomaly",
    "advisor_override",
  ]),
  amountUsd: z.number().nonnegative(),
  timestamp: z.string(),
  signals: z.array(z.string()),
});

export type FraudCase = z.infer<typeof FraudCaseSchema>;

2) Create an investigation agent with AutoGen

The key pattern is to keep the assistant constrained to analysis and let tools handle data access. In wealth management, that separation helps with auditability and reduces accidental disclosure of sensitive client data.

import { AssistantAgent } from "@autogen/core";

const fraudAnalyst = new AssistantAgent({
  name: "fraud_analyst",
  systemMessage: [
    "You are a fraud detection analyst for wealth management.",
    "Use only provided evidence.",
    "Do not invent facts.",
    "Return strict JSON with keys: riskLevel, summary, evidenceUsed, recommendedAction.",
    "Consider compliance issues such as AML, suitability abuse, unauthorized trading, account takeover, and beneficiary manipulation.",
    "If data is insufficient, recommend manual review.",
  ].join(" "),
});

3) Add tools for controlled evidence retrieval

This is where you enforce residency and least privilege. Your tool should read from approved internal services only, redact anything not needed for the decision, and log every access.

type Evidence = {
  recentTransfers: Array<{ date: string; amountUsd: number; counterpartyCountry: string }>;
  kycStatus: string;
  priorAlerts: number;
};

async function getEvidence(caseId: string): Promise<Evidence> {
  // Replace with internal API calls in your environment.
  return {
    recentTransfers: [
      { date: "2026-04-20T12:00:00Z", amountUsd: 250000, counterpartyCountry: "AE" },
    ],
    kycStatus: "enhanced_due_diligence_required",
    priorAlerts: 2,
  };
}

If you want the assistant to call tools through AutoGen rather than preloading evidence, register a tool in your app layer and expose only this narrow function. The important part is that the model never gets raw database access.
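AutoGen's TypeScript tool-registration API varies by version, so the pattern is easier to show framework-agnostically: the app layer keeps an allowlist of vetted tools, logs every invocation, and the model can only ever name a tool, never touch a database handle. All names below are illustrative:

```typescript
// Framework-agnostic sketch: the model requests tools by name only;
// the app layer resolves names against this allowlist and logs each call.
type Tool = (args: Record<string, unknown>) => Promise<unknown>;

const toolRegistry = new Map<string, Tool>();
const accessLog: Array<{ tool: string; at: string }> = [];

function registerTool(name: string, fn: Tool): void {
  toolRegistry.set(name, fn);
}

async function invokeTool(
  name: string,
  args: Record<string, unknown>
): Promise<unknown> {
  const tool = toolRegistry.get(name);
  if (!tool) throw new Error(`Tool not allowed: ${name}`);
  // Audit every access before executing the tool.
  accessLog.push({ tool: name, at: new Date().toISOString() });
  return tool(args);
}

// Expose only the narrow evidence function; no raw database access.
registerTool("get_evidence", async (args) => ({
  caseId: args.caseId,
  kycStatus: "enhanced_due_diligence_required",
}));

const result = await invokeTool("get_evidence", { caseId: "FC-10422" });
```

Whatever AutoGen version you run, the invariant to preserve is the same: the registry is the only bridge between model output and your internal services.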

4) Run the analysis and persist an auditable result

The actual workflow is simple: validate input, fetch evidence, ask the assistant for a structured assessment, then store both prompt context and response in your audit system.

import { FraudCaseSchema } from "./schema";
import { fraudAnalyst } from "./agent";
import { getEvidence } from "./evidence";

async function analyzeFraudCase(rawCase: unknown) {
  const fraudCase = FraudCaseSchema.parse(rawCase);
  const evidence = await getEvidence(fraudCase.caseId);

  const prompt = `
Case:
${JSON.stringify(fraudCase)}

Evidence:
${JSON.stringify(evidence)}

Assess whether this looks like fraud or suspicious activity.
Return JSON only.
`;

  const result = await fraudAnalyst.run(prompt);

  // Persist result + input snapshot to immutable storage here.
  return result;
}

const output = await analyzeFraudCase({
  caseId: "FC-10422",
  clientId: "C-8831",
  accountId: "A-55019",
  jurisdiction: "US",
  eventType: "wire_transfer",
  amountUsd: 250000,
  timestamp: new Date().toISOString(),
  signals: ["new_counterparty", "high_value_after_profile_change"],
});

console.log(output);

If you want stronger separation between investigation and approval, add a second AssistantAgent that reviews the first agent’s conclusion before any alert is closed or escalated. That pattern works well when compliance wants two independent opinions on high-risk cases.
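The two-agent review gate can be sketched without committing to a specific AutoGen version. `Agent` below is a stand-in interface for whatever AssistantAgent wrapper you use, and the stub implementations are purely illustrative:

```typescript
// Two-pass review gate: a reviewer independently checks the analyst's
// conclusion, and any disagreement routes the case to a human.
interface Agent {
  run(prompt: string): Promise<string>;
}

async function reviewedAssessment(
  analyst: Agent,
  reviewer: Agent,
  prompt: string
) {
  const first = await analyst.run(prompt);
  const review = await reviewer.run(
    `Independently review this fraud assessment. ` +
      `Reply AGREE or DISAGREE with reasons.\n${first}`
  );
  // Disagreement forces a human into the loop before the case closes.
  const needsHuman = !review.startsWith("AGREE");
  return { first, review, needsHuman };
}

// Stub agents for illustration only.
const analyst: Agent = { run: async () => '{"riskLevel":"high"}' };
const reviewer: Agent = { run: async () => "DISAGREE: evidence is thin." };

const outcome = await reviewedAssessment(analyst, reviewer, "case payload…");
```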

Production Considerations

  • Deployment

    • Keep the agent in a private network segment with outbound access only to approved internal APIs.
    • Pin model versions and store prompts alongside code so investigations are reproducible during audits.
  • Monitoring

    • Track false positive rate by event type, jurisdiction, advisor team, and client segment.
    • Alert on drift in decision patterns after product launches or policy changes.
  • Guardrails

    • Hard-block actions on sanctions hits or missing KYC regardless of model output.
    • Require human approval for wire recalls, beneficiary changes, or account restrictions.
  • Compliance

    • Log every input field used in a decision plus every tool call made by the agent.
    • Enforce data residency by routing EU client cases to EU-hosted services only.
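The guardrail bullets above amount to a deterministic gate that runs outside the LLM path, before and after the model. A minimal sketch, with illustrative field and action names:

```typescript
// Deterministic guardrails applied regardless of what the agent recommends.
// Field names and action strings are illustrative.
interface GuardInput {
  sanctionsHit: boolean;
  kycStatus: string;
  recommendedAction: string;
}

function applyGuardrails(g: GuardInput): string {
  // Hard blocks: never delegated to the model.
  if (g.sanctionsHit) return "block_and_escalate";
  if (g.kycStatus === "missing") return "block_pending_kyc";
  // High-impact actions always require a human, whatever the model says.
  const humanOnly = ["wire_recall", "beneficiary_change", "account_restriction"];
  if (humanOnly.includes(g.recommendedAction)) return "require_human_approval";
  return g.recommendedAction;
}
```

Run this gate on the model's output before anything touches case management, so a bad model response can never bypass a hard control.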

Common Pitfalls

  1. Letting the LLM make final decisions on its own

    • Use it for triage and explanation.
    • Keep enforcement rules in deterministic code.
  2. Sending too much client data into prompts

    • Redact account numbers, tax IDs, and notes unrelated to the case.
    • Pass only evidence needed for the specific alert type.
  3. Skipping auditability

    • Store prompt version, model version, tool outputs, timestamps, and final recommendation.
    • Without that trail you cannot defend decisions during internal review or regulatory exams.
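The audit fields listed in pitfall 3 can be captured in a single record type written to append-only storage. A sketch with illustrative identifiers:

```typescript
// Minimal audit record for one agent decision. Field values here are
// illustrative; persist records alongside tool outputs, append-only.
interface AuditRecord {
  caseId: string;
  promptVersion: string;
  modelVersion: string;
  toolCalls: Array<{ name: string; at: string }>;
  recommendation: string;
  decidedAt: string;
}

function buildAuditRecord(caseId: string, recommendation: string): AuditRecord {
  return {
    caseId,
    promptVersion: "fraud-analyst-v3", // pin prompts with code, not ad hoc edits
    modelVersion: "model-2026-04",     // illustrative pinned model identifier
    toolCalls: [{ name: "get_evidence", at: new Date().toISOString() }],
    recommendation,
    decidedAt: new Date().toISOString(),
  };
}
```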


By Cyprian Aarons, AI Consultant at Topiax.
