How to Build a Fraud Detection Agent Using AutoGen in TypeScript for Investment Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: fraud-detection, autogen, typescript, investment-banking

A fraud detection agent in investment banking watches transaction streams, customer activity, and case notes for patterns that look like manipulation, account takeover, wash trading, layering, or suspicious fund movement. It matters because the cost of missing a real event is regulatory exposure, direct losses, and broken trust with counterparties and compliance teams.

Architecture

  • Event ingestion layer
    • Pulls trades, wire transfers, login events, KYC changes, and case management updates from Kafka, S3, or internal APIs.
  • Feature extraction service
    • Normalizes raw events into risk signals like velocity spikes, unusual counterparties, geo anomalies, and sanctions hits.
  • AutoGen agent runtime
    • Coordinates one or more agents using AssistantAgent and UserProxyAgent to triage alerts and produce structured findings.
  • Policy and guardrail layer
    • Enforces compliance rules: no PII leakage, no unsupported conclusions, deterministic thresholds for escalation.
  • Case output sink
    • Writes decisions into the case management system with audit metadata: reason codes, timestamps, model version, and source event IDs.
  • Observability stack
    • Captures prompts, tool calls, outputs, latency, and human overrides for audit and model risk review.
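One way to make the contracts between these layers concrete is to type them up front. The interfaces and the threshold below are illustrative assumptions, not part of AutoGen:

```typescript
// Illustrative contracts between the layers above; names and thresholds are hypothetical.

// Raw event from the ingestion layer (Kafka, S3, internal APIs).
interface RawEvent {
  sourceEventId: string;
  type: string;
  payload: Record<string, unknown>;
  receivedAt: string; // ISO-8601 timestamp, needed for audit ordering
}

// Normalized risk signal emitted by the feature extraction service.
interface RiskSignal {
  sourceEventId: string;
  signal: string; // e.g. "VELOCITY_SPIKE", "GEO_ANOMALY"
  value: string;
  score: number;  // 0-100, computed deterministically so it is reproducible
}

// Minimal feature extractor: flags 24h wire velocity above a fixed USD threshold.
function extractVelocitySignal(event: RawEvent): RiskSignal | null {
  const velocity = Number(event.payload["velocity24hUsd"]);
  if (!Number.isFinite(velocity) || velocity < 10_000_000) return null;
  return {
    sourceEventId: event.sourceEventId,
    signal: "VELOCITY_SPIKE",
    value: String(velocity),
    score: Math.min(100, Math.round(velocity / 200_000)),
  };
}
```

Keeping extraction deterministic like this means the agent only interprets signals; it never computes them, which is what makes its inputs reproducible in an audit.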

Implementation

1) Install AutoGen and define a strict alert schema

For TypeScript projects, use the AutoGen package that exposes AssistantAgent, UserProxyAgent, and OpenAIChatCompletionClient. Keep the output structured from the start. Investment banking teams need traceable outputs that can be stored in a case file without post-processing guesswork.

npm install @autogenai/autogen openai zod

import { z } from "zod";

export const FraudAlertSchema = z.object({
  alertId: z.string(),
  severity: z.enum(["low", "medium", "high", "critical"]),
  summary: z.string(),
  reasonCodes: z.array(z.string()).min(1),
  recommendedAction: z.enum(["monitor", "escalate", "freeze", "review"]),
  evidence: z.array(
    z.object({
      sourceEventId: z.string(),
      signal: z.string(),
      value: z.string()
    })
  ),
});

export type FraudAlert = z.infer<typeof FraudAlertSchema>;

2) Create an AutoGen assistant that reasons over bank events

The pattern here is simple: send a compact event bundle to an AssistantAgent, force a structured response format in the system message, then validate the result before any downstream action. This keeps the agent useful while staying inside compliance boundaries.

import {
  AssistantAgent,
  UserProxyAgent,
  OpenAIChatCompletionClient,
} from "@autogenai/autogen";

const llmClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY!,
});

const fraudAnalyst = new AssistantAgent({
  name: "fraud_analyst",
  modelClient: llmClient,
  systemMessage: `
You are a fraud triage analyst for an investment bank.
Only analyze the provided events.
Do not invent facts.
Return concise JSON with:
alertId, severity, summary, reasonCodes, recommendedAction, evidence.
Use reason codes relevant to AML/fraud monitoring.
`,
});

const orchestrator = new UserProxyAgent({
  name: "fraud_orchestrator",
});

3) Run a detection pass and validate output before escalation

This example shows the actual orchestration pattern. The agent reads transaction features and produces a decision that can be validated against your schema before it touches a case system or triggers an investigator workflow.

async function detectFraud(events: unknown[]): Promise<FraudAlert> {
  const prompt = JSON.stringify({
    alertId: `fraud-${Date.now()}`,
    context: {
      businessLine: "investment_banking",
      jurisdiction: "US",
      complianceConstraints: ["audit_trail_required", "pii_minimization", "data_residency_us"],
    },
    events,
    instruction:
      "Assess whether these events indicate suspicious activity. Return only valid JSON.",
  });

  const result = await orchestrator.initiateChat(fraudAnalyst, prompt);

  const last = (result as any)?.messages?.at(-1)?.content ?? result;
  const content = typeof last === "string" ? last : JSON.stringify(last);

  const parsed = FraudAlertSchema.parse(JSON.parse(content));
  return parsed;
}

async function main() {
  const sampleEvents = [
    {
      sourceEventId: "tx-88321",
      type: "wire_transfer",
      amountUsd: "9850000",
      counterpartyCountry: "KY",
      accountAgeDays: "12",
      velocity24hUsd: "14200000",
    },
    {
      sourceEventId: "login-5512",
      type: "auth_event",
      ipRiskScore: "91",
      geoDistanceKmFromUsualLocation: "1840",
    },
  ];

  const alert = await detectFraud(sampleEvents);
  console.log(alert);
}

main().catch(console.error);

4) Add a deterministic escalation rule outside the model

Do not let the LLM decide everything. In banking workflows, thresholds should remain explicit so auditors can reproduce outcomes. Use the agent for triage and explanation; use code for hard controls.

function shouldFreeze(alert: FraudAlert): boolean {
  return (
    alert.severity === "critical" &&
    (alert.reasonCodes.includes("VELOCITY_SPIKE") ||
      alert.reasonCodes.includes("ACCOUNT_TAKEOVER"))
  );
}

Production Considerations

  • Keep data residency explicit
    • Route EU client data to EU-hosted infrastructure and prevent cross-region prompt logging if your policy requires it.
  • Log every decision path
    • Persist input event IDs, prompt version, model version, output JSON hash, validation result, and human override status.
  • Use human-in-the-loop for material actions
    • Freezing accounts or blocking wires should require investigator approval unless policy allows automated holds on clear threshold breaches.
  • Monitor drift by business line
    • Fraud patterns differ between M&A advisory flows, prime brokerage accounts, treasury services, and syndicated lending.
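The logging requirement above can be sketched as a record type plus a stable hash of the output JSON. The AuditRecord shape is an assumption for illustration; node:crypto provides the hashing:

```typescript
import { createHash } from "node:crypto";

// Hypothetical audit record persisted alongside every decision.
interface AuditRecord {
  alertId: string;
  inputEventIds: string[];
  promptVersion: string;
  modelVersion: string;
  outputHash: string;       // SHA-256 of the canonical output JSON
  validationPassed: boolean;
  humanOverride: boolean;
  recordedAt: string;
}

// Stable hash: sort top-level keys first so semantically equal outputs
// produce identical hashes regardless of key order.
function hashOutput(output: Record<string, unknown>): string {
  const canonical = JSON.stringify(output, Object.keys(output).sort());
  return createHash("sha256").update(canonical).digest("hex");
}
```

Hashing the canonical JSON rather than the raw string means a re-serialized copy of the same alert still matches the stored hash during a review.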

Common Pitfalls

  • Letting the model free-form its answer
    • Avoid this by enforcing a schema with zod and rejecting anything that does not parse cleanly.
  • Sending raw PII into prompts
    • Mask names, account numbers, passport IDs, and addresses unless they are strictly required for the decision.
  • Using the agent as the final authority
    • The agent should triage and explain. Final enforcement must come from deterministic rules plus reviewer approval where required.
  • Ignoring auditability
    • If you cannot reconstruct why an alert was raised six months later during a regulatory review, the design is incomplete.
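The PII pitfall above can be reduced with a masking pass run before prompt assembly. The two patterns below are illustrative only, not a complete PII catalog; a production system needs a vetted detection library:

```typescript
// Minimal masking pass applied to event text before it reaches a prompt.
// Patterns are illustrative; do not treat this as a complete PII catalog.
function maskPII(text: string): string {
  return text
    // Account-number-like digit runs: keep the last 4 for investigator context.
    .replace(/\b\d{8,17}\b/g, (m) => "****" + m.slice(-4))
    // Email addresses.
    .replace(/\b[\w.+-]+@[\w.-]+\.\w{2,}\b/g, "[EMAIL]");
}
```

Running this on evidence values before detectFraud keeps the last-4 digits investigators need while keeping full identifiers out of model logs.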

By Cyprian Aarons, AI Consultant at Topiax.