How to Build a Fraud Detection Agent Using AutoGen in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
Tags: fraud-detection, autogen, typescript, pension-funds

A fraud detection agent for pension funds watches transactions, member profile changes, benefit requests, and advisor activity for patterns that look off. It matters because pension fraud is usually low-and-slow and expensive: a missed beneficiary change or an unauthorized withdrawal can create regulatory exposure, financial loss, and a long audit trail you cannot explain after the fact.

Architecture

  • Event ingestion layer

    • Pulls alerts from transaction systems, CRM updates, claims workflows, and document queues.
    • Normalizes events into a single schema before they hit the agent.
  • Policy and rules service

    • Encodes pension-specific controls like beneficiary change thresholds, early withdrawal flags, address change plus payout combinations, and advisor override limits.
    • Keeps deterministic checks outside the model.
  • AutoGen investigation agent

    • Uses AssistantAgent to analyze the event bundle and produce a risk assessment.
    • Calls tools for history lookup, policy checks, and case creation.
  • Evidence retrieval layer

    • Fetches member history, KYC records, prior claims, device fingerprints, and interaction logs.
    • Keeps the LLM grounded in actual data.
  • Case management sink

    • Writes structured findings to a case system with full traceability.
    • Stores model output, tool outputs, timestamps, and rule hits for audit.
  • Human review gate

    • Escalates high-risk cases to an analyst before any blocking action.
    • Required for regulated workflows where false positives are expensive.
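Before any AutoGen code, it helps to see the layers above as one pipeline with a fixed ordering. A minimal sketch, with all type and function names invented for illustration (none come from AutoGen):

```typescript
// Minimal pipeline sketch: ingest → deterministic rules → agent → sink/gate.
// Types and thresholds are illustrative placeholders, not library APIs.
type NormalizedEvent = { memberId: string; eventType: string; amount?: number };
type RuleResult = { hardStop: boolean; hits: string[] };

function runRules(event: NormalizedEvent): RuleResult {
  const hits: string[] = [];
  if (event.eventType === "withdrawal_request" && (event.amount ?? 0) > 50000) {
    hits.push("large_withdrawal");
  }
  // Hard stops (e.g. a sanctions hit) never reach the model at all.
  return { hardStop: hits.includes("sanctions_hit"), hits };
}

async function pipeline(event: NormalizedEvent) {
  const rules = runRules(event);
  if (rules.hardStop) return { status: "blocked", hits: rules.hits };
  // The AutoGen investigation agent would run here; high-risk output
  // is routed to the human review gate rather than acted on directly.
  return { status: rules.hits.length > 0 ? "escalated" : "cleared", hits: rules.hits };
}
```

The key property is the ordering: deterministic rules run before the model, and hard stops short-circuit the agent entirely.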

Implementation

1) Install AutoGen and define your event model

Use the TypeScript AutoGen package and keep your event schema strict. Pension fund workflows break when you pass loose JSON around.

npm install @autogen/core zod
import { z } from "zod";

export const FraudEventSchema = z.object({
  memberId: z.string(),
  eventType: z.enum([
    "beneficiary_change",
    "address_change",
    "withdrawal_request",
    "advisor_override",
    "bank_account_update"
  ]),
  amount: z.number().optional(),
  country: z.string(),
  timestamp: z.string(),
  metadata: z.record(z.any()).default({})
});

export type FraudEvent = z.infer<typeof FraudEventSchema>;

2) Build tools for policy checks and evidence lookup

The agent should not guess. It should call tools that query real systems or approved read replicas. In AutoGen, expose these as functions the assistant can invoke.

import { AssistantAgent } from "@autogen/core";

// Stub: replace with a query against your member data store or an
// approved read replica. Hardcoded values are for illustration only.
async function getMemberHistory(memberId: string) {
  return {
    priorBeneficiaryChanges: 3,
    priorWithdrawals: 0,
    lastAddressChangeDaysAgo: 2,
    kycStatus: "verified"
  };
}

async function checkPolicy(eventType: string, amount?: number) {
  if (eventType === "withdrawal_request" && (amount ?? 0) > 50000) {
    return { riskRuleHit: true, reason: "Large withdrawal above threshold" };
  }
  if (eventType === "beneficiary_change") {
    return { riskRuleHit: true, reason: "Beneficiary change requires review" };
  }
  return { riskRuleHit: false };
}
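The architecture section mentioned "address change plus payout combinations" as a pension-specific control. A hedged sketch of such a composite rule, with illustrative field names and an assumed 14-day window:

```typescript
// Composite control: a payout request shortly after an address or bank
// account change is a classic pension fraud pattern. Thresholds are examples.
interface MemberProfileHistory {
  lastAddressChangeDaysAgo: number | null;
  lastBankAccountUpdateDaysAgo: number | null;
}

function checkChangeThenPayout(
  eventType: string,
  history: MemberProfileHistory,
  windowDays = 14
): { riskRuleHit: boolean; reason?: string } {
  if (eventType !== "withdrawal_request") return { riskRuleHit: false };
  const recentChange = [
    history.lastAddressChangeDaysAgo,
    history.lastBankAccountUpdateDaysAgo
  ].some((d) => d !== null && d <= windowDays);
  return recentChange
    ? { riskRuleHit: true, reason: `Payout requested within ${windowDays} days of a profile change` }
    : { riskRuleHit: false };
}
```

Rules like this stay in the deterministic policy service, so they fire whether or not the model is available.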

const fraudAgent = new AssistantAgent({
  name: "pension-fraud-investigator",
  modelClient: {
    // plug in your OpenAI-compatible model client here
    createChatCompletion: async () => {
      throw new Error("model client not configured");
    }
  },
});
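One way to fill in the modelClient placeholder is a thin wrapper around an OpenAI-compatible chat completions endpoint. The createChatCompletion shape below simply mirrors the stub above; verify it against the exact interface of the AutoGen package version you install, since this is an assumption, not the package's documented contract:

```typescript
// Hypothetical model client matching the createChatCompletion stub above.
// The request/response shape follows the common OpenAI-compatible convention.
function makeModelClient(baseUrl: string, apiKey: string, model: string) {
  return {
    createChatCompletion: async (messages: { role: string; content: string }[]) => {
      const res = await fetch(`${baseUrl}/chat/completions`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`
        },
        // Low temperature keeps investigation output consistent across runs.
        body: JSON.stringify({ model, messages, temperature: 0 })
      });
      if (!res.ok) throw new Error(`model call failed: ${res.status}`);
      const data = await res.json();
      return data.choices[0].message.content as string;
    }
  };
}
```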

3) Register tools and run an investigation prompt

This is the core pattern. The assistant gets the event plus tool results, then returns a structured recommendation. Keep the output format fixed so downstream systems can parse it.

const tools = {
  getMemberHistory,
  checkPolicy
};

async function investigate(event: FraudEvent) {
  const history = await tools.getMemberHistory(event.memberId);
  const policy = await tools.checkPolicy(event.eventType, event.amount);

  const prompt = `
You are investigating a pension fund fraud alert.

Event:
${JSON.stringify(event, null, 2)}

Member history:
${JSON.stringify(history, null, 2)}

Policy result:
${JSON.stringify(policy, null, 2)}

Return JSON with:
- riskLevel: low | medium | high
- reasons: string[]
- recommendedAction: review | hold | approve
- auditSummary: string
`;

  const result = await fraudAgent.run(prompt);
  return result;
}

If you want stronger control over tool execution in production workflows, use AssistantAgent with explicit tool calling rather than free-form prompting only. The important part is that every decision is backed by retrieved evidence and policy output.

4) Route high-risk cases to humans

Do not let the agent auto-block member benefits without review. Pension operations need a human gate for anything that affects payments or member rights.

async function processFraudAlert(rawEvent: unknown) {
  const event = FraudEventSchema.parse(rawEvent);

  const assessment = await investigate(event);

  // Parse the structured output instead of substring matching, which breaks
  // as soon as the model changes whitespace or key order.
  let requiresReview = true;
  try {
    const parsed = JSON.parse(String(assessment));
    requiresReview = parsed.riskLevel === "high" || parsed.recommendedAction === "hold";
  } catch {
    // Anything we cannot parse is treated as high risk and routed to a human.
  }

  if (requiresReview) {
    return {
      status: "escalated",
      queue: "fraud-review",
      payload: assessment
    };
  }

  return {
    status: "cleared",
    payload: assessment
  };
}

Production Considerations

  • Data residency

    • Keep member data inside approved regions.
    • If your fund operates across jurisdictions, pin model endpoints and vector stores to the correct geography.
  • Auditability

    • Persist every prompt, tool call, policy hit, and final recommendation.
    • Regulators will want to know why a withdrawal was held or why a beneficiary change was escalated.
  • Monitoring

    • riskLevel distribution
    • false positive rate by event type
    • analyst override rate
    • tool failure rate
    • latency per investigation

Track drift by comparing agent recommendations against analyst outcomes over time.

  • Guardrails

    • Limit the agent to read-only access on source systems.
    • Use deterministic rules for hard stops like sanctions hits or missing KYC.
    • Never allow model output to directly trigger payment release.
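The auditability point above implies a concrete record shape persisted per investigation. A sketch with illustrative field names; adapt to your case system's schema:

```typescript
// One audit record per investigation: event, rule hits, tool calls,
// exact prompt, raw model output, and the final action taken.
interface AuditRecord {
  caseId: string;
  memberId: string;
  eventSnapshot: unknown;   // the normalized event as received
  ruleHits: string[];       // deterministic policy hits
  toolCalls: { name: string; args: unknown; result: unknown; at: string }[];
  prompt: string;           // the exact prompt sent to the model
  modelOutput: string;      // raw model response, unedited
  finalAction: "review" | "hold" | "approve" | "blocked";
  createdAt: string;
}

function buildAuditRecord(partial: Omit<AuditRecord, "createdAt">): AuditRecord {
  return { ...partial, createdAt: new Date().toISOString() };
}
```

Storing the raw model output alongside the rule hits is what lets you answer "why was this withdrawal held" months later.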

Common Pitfalls

  1. Using the LLM as the first line of defense

    • Fix it by running rules before model analysis.
    • High-signal controls like duplicate bank account changes should never depend on model judgment.
  2. Skipping provenance on evidence

    • Fix it by attaching source IDs and timestamps to every retrieved record.
    • If an analyst cannot trace where a claim came from, the case is weak in audit.
  3. Ignoring pension-specific edge cases

    • Fix it by encoding scenarios like early retirement withdrawals, spouse consent rules, trustee approvals, and cross-border residency checks.
    • Generic fraud logic misses the operational reality of pension administration.
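The provenance fix in pitfall 2 can be sketched as a small wrapper applied to every evidence lookup; the field names here are illustrative:

```typescript
// Wrap every retrieved evidence item with its source system, record ID,
// and retrieval timestamp so an analyst can trace each claim in the case.
interface Evidence<T> {
  sourceSystem: string; // e.g. "crm", "claims", "kyc"
  recordId: string;
  retrievedAt: string;
  data: T;
}

function withProvenance<T>(sourceSystem: string, recordId: string, data: T): Evidence<T> {
  return { sourceSystem, recordId, retrievedAt: new Date().toISOString(), data };
}
```

Applied at the retrieval layer, this guarantees nothing reaches the prompt or the case file without a traceable origin.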

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
