How to Build a Fraud Detection Agent Using CrewAI in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
fraud-detection · crewai · typescript · pension-funds

A fraud detection agent for pension funds scans member activity, contribution flows, benefit claims, and account changes to flag patterns that look wrong before money moves. For pension administrators, that matters because fraud usually shows up as small anomalies first: duplicate claims, identity changes, unusual withdrawal timing, or account takeover attempts that can slip past rules-based checks.

Architecture

  • Data ingestion layer

    • Pulls structured events from contribution systems, claims platforms, CRM, and IAM logs.
    • Normalizes records into a single case payload the agent can reason over.
  • Risk scoring tool

    • Applies deterministic checks first: velocity, duplicate bank accounts, address churn, suspicious payout timing.
    • Returns a score and explainable reasons for the LLM to use.
  • CrewAI agent

    • Acts as the investigation orchestrator.
    • Summarizes evidence, classifies risk level, and decides whether to escalate to a human reviewer.
  • Task pipeline

    • One task to analyze the case.
    • One task to produce an audit-ready incident note.
    • One task to recommend next actions.
  • Audit log store

    • Persists prompt inputs, tool outputs, model response, timestamps, and reviewer decisions.
    • Required for compliance and post-incident review.
  • Policy guardrail layer

    • Blocks sensitive data leakage.
    • Enforces residency rules and makes sure only approved fields are sent to the model.
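The guardrail layer's "approved fields only" rule can be sketched as a simple allowlist filter applied before any payload is assembled. The field names below are illustrative; align them with your actual case schema.

```typescript
// Illustrative allowlist of fields approved for model input.
const APPROVED_FIELDS = new Set([
  "caseId",
  "eventType",
  "amount",
  "riskSignals",
  "recentEvents",
]);

export function toApprovedPayload(
  record: Record<string, unknown>
): Record<string, unknown> {
  // Drop any field that is not explicitly approved for model input.
  return Object.fromEntries(
    Object.entries(record).filter(([key]) => APPROVED_FIELDS.has(key))
  );
}
```

An allowlist is preferable to a blocklist here: a new CRM field added upstream stays out of prompts by default instead of leaking until someone notices.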

Implementation

1) Install CrewAI for TypeScript and define your case schema

Use a narrow schema. Pension data is sensitive, so do not hand the model raw member profiles when you only need event-level evidence.

Install the dependencies first:

npm install @crewai/crewai zod

Then define the schema:

import { z } from "zod";

export const FraudCaseSchema = z.object({
  caseId: z.string(),
  memberId: z.string(),
  country: z.string(),
  eventType: z.enum(["withdrawal", "benefit_claim", "bank_change", "address_change", "contribution"]),
  amount: z.number().optional(),
  riskSignals: z.array(z.string()),
  recentEvents: z.array(
    z.object({
      ts: z.string(),
      type: z.string(),
      details: z.record(z.any())
    })
  ),
});

export type FraudCase = z.infer<typeof FraudCaseSchema>;

2) Create a deterministic risk tool

This keeps the agent grounded. In pension workflows, you want model reasoning on top of known signals, not instead of them.

import { Tool } from "@crewai/crewai";

export const scoreFraudRisk = new Tool({
  name: "score_fraud_risk",
  description: "Scores pension fraud risk using deterministic signals and returns explainable reasons.",
  func: async (input: string) => {
    const payload = JSON.parse(input) as {
      amount?: number;
      riskSignals: string[];
      recentEvents: Array<{ type: string; details: Record<string, any> }>;
    };

    let score = 0;
    const reasons: string[] = [];

    if (payload.riskSignals.includes("duplicate_bank_account")) {
      score += 35;
      reasons.push("Duplicate bank account detected");
    }

    if (payload.riskSignals.includes("address_churn")) {
      score += 20;
      reasons.push("Frequent address changes");
    }

    if (payload.recentEvents.filter(e => e.type === "bank_change").length > 1) {
      score += 25;
      reasons.push("Multiple bank detail changes in short window");
    }

    if ((payload.amount ?? 0) > 50000) {
      score += 15;
      reasons.push("High-value transaction");
    }

    return JSON.stringify({
      score,
      band: score >= 60 ? "high" : score >= 30 ? "medium" : "low",
      reasons,
    });
  },
});
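Because the signals are deterministic, the scoring rules are easy to unit test if you keep them as a plain function behind the tool wrapper. A minimal sketch, mirroring the weights and bands above (the function name is an assumption, not part of the CrewAI API):

```typescript
interface RiskInput {
  amount?: number;
  riskSignals: string[];
  recentEvents: Array<{ type: string }>;
}

// Pure scoring function mirroring the tool's rules, so the logic can be
// unit tested without instantiating the Tool wrapper or parsing JSON.
export function computeRiskScore(input: RiskInput): {
  score: number;
  band: "low" | "medium" | "high";
} {
  let score = 0;
  if (input.riskSignals.includes("duplicate_bank_account")) score += 35;
  if (input.riskSignals.includes("address_churn")) score += 20;
  if (input.recentEvents.filter(e => e.type === "bank_change").length > 1) score += 25;
  if ((input.amount ?? 0) > 50000) score += 15;
  const band = score >= 60 ? "high" : score >= 30 ? "medium" : "low";
  return { score, band };
}
```

The tool's `func` can then reduce to parse-call-stringify, and your compliance team can review the scoring rules in isolation.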

3) Build the CrewAI agent and tasks

This is the actual pattern you want in production: one specialist agent with tight instructions and explicit outputs. Keep it focused on triage and escalation.

import { Agent, Task, Crew } from "@crewai/crewai";
import { FraudCaseSchema } from "./schema";
import { scoreFraudRisk } from "./tools";

const fraudInvestigator = new Agent({
  role: "Pension Fraud Investigator",
  goal:
    "Assess pension fund activity for potential fraud using evidence-based reasoning and produce an audit-ready recommendation.",
  backstory:
    "You review pension transactions, member profile changes, and claims activity. You never invent facts and always cite tool outputs.",
});

const analyzeTask = new Task({
  description:
    "Analyze the pension fraud case using the provided evidence. Use the risk scoring tool before making a recommendation.",
  expectedOutput:
    "A concise fraud assessment with risk band, key indicators, and whether human review is required.",
  agent: fraudInvestigator,
});

const reportTask = new Task({
  description:
    "Write an audit-ready incident note with facts only. Include case id, summary of indicators, recommended action, and compliance notes.",
  expectedOutput:
    "An incident note suitable for internal audit and operations review.",
  agent: fraudInvestigator,
});

const crew = new Crew({
  agents: [fraudInvestigator],
});

// Example case input
const input = FraudCaseSchema.parse({
  caseId: "CASE-10291",
  memberId: "MEM-77812",
  country: "ZA",
  eventType: "bank_change",
  amount: undefined,
  riskSignals: ["duplicate_bank_account", "address_churn"],
  // recentEvents is required by the schema; omitting it makes parse() throw.
  recentEvents: [
    { ts: "2026-04-18T09:12:00Z", type: "bank_change", details: {} },
    { ts: "2026-04-20T14:03:00Z", type: "bank_change", details: {} },
  ],
});

4) Run the crew and persist the result

In production you should store both the input slice and output. That gives you an audit trail for compliance reviews and dispute handling.

async function runFraudCheck() {
  const result = await crew.kickoff({
    inputs: {
      caseId: input.caseId,
      memberId: input.memberId,
      country: input.country,
      eventType: input.eventType,
      riskSignalsJson: JSON.stringify(input.riskSignals),
      recentEventsJson: JSON.stringify(input.recentEvents),
    },
    tools: [scoreFraudRisk],
    tasksConfig: [analyzeTask.config(), reportTask.config()],
  });

  console.log(result);
}

runFraudCheck().catch(console.error);
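A minimal shape for that audit record might look like the sketch below. The field names and the placeholder model version are assumptions; persist the record to whatever store your compliance team has approved.

```typescript
interface AuditRecord {
  caseId: string;
  timestamp: string;       // ISO 8601, set at decision time
  modelVersion: string;    // pin the exact model version used
  promptInputs: Record<string, unknown>;
  toolOutputs: string[];   // raw JSON strings returned by tools
  decision: "auto_closed" | "escalated";
  reviewerId?: string;     // filled in after human review
}

export function buildAuditRecord(
  caseId: string,
  promptInputs: Record<string, unknown>,
  toolOutputs: string[],
  decision: AuditRecord["decision"]
): AuditRecord {
  return {
    caseId,
    timestamp: new Date().toISOString(),
    modelVersion: "model-version-placeholder", // replace with the real version string
    promptInputs,
    toolOutputs,
    decision,
  };
}
```

Write the record before acting on the decision, so a failed payout block or escalation still leaves a trace.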

Production Considerations

  • Keep data residency local

    • Pension data often has strict residency requirements.
    • Run model inference in-region or use an approved private deployment path.
    • Do not ship full member records across borders just to get a classification.
  • Log everything needed for audit

    • Store prompt inputs, tool outputs, final decision, model version, timestamp, and reviewer ID.
    • Auditors will ask why a case was escalated or closed; your logs need to answer that without reconstructing it manually.
  • Use human-in-the-loop thresholds

    • Auto-close only low-risk cases with strong deterministic evidence.
    • Anything involving benefit payout changes, bank detail updates, or identity mismatch should go to manual review.
  • Mask sensitive fields before the agent sees them

    • Replace names with internal IDs where possible.
    • Redact national ID numbers, full bank details, medical-related benefit metadata, and free-text notes that contain unrelated personal data.
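For free-text notes, the redaction step can be a small ordered list of pattern replacements applied before payload assembly. The patterns below are illustrative only (e.g. a 13-digit national ID format); tune them to the actual formats in your data.

```typescript
// Illustrative redaction patterns, applied in order. Longer, more
// specific patterns go first so they are not partially consumed.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{13}\b/g, "[NATIONAL_ID]"],       // e.g. 13-digit national ID numbers
  [/\b\d{8,12}\b/g, "[ACCOUNT_NUMBER]"],  // bank-account-like digit runs
];

export function maskFreeText(text: string): string {
  return REDACTIONS.reduce(
    (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
    text
  );
}
```

Regex masking is a floor, not a ceiling: names and addresses in free text need dictionary- or NER-based redaction on top of this.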

Common Pitfalls

  • Sending too much raw PII into the model

    • Fix it by building a minimal case payload.
    • The agent needs signals and context, not full CRM dumps.
  • Using the LLM as the primary detector

    • Fix it by putting deterministic scoring first.
    • Let CrewAI handle reasoning and summarization after rules-based filters have done their job.
  • Skipping audit-grade outputs

    • Fix it by forcing structured incident notes with clear evidence references.
    • In pension operations, “the model said so” is not a defensible control position.
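One way to force audit-grade structure is to validate the agent's note against an explicit shape before persisting it, and reject anything without cited evidence. A sketch in plain TypeScript; the field names are assumptions, not CrewAI output conventions:

```typescript
export interface IncidentNote {
  caseId: string;
  riskBand: "low" | "medium" | "high";
  indicators: string[];      // must cite tool-reported reasons
  recommendedAction: string;
  requiresHumanReview: boolean;
}

// Reject notes that lack cited evidence or a concrete action;
// "the model said so" is not a defensible control position.
export function isAuditReady(note: IncidentNote): boolean {
  return (
    note.caseId.length > 0 &&
    note.indicators.length > 0 &&
    note.recommendedAction.length > 0
  );
}
```

Notes that fail the check can be retried with a corrective prompt or routed straight to manual review rather than stored as-is.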

By Cyprian Aarons, AI Consultant at Topiax.