How to Build a Fraud Detection Agent Using AutoGen in TypeScript for Lending

By Cyprian Aarons · Updated 2026-04-21
Tags: fraud-detection · autogen · typescript · lending

A fraud detection agent for lending reviews applications, flags suspicious patterns, and routes high-risk cases for human review before money moves. It matters because lending fraud shows up as synthetic identities, income inflation, document tampering, and collusive behavior, and a missed case becomes a direct credit loss plus compliance exposure.

Architecture

  • Input adapter
    • Normalizes application payloads from LOS, CRM, document OCR, and bureau pulls into one schema.
  • Risk rule layer
    • Applies deterministic checks first: identity mismatch, velocity signals, device reuse, duplicate SSNs, income-to-loan anomalies.
  • AutoGen agent orchestrator
    • Uses AssistantAgent to reason over structured evidence and produce a fraud assessment with rationale.
  • Tool layer
    • Exposes callable functions for bureau lookup, KYC status, sanctions screening, and internal case history.
  • Case decision service
    • Converts the agent output into approve, review, or decline with thresholds that compliance can audit.
  • Audit sink
    • Persists prompts, tool calls, model outputs, and final decisions for model risk management and regulatory review.
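The input adapter's job can be sketched as a pure mapping from one upstream payload shape into the canonical schema. The raw field names below (`app_id`, `applicant_name`, `req_amt`, `monthly_income`) are illustrative assumptions about an LOS payload, not a real integration contract:

```typescript
// Sketch of the input adapter: map one hypothetical LOS payload into a
// canonical application shape. Raw field names are illustrative only.
type CanonicalApplication = {
  applicationId: string;
  fullName: string;
  requestedAmount: number;
  declaredMonthlyIncome: number;
};

function fromLosPayload(raw: Record<string, unknown>): CanonicalApplication {
  // Coerce defensively: upstream systems routinely send strings where
  // numbers are expected, and fraud rules should see a stable schema.
  const num = (v: unknown): number =>
    typeof v === "number" && Number.isFinite(v) ? v : 0;
  return {
    applicationId: String(raw["app_id"] ?? ""),
    fullName: String(raw["applicant_name"] ?? "").trim(),
    requestedAmount: num(raw["req_amt"]),
    declaredMonthlyIncome: num(raw["monthly_income"]),
  };
}
```

One adapter per source (LOS, CRM, OCR, bureau) keeps source-specific quirks out of the rule layer.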

Implementation

1) Install AutoGen and define the lending case schema

Use the TypeScript package for AutoGen and keep the application data explicit. In lending, vague free-text inputs create audit problems fast.

npm install @autogenai/agent

import { AssistantAgent } from "@autogenai/agent";

type LendingApplication = {
  applicationId: string;
  fullName: string;
  dateOfBirth: string;
  ssnLast4: string;
  email: string;
  phone: string;
  address: string;
  requestedAmount: number;
  declaredMonthlyIncome: number;
  employerName: string;
  bureauScore?: number;
  deviceFingerprint?: string;
};

type FraudDecision = {
  riskLevel: "low" | "medium" | "high";
  action: "approve" | "review" | "decline";
  reasons: string[];
};

2) Add deterministic checks before the agent reasons

Don’t ask the model to rediscover obvious rules. In lending fraud workflows, deterministic controls should front-run LLM reasoning so you have stable behavior and cleaner audits.

function preScreen(app: LendingApplication): string[] {
  const findings: string[] = [];

  if (app.declaredMonthlyIncome <= 0) {
    findings.push("Declared income is invalid.");
  }

  const incomeToAmountRatio = app.requestedAmount / Math.max(app.declaredMonthlyIncome * 12, 1);
  if (incomeToAmountRatio > 8) {
    findings.push("Requested amount is high relative to stated income.");
  }

  if (app.bureauScore !== undefined && app.bureauScore < 580) {
    findings.push("Low bureau score.");
  }

  if (app.email.endsWith("@mailinator.com") || app.email.endsWith("@tempmail.com")) {
    findings.push("Disposable email domain.");
  }

  return findings;
}
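The risk rule layer described earlier also mentions velocity signals and device reuse, which `preScreen` does not cover because they need state across applications. A minimal sketch, using an in-memory map purely for illustration (a real deployment would back this with a shared store such as Redis):

```typescript
// Sketch of a stateful device-reuse check to complement preScreen.
// The in-memory map is an illustrative stand-in for a shared store.
const seenDevices = new Map<string, string[]>(); // fingerprint -> application IDs

function velocityCheck(appId: string, deviceFingerprint?: string): string[] {
  const findings: string[] = [];
  if (!deviceFingerprint) return findings;

  const prior = seenDevices.get(deviceFingerprint) ?? [];
  if (prior.length > 0 && !prior.includes(appId)) {
    // Same device submitting multiple applications is a classic
    // synthetic-identity and collusion signal in lending.
    findings.push(`Device reused across ${prior.length + 1} applications.`);
  }
  seenDevices.set(deviceFingerprint, [...prior, appId]);
  return findings;
}
```

Concatenate these findings with the `preScreen` output before building the agent prompt.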

3) Create an AssistantAgent that returns a structured fraud assessment

The key pattern is to constrain the agent to a narrow task: assess evidence and return a decision with reasons. Keep the prompt focused on lending controls like identity consistency, synthetic identity signals, document anomalies, and auditability.

const fraudAgent = new AssistantAgent({
  name: "lending_fraud_agent",
  systemMessage: `
You are a lending fraud detection analyst.
Assess application evidence for fraud risk using only the provided data and tool results.
Return concise reasoning tied to observable signals.
Consider synthetic identity risk, identity mismatch, velocity indicators,
document inconsistency, bureau anomalies, and policy violations.
Do not invent facts. If evidence is insufficient, recommend human review.
`,
});

4) Run the assessment and map it to an operational decision

This is where you turn model output into something your underwriting or fraud ops team can consume. Keep the final decision logic outside the model so compliance can change thresholds without retraining anything.

async function assessFraud(app: LendingApplication): Promise<FraudDecision> {
  const findings = preScreen(app);

  const prompt = `
Application:
${JSON.stringify(app, null, 2)}

Deterministic findings:
${JSON.stringify(findings, null, 2)}

Task:
Return JSON with keys riskLevel (low|medium|high), action (approve|review|decline), reasons (string[]).
`;

  const result = await fraudAgent.run(prompt);

  const text = typeof result === "string" ? result : JSON.stringify(result);

  // Minimal parser pattern; in production use strict JSON schema validation.
  let parsed: FraudDecision;
  try {
    parsed = JSON.parse(text) as FraudDecision;
  } catch {
    // Unparseable model output is itself a signal: fail closed to review.
    return {
      riskLevel: "medium",
      action: "review",
      reasons: [...findings, "Agent output could not be parsed."],
    };
  }

  return {
    riskLevel: parsed.riskLevel,
    action:
      parsed.riskLevel === "high"
        ? "decline"
        : parsed.riskLevel === "medium"
          ? "review"
          : "approve",
    reasons: [...findings, ...parsed.reasons],
  };
}
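The parser comment above calls for strict validation before any downstream action. A dependency-free sketch of that validator, which rejects anything that is not exactly the expected shape so callers can fall back to human review:

```typescript
// Strict, dependency-free validator for the agent's JSON output.
// Returns null on any violation; callers should route nulls to review.
type FraudDecision = {
  riskLevel: "low" | "medium" | "high";
  action: "approve" | "review" | "decline";
  reasons: string[];
};

function parseFraudDecision(text: string): FraudDecision | null {
  let raw: unknown;
  try {
    raw = JSON.parse(text);
  } catch {
    return null;
  }
  if (typeof raw !== "object" || raw === null) return null;
  const o = raw as Record<string, unknown>;
  const riskLevels: readonly string[] = ["low", "medium", "high"];
  const actions: readonly string[] = ["approve", "review", "decline"];
  if (typeof o.riskLevel !== "string" || !riskLevels.includes(o.riskLevel)) return null;
  if (typeof o.action !== "string" || !actions.includes(o.action)) return null;
  if (!Array.isArray(o.reasons) || !o.reasons.every((r) => typeof r === "string")) return null;
  return { riskLevel: o.riskLevel, action: o.action, reasons: o.reasons } as FraudDecision;
}
```

A schema library such as Zod would serve the same purpose with less code; the point is that no unvalidated model output reaches the decision service.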

async function main() {
  const app: LendingApplication = {
    applicationId: "APP-10021",
    fullName: "Jordan Lee",
    dateOfBirth: "1991-04-18",
    ssnLast4: "4821",
    email: "jordan.lee@example.com",
    phone: "+14155550177",
    address: "88 Market St, San Francisco, CA",
    requestedAmount: 25000,
    declaredMonthlyIncome: 2000,
    employerName: "Acme Logistics",
    bureauScore: 552,
    deviceFingerprint: "dfp_9f81a7",
  };

  const decision = await assessFraud(app);
  console.log(decision);
}

main().catch(console.error);

Production Considerations

  • Deploy in-region

Use a region-bound runtime if your lending stack has residency requirements. Keep PII inside approved jurisdictions and avoid sending raw documents across borders.

  • Log every decision path

Persist input hashes, deterministic rule hits, agent prompt version, tool outputs, and final disposition. This is what you need when a borrower disputes an adverse action or compliance asks why a file was declined.
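A minimal sketch of such an audit record, assuming a SHA-256 input hash and a `promptVersion` string you maintain alongside your prompts (the field names and persistence target are illustrative):

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record for one decision path. Wire the result to
// your audit sink; field names here are assumptions, not a standard.
type AuditRecord = {
  applicationIdHash: string; // hash rather than raw ID for lower-trust sinks
  promptVersion: string;
  ruleHits: string[];
  disposition: "approve" | "review" | "decline";
  decidedAt: string;
};

function buildAuditRecord(
  applicationId: string,
  promptVersion: string,
  ruleHits: string[],
  disposition: AuditRecord["disposition"],
): AuditRecord {
  return {
    applicationIdHash: createHash("sha256").update(applicationId).digest("hex"),
    promptVersion,
    ruleHits,
    disposition,
    decidedAt: new Date().toISOString(),
  };
}
```

Hashing the application ID lets you join audit rows back to the source system without replicating PII into the log store.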

  • Put hard guardrails around outcomes

The agent should never auto-decline on its own. For borderline cases use review, then let policy engines or human investigators make the final call.
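That guardrail is simple to enforce in the case decision service. A sketch, assuming the count of deterministic rule hits is available at decision time:

```typescript
// Outcome guardrail sketch: the agent may *suggest* decline, but the
// service downgrades a model-only decline to human review. The rule-hit
// threshold is illustrative policy, not a recommendation.
type Action = "approve" | "review" | "decline";

function applyGuardrail(suggested: Action, deterministicHits: number): Action {
  if (suggested === "decline" && deterministicHits === 0) {
    // No deterministic evidence backs the decline: force a human in.
    return "review";
  }
  return suggested;
}
```

Because this lives outside the model, compliance can tighten or loosen it without touching prompts.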

  • Monitor drift by segment

Track false positives by channel such as brokered loans, mobile apps, thin-file borrowers, and refinance flows. Fraud patterns change by product line faster than generic metrics will show it.
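Segment-level tracking can start as simply as counting flagged cases and confirmed fraud per segment. A sketch with an in-memory map standing in for your metrics store; segment names and the false-positive definition (flagged but not confirmed) are illustrative:

```typescript
// Per-segment drift monitoring sketch. A flagged case is recorded once
// its investigation outcome is known; FPR = (flagged - confirmed) / flagged.
type SegmentStats = { flagged: number; confirmedFraud: number };

const statsBySegment = new Map<string, SegmentStats>();

function recordFlaggedCase(segment: string, confirmedFraud: boolean): void {
  const s = statsBySegment.get(segment) ?? { flagged: 0, confirmedFraud: 0 };
  s.flagged += 1;
  if (confirmedFraud) s.confirmedFraud += 1;
  statsBySegment.set(segment, s);
}

function falsePositiveRate(segment: string): number | null {
  const s = statsBySegment.get(segment);
  if (!s || s.flagged === 0) return null; // no data yet for this segment
  return (s.flagged - s.confirmedFraud) / s.flagged;
}
```

Alert when a segment's rate drifts from its own baseline rather than from a global average, since broker and mobile channels behave very differently.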

Common Pitfalls

  1. Letting the LLM replace rules

    • Mistake: asking the agent to do everything from scratch.
    • Fix: run deterministic checks first and use the model for synthesis and explanation.
  2. Using unstructured outputs in production

    • Mistake: accepting free-form text as a decision.
    • Fix: require JSON output and validate it before any downstream action.
  3. Ignoring compliance artifacts

    • Mistake: storing only the final score.
    • Fix: store prompts, tool calls, timestamps, model versioning, and rationale so adverse action reviews are defensible.
  4. Over-sharing PII with tools

    • Mistake: passing full application payloads to every function call.
    • Fix: minimize fields per tool invocation and redact where possible; keep SSNs, bank details, and document images tightly scoped.
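The fix for pitfall 4 can be made mechanical with a per-tool field allowlist in the dispatcher. A sketch; the tool names and their field lists are illustrative assumptions:

```typescript
// Per-tool PII minimization sketch: each tool declares the fields it
// needs and the dispatcher sends only those. Allowlists are illustrative.
type LendingApplication = {
  applicationId: string;
  fullName: string;
  ssnLast4: string;
  email: string;
  requestedAmount: number;
};

const toolFieldAllowlist: Record<string, (keyof LendingApplication)[]> = {
  sanctionsScreen: ["fullName"],
  kycStatus: ["applicationId", "ssnLast4"],
};

function minimizeForTool(
  tool: string,
  app: LendingApplication,
): Partial<LendingApplication> {
  // Unknown tools get nothing: fail closed rather than over-share.
  const allowed = toolFieldAllowlist[tool] ?? [];
  const out: Partial<LendingApplication> = {};
  for (const key of allowed) {
    (out as Record<string, unknown>)[key] = app[key];
  }
  return out;
}
```

An allowlist also doubles as documentation of which tools ever see which PII, which is useful in privacy reviews.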

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

