How to Build a Fraud Detection Agent for Lending Using CrewAI in TypeScript
A fraud detection agent for lending triages loan applications, flags suspicious patterns, and produces an auditable risk summary for underwriters or automated decisioning systems. It matters because fraud in lending is not just direct loan loss: it drives compliance exposure, bad credit decisions, and operational drag when weak signals are handled manually.
Architecture
- Application intake service: pulls borrower data from the LOS, KYC provider, bureau response, device fingerprinting, and bank statement parser.
- Fraud analysis agent: uses a CrewAI `Agent` to inspect the application for identity mismatch, document anomalies, velocity patterns, synthetic identity signals, and income inconsistencies.
- Task orchestration layer: uses CrewAI `Task` and `Crew` to split analysis into discrete checks with a final consolidated verdict.
- Evidence retrieval tools: custom TypeScript tools that query internal policy rules, applicant history, sanctions/watchlist results, and prior applications.
- Decision output service: writes structured results (`approve`, `manual_review`, or `reject`, plus reasons and evidence) back to your lending workflow.
- Audit and monitoring pipeline: stores prompts, tool outputs, model version, timestamps, and final recommendation for compliance review.
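The evidence retrieval tools in this architecture can be plain TypeScript functions before they are wrapped as agent tools. Here is a minimal sketch of one such check, a velocity signal against prior applications; the `PriorApplication` shape, the helper name, and the 30-day window are illustrative assumptions, not part of any CrewAI API:

```typescript
// Hypothetical record of an earlier application pulled from your own store.
interface PriorApplication {
  applicationId: string;
  email: string;
  deviceId?: string;
  submittedAtUtc: string; // ISO timestamp
}

// Count prior applications inside the lookback window that reuse the
// current applicant's email or device -- a common velocity-abuse signal.
export function velocitySignals(
  current: { email: string; deviceId?: string },
  history: PriorApplication[],
  windowDays = 30,
  nowUtc: Date = new Date(),
): { matches: number; evidence: string[] } {
  const cutoff = nowUtc.getTime() - windowDays * 24 * 60 * 60 * 1000;
  const evidence: string[] = [];
  for (const prior of history) {
    if (new Date(prior.submittedAtUtc).getTime() < cutoff) continue;
    if (prior.email === current.email) {
      evidence.push(`email reused in ${prior.applicationId}`);
    } else if (prior.deviceId && prior.deviceId === current.deviceId) {
      evidence.push(`device reused in ${prior.applicationId}`);
    }
  }
  return { matches: evidence.length, evidence };
}
```

Because the function is deterministic and returns evidence strings, its output can be passed to the agent as context and quoted verbatim in the final verdict.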
Implementation
1. Install CrewAI for TypeScript and define your data contracts.

   Keep the input strict. Lending systems fail when the agent receives loosely typed JSON with missing fields or inconsistent identifiers.

   ```shell
   npm install @crewai/core zod
   ```

   ```typescript
   import { z } from "zod";

   export const LendingApplicationSchema = z.object({
     applicationId: z.string(),
     fullName: z.string(),
     dateOfBirth: z.string(),
     ssnLast4: z.string().optional(),
     email: z.string().email(),
     phone: z.string().optional(),
     address: z.string(),
     employerName: z.string().optional(),
     monthlyIncome: z.number(),
     requestedAmount: z.number(),
     deviceId: z.string().optional(),
     ipAddress: z.string().optional(),
     bureauScore: z.number().optional(),
     bankStatementRiskFlags: z.array(z.string()).default([]),
   });
   export type LendingApplication = z.infer<typeof LendingApplicationSchema>;

   export const FraudVerdictSchema = z.object({
     decision: z.enum(["approve", "manual_review", "reject"]),
     riskScore: z.number().min(0).max(100),
     reasons: z.array(z.string()),
     evidence: z.array(z.string()),
   });
   export type FraudVerdict = z.infer<typeof FraudVerdictSchema>;
   ```

2. Create a fraud analyst agent with explicit lending instructions.

   The agent should not "decide creditworthiness." It should only assess fraud indicators and return a defensible recommendation.

   ```typescript
   import { Agent } from "@crewai/core";

   export const fraudAnalyst = new Agent({
     role: "Fraud Detection Analyst",
     goal: "Identify fraud indicators in lending applications and produce an auditable risk assessment.",
     backstory:
       "You are a senior lending fraud analyst focused on identity theft, synthetic identity, document manipulation, velocity abuse, and application inconsistency.",
     verbose: true,
     allowDelegation: false,
   });
   ```

3. Define tasks that separate checks from final judgment.

   This pattern is better than one giant prompt. It makes the output easier to audit and easier to tune when compliance asks why a case was flagged.

   ```typescript
   import { Task } from "@crewai/core";
   import { fraudAnalyst } from "./agent";

   export function buildFraudTasks(applicationJson: string) {
     const reviewIdentity = new Task({
       description: `Review this lending application for identity fraud signals:\n${applicationJson}\n\nCheck for mismatched identity fields, suspicious contact data, synthetic identity patterns, and inconsistent residence/employment details.`,
       expectedOutput: "A concise list of identity-related fraud concerns with evidence.",
       agent: fraudAnalyst,
     });

     const reviewBehavior = new Task({
       description: `Review this lending application for behavioral fraud signals:\n${applicationJson}\n\nCheck for velocity abuse, device reuse risk, IP anomalies, repeated attributes across applications, and bank statement inconsistencies.`,
       expectedOutput: "A concise list of behavioral fraud concerns with evidence.",
       agent: fraudAnalyst,
       context: [reviewIdentity],
     });

     const finalVerdict = new Task({
       description:
         "Combine the prior findings into a final lending fraud verdict using approve/manual_review/reject. Include a numeric risk score from 0 to 100.",
       expectedOutput: "Structured verdict with decision, risk score, reasons, and evidence.",
       agent: fraudAnalyst,
       context: [reviewIdentity, reviewBehavior],
       outputJsonSchema: {
         type: "object",
         properties: {
           decision: { type: "string" },
           riskScore: { type: "number" },
           reasons: { type: "array", items: { type: "string" } },
           evidence: { type: "array", items: { type: "string" } },
         },
         required: ["decision", "riskScore", "reasons", "evidence"],
       },
     });

     return [reviewIdentity, reviewBehavior, finalVerdict];
   }
   ```

4. Run the crew and enforce validation before writing back to the LOS.

   Validate inputs first. Then execute the crew. Then validate the model's output again before you persist anything into underwriting workflows.

   ```typescript
   import { Crew } from "@crewai/core";
   import { LendingApplicationSchema, FraudVerdictSchema } from "./schemas";
   import { buildFraudTasks } from "./tasks";

   export async function assessFraud(rawApplication: unknown) {
     // Validate inbound data before it reaches the model.
     const application = LendingApplicationSchema.parse(rawApplication);
     const applicationJson = JSON.stringify(application);

     const crew = new Crew({
       agents: [], // agents are attached through tasks
       tasks: buildFraudTasks(applicationJson),
       verbose: true,
     });

     const result = await crew.kickoff();

     // Validate the model's output again before persisting anything;
     // parse string results as JSON first in case the runtime returns text.
     const raw = typeof result === "string" ? JSON.parse(result) : result;
     const parsed = FraudVerdictSchema.parse(raw);

     return {
       ...parsed,
       applicationId: application.applicationId,
       reviewedAtUtc: new Date().toISOString(),
     };
   }
   ```
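Even after schema validation, the verdict should pass through deterministic post-processing before it reaches the LOS, so the model alone can never fully clear a risky application. A minimal sketch; the `reviewThreshold` value and the escalation rule are illustrative assumptions, and the verdict shape is restated inline to keep the example self-contained:

```typescript
type Decision = "approve" | "manual_review" | "reject";

interface Verdict {
  decision: Decision;
  riskScore: number;
  reasons: string[];
  evidence: string[];
}

// Deterministic guardrail: clamp the score into range and escalate any
// "approve" whose risk score crosses the review threshold.
export function routeVerdict(v: Verdict, reviewThreshold = 40): Verdict {
  const riskScore = Math.min(100, Math.max(0, v.riskScore));
  let decision = v.decision;
  if (decision === "approve" && riskScore >= reviewThreshold) {
    decision = "manual_review"; // high score overrides a model approve
  }
  return { ...v, decision, riskScore };
}
```

Keeping this rule in plain code rather than in the prompt means compliance can review and version it like any other policy.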
Production Considerations
- Keep PII inside your boundary: for lending workloads, run the model in an environment that matches your data residency requirements. Do not send raw SSNs or full bank statements to external systems unless your legal/compliance team has approved it.
- Log every decision path: store prompt text versions, tool responses, model version IDs if available, task outputs, timestamps, and final disposition. Auditability matters when adverse action or fair lending reviews happen.
- Use hard guardrails on recommendations: the agent should return only `approve`, `manual_review`, or `reject`. Final credit decisions should remain rule-based or human-approved unless your governance model explicitly allows automation.
- Monitor drift by segment: track false positives by product type, channel, geography, income band, and device source. Fraud patterns change quickly in unsecured personal loans and thin-file borrowers.
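Segment-level drift tracking can start as a simple aggregation over labeled outcomes. A sketch, assuming a `LabeledCase` shape that joins the agent's flag to the confirmed-fraud label produced by later investigation (both names are hypothetical):

```typescript
interface LabeledCase {
  segment: string;         // e.g. product type, channel, or geography
  flagged: boolean;        // agent recommended manual_review or reject
  confirmedFraud: boolean; // ground truth after investigation
}

// False positive rate per segment: flagged cases that turned out clean,
// divided by all clean cases in that segment.
export function falsePositiveRateBySegment(
  cases: LabeledCase[],
): Record<string, number> {
  const clean: Record<string, number> = {};
  const falsePos: Record<string, number> = {};
  for (const c of cases) {
    if (c.confirmedFraud) continue; // only clean cases enter the denominator
    clean[c.segment] = (clean[c.segment] ?? 0) + 1;
    if (c.flagged) falsePos[c.segment] = (falsePos[c.segment] ?? 0) + 1;
  }
  const rates: Record<string, number> = {};
  for (const seg of Object.keys(clean)) {
    rates[seg] = (falsePos[seg] ?? 0) / clean[seg];
  }
  return rates;
}
```

Run this over a rolling window and alert when any segment's rate moves outside its historical band.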
Common Pitfalls
- Mixing fraud detection with credit underwriting: fraud is not affordability. Keep this agent focused on misrepresentation and suspicious behavior; do not let it infer protected-class proxies or make affordability judgments.
- Skipping structured outputs: free-form text is hard to audit and harder to automate. Force JSON output with a schema so downstream systems can validate it before actioning the result.
- Passing raw unbounded context into one task: large blobs of applicant data create noisy outputs. Split checks into identity, behavior, then synthesis so each step has a narrow job.
- Ignoring explainability requirements: if you cannot point to specific evidence for a rejection or manual review queue entry at audit time, you have built a liability instead of a control.
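One cheap control against the explainability pitfall is to refuse to persist any adverse recommendation that arrives without supporting evidence. A sketch, with the verdict shape restated inline and the fallback reason text chosen for illustration:

```typescript
interface FraudVerdictShape {
  decision: "approve" | "manual_review" | "reject";
  riskScore: number;
  reasons: string[];
  evidence: string[];
}

// Force any evidence-free adverse recommendation into manual review,
// and record why, so the audit trail explains the downgrade.
export function enforceExplainability(v: FraudVerdictShape): FraudVerdictShape {
  const adverse = v.decision === "reject" || v.decision === "manual_review";
  if (adverse && v.evidence.length === 0) {
    return {
      ...v,
      decision: "manual_review",
      reasons: [...v.reasons, "adverse recommendation lacked evidence"],
    };
  }
  return v;
}
```

A human reviewer then sees exactly which cases the model could not justify, instead of those cases silently becoming rejections.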
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.