How to Build a Fraud Detection Agent Using CrewAI in TypeScript for Retail Banking
A fraud detection agent for retail banking watches transaction streams, account context, and customer behavior, then flags suspicious activity with a reasoned explanation. It matters because banks need faster triage without turning every anomaly into a false positive that burns ops time and frustrates customers.
Architecture
- Transaction intake service: pulls card swipes, ACH transfers, wire events, login events, and beneficiary changes from your event bus.
- Risk enrichment layer: adds customer tenure, device fingerprint, geo distance, historical spend profile, velocity features, and sanctions/PEP signals.
- CrewAI agent: uses a `Task` to evaluate each event and produce a structured fraud assessment with rationale and next action.
- Tooling layer: exposes internal services as tools: customer lookup, transaction history search, case management write-back, and policy lookup.
- Decision orchestrator: converts the agent output into one of three actions: approve, step-up verify, or open fraud case.
- Audit and compliance sink: stores prompts, tool calls, outputs, model version, and final decision for review under bank retention rules.
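To make the pipeline concrete, the enriched event that flows from the intake and enrichment layers into the agent might look like this. All field names here are illustrative assumptions, not a fixed schema:

```typescript
// Illustrative shape of an enriched event after intake + enrichment.
// Identifiers are tokenized so the agent never sees raw account numbers.
interface EnrichedBankingEvent {
  eventId: string;
  customerId: string;        // tokenized customer reference
  accountId: string;         // tokenized account reference
  channel: "card" | "ach" | "wire" | "online_banking";
  amountZAR: number;
  countryMismatch: boolean;  // geo-distance signal from enrichment
  velocityLast1h: "low" | "medium" | "high";
  deviceTrustScore: number;  // 0..1 from device fingerprinting
}

const sampleEvent: EnrichedBankingEvent = {
  eventId: "EVT_001",
  customerId: "CUST_10291",
  accountId: "ACC_88310",
  channel: "online_banking",
  amountZAR: 18450,
  countryMismatch: true,
  velocityLast1h: "high",
  deviceTrustScore: 0.71,
};
```

Everything downstream (the agent, the orchestrator, the audit sink) consumes this one shape, which keeps the contract between layers explicit.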
Implementation
1) Install the TypeScript stack
Use the TypeScript CrewAI package plus a model provider SDK. In production I usually keep the agent thin and push all bank-specific logic into tools.
```bash
npm install @crewaii/crewai openai zod
npm install -D typescript ts-node @types/node
```
If your package name differs in your internal registry, keep the same pattern: Agent, Task, Crew, Process, and Tool are the core abstractions you want.
2) Define bank-safe tools
The agent should not query raw databases directly. Wrap internal services in tools so you can enforce authZ, masking, residency controls, and audit logging before the model sees anything.
```typescript
import { z } from "zod";
import { Tool } from "@crewaii/crewai";

export const getCustomerProfile = new Tool({
  name: "get_customer_profile",
  description: "Fetches masked retail banking customer profile for fraud review",
  schema: z.object({
    customerId: z.string(),
  }),
  execute: async ({ customerId }) => {
    // Replace with your internal API call
    return {
      customerId,
      tenureMonths: 42,
      kycRiskTier: "medium",
      homeCountry: "ZA",
      recentLoginFailures24h: 3,
      deviceTrustScore: 0.71,
    };
  },
});

export const getTransactionHistory = new Tool({
  name: "get_transaction_history",
  description: "Returns recent transaction aggregates for anomaly detection",
  schema: z.object({
    accountId: z.string(),
    daysBack: z.number().default(30),
  }),
  execute: async ({ accountId, daysBack }) => {
    // Replace with your internal API call
    return {
      accountId,
      daysBack,
      avgDailySpend: 840.25,
      maxSingleTxn: 2500,
      cashWithdrawalCount7d: 6,
      newBeneficiaryAdded24h: true,
    };
  },
});
```
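The masking mentioned above has to happen inside the tool boundary, before anything reaches the prompt. A minimal sketch of a masking helper (the function name and format are illustrative assumptions, not part of any library):

```typescript
// Hypothetical masking helper a tool would apply before returning data,
// so the model only ever sees the last four digits of an identifier.
function maskAccountNumber(accountNumber: string): string {
  const last4 = accountNumber.slice(-4);
  return `****${last4}`;
}
```

For example, `maskAccountNumber("6211234567894321")` yields `****4321`. In a real deployment you would run every sensitive string field through a mask like this inside `execute`, alongside authZ checks and audit logging.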
3) Build the fraud agent and task
This is the core pattern. The agent receives enriched context, calls approved tools if needed, then returns a structured assessment that your orchestration layer can consume.
```typescript
import OpenAI from "openai";
import { Agent, Crew, Process, Task } from "@crewaii/crewai";
import { getCustomerProfile, getTransactionHistory } from "./tools";

const llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

const fraudAgent = new Agent({
  role: "Retail Banking Fraud Analyst",
  goal: "Assess transaction risk using customer context and bank policy while minimizing false positives",
  backstory:
    "You review suspicious retail banking activity. You must be precise, conservative on compliance risk, and always explain why an event is risky.",
  llm,
  tools: [getCustomerProfile, getTransactionHistory],
});

const fraudTask = new Task({
  description: `
Review this retail banking event:
- customerId: CUST_10291
- accountId: ACC_88310
- amountZAR: 18450
- channel: online_banking
- countryMismatch: true
- beneficiaryAgeMinutes: 12
- velocityLast1h: high

Return JSON with:
- riskScore (0-100)
- decision (approve | step_up | block_and_case)
- reasons (array of strings)
- recommendedAction (string)
`,
  expectedOutput:
    "Strict JSON with riskScore, decision, reasons, recommendedAction",
});

async function runFraudReview() {
  const crew = new Crew({
    agents: [fraudAgent],
    tasks: [fraudTask],
    process: Process.sequential,
    verbose: true,
  });
  const result = await crew.kickoff();
  console.log(result);
}

runFraudReview().catch(console.error);
```
4) Parse the decision and route it to banking systems
Do not let the LLM make final irreversible decisions by itself. Treat its output as an assessment signal that your policy engine validates before action.
```typescript
// Matches the four fields the task prompt requests.
type FraudDecision = {
  riskScore: number;
  decision: "approve" | "step_up" | "block_and_case";
  reasons: string[];
  recommendedAction: string;
};

function routeDecision(decisionText: string) {
  const parsed = JSON.parse(decisionText) as FraudDecision;
  if (parsed.riskScore >= 85 || parsed.decision === "block_and_case") {
    return { action: "OPEN_FRAUD_CASE", queue: "fraud_ops" };
  }
  if (parsed.riskScore >= 60 || parsed.decision === "step_up") {
    return { action: "STEP_UP_AUTH", method: "otp_or_app_push" };
  }
  return { action: "APPROVE", queue: "posting" };
}
```
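A bare `JSON.parse` on model output is fragile: models occasionally wrap JSON in prose, drop fields, or emit out-of-range values. A minimal validation guard is sketched below with a hand-rolled type check so it stands alone; in the real stack you would more likely reuse the zod schemas from step 2. The function name is an assumption:

```typescript
type FraudDecision = {
  riskScore: number;
  decision: "approve" | "step_up" | "block_and_case";
  reasons: string[];
  recommendedAction: string;
};

// Returns null on anything malformed so bad model output can never
// auto-approve; route nulls to a manual review queue instead.
function parseDecision(raw: string): FraudDecision | null {
  let obj: unknown;
  try {
    obj = JSON.parse(raw);
  } catch {
    return null;
  }
  const d = obj as Partial<FraudDecision>;
  const validDecision =
    d.decision === "approve" ||
    d.decision === "step_up" ||
    d.decision === "block_and_case";
  if (
    typeof d.riskScore !== "number" ||
    d.riskScore < 0 ||
    d.riskScore > 100 ||
    !validDecision ||
    !Array.isArray(d.reasons) ||
    typeof d.recommendedAction !== "string"
  ) {
    return null;
  }
  return d as FraudDecision;
}
```

The key design choice: a parse failure is itself a routing signal (manual review), never a silent approve.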
Production Considerations
- Data residency: keep prompts and tool outputs in-region if your retail banking policy requires it. If you operate across jurisdictions, pin execution to approved regions per customer segment.
- Auditability: persist every `Task` input/output pair plus tool invocation metadata. Fraud teams will ask why a case was opened; you need a replayable trail.
- Guardrails: enforce JSON schemas on outputs and reject free-form responses. Put hard thresholds in deterministic code so the model cannot override policy limits.
- Monitoring: track false positives by channel, region, merchant category code, and customer segment. A good fraud model that blocks too much online banking traffic still creates operational loss.
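For the auditability point, it helps to pin down up front what each stored record must contain. One possible shape (the field names and pinned versions are illustrative assumptions, not a standard):

```typescript
// Illustrative audit record persisted per review: inputs, tool calls,
// model/prompt versions, and the final routed action, all replayable.
interface AuditRecord {
  eventId: string;
  modelVersion: string;
  promptVersion: string;
  toolCalls: { tool: string; input: string; output: string }[];
  rawModelOutput: string;
  finalAction: string;
  decidedAt: string; // ISO timestamp, retained under bank retention rules
}

function buildAuditRecord(eventId: string, finalAction: string): AuditRecord {
  return {
    eventId,
    modelVersion: "model-2025-01",   // hypothetical pinned version tag
    promptVersion: "fraud-task-v3",  // hypothetical prompt version tag
    toolCalls: [],                   // populated as tools execute
    rawModelOutput: "",              // populated after kickoff
    finalAction,
    decidedAt: new Date().toISOString(),
  };
}
```

Writing this record in the same transaction as the case-management write-back is what makes a decision defensible months later.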
Common Pitfalls
- Letting the agent see raw PII: mask account numbers, IDs, addresses, and full card data before they hit the prompt. Use tokenized identifiers and only reveal what is needed for the task.
- Using the LLM as the final decision engine: the agent should recommend; your rules engine should decide. In retail banking you need deterministic controls for high-risk actions like blocking accounts or freezing cards.
- Skipping evidence capture: if you do not store tool results and prompt versions alongside each case outcome, you cannot defend decisions during audit or model review.
- Ignoring local compliance rules: fraud workflows differ by market. Make sure SAR/STR escalation paths, retention windows, consent requirements, and cross-border data handling are encoded outside the prompt in policy code.
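The "rules engine should decide" pitfall can be sketched as a deterministic guard that sits after the agent: hard limits live in code, so the model can recommend but never bypass them. The threshold values here are illustrative, not real bank policy:

```typescript
// Hard policy limits the model cannot override. Values are illustrative.
const HARD_LIMITS = {
  maxAutoApproveZAR: 10000,
  maxAutoApproveRiskScore: 40,
};

type Decision = "approve" | "step_up" | "block_and_case";

// Even if the model says approve, large or risky transactions must step up.
function enforcePolicy(
  modelDecision: Decision,
  riskScore: number,
  amountZAR: number
): Decision {
  if (
    modelDecision === "approve" &&
    (amountZAR > HARD_LIMITS.maxAutoApproveZAR ||
      riskScore > HARD_LIMITS.maxAutoApproveRiskScore)
  ) {
    return "step_up";
  }
  return modelDecision;
}
```

Note the guard only ever escalates; it never downgrades a `block_and_case` to an approve, which keeps the deterministic layer strictly conservative.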
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit: PDF checklist + starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.