How to Build a Transaction Monitoring Agent Using CrewAI in TypeScript for Fintech
A transaction monitoring agent watches payment events, scores them against risk rules and behavioral patterns, and escalates suspicious activity for review. In fintech, that matters because you need to catch fraud, AML signals, and policy breaches early without flooding analysts with false positives.
Architecture
- Event ingestion layer
  - Pulls transactions from Kafka, SQS, Pub/Sub, or a webhook endpoint.
  - Normalizes payloads into a common schema: customer, merchant, amount, currency, timestamp, device, geo.
- Risk context fetcher
  - Enriches the transaction with KYC profile, account age, historical velocity, chargeback history, sanctions hits, and device reputation.
  - Keeps the agent from making decisions on raw payment data alone.
- CrewAI agent layer
  - Uses a specialized Agent for triage and explanation.
  - Uses Task objects for scoring, rule checking, and case summarization.
  - Orchestrates work through a Crew with sequential execution.
- Policy and compliance guardrails
  - Applies deterministic rules before or after LLM reasoning.
  - Enforces auditability: every decision needs a traceable rationale and source data references.
- Case management output
  - Writes results to your SIEM, case management system, or analyst queue.
  - Stores the risk score, reason codes, supporting evidence, and recommended action.
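To make the ingestion layer concrete, here is a minimal sketch of the normalization step. The raw webhook shape (`RawPaymentEvent`) and its field names are hypothetical placeholders; real providers each use their own payload format.

```typescript
// Hypothetical raw webhook payload; field names vary by payment provider.
interface RawPaymentEvent {
  id: string;
  cust: string;
  amt_minor: number;        // amount in minor units, e.g. cents
  ccy: string;
  merchant: { name: string; country: string };
  created_at: string;       // ISO-8601 timestamp
  device_fingerprint?: string;
}

// Normalized event used by the rest of the pipeline.
interface NormalizedTransaction {
  transactionId: string;
  customerId: string;
  amount: number;           // major units
  currency: string;
  merchantName: string;
  country: string;
  timestamp: string;
  deviceId?: string;
}

export function normalize(raw: RawPaymentEvent): NormalizedTransaction {
  return {
    transactionId: raw.id,
    customerId: raw.cust,
    amount: raw.amt_minor / 100,
    currency: raw.ccy.toUpperCase(),
    merchantName: raw.merchant.name,
    country: raw.merchant.country.toUpperCase(),
    timestamp: raw.created_at,
    deviceId: raw.device_fingerprint,
  };
}
```

Normalizing before enrichment means every downstream component (rules, agent, case output) sees one schema regardless of which rail the event came from.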
Implementation
1) Install dependencies and define the transaction schema
Use the TypeScript package for CrewAI plus a validator for input hygiene. In fintech systems, never let untyped event blobs reach the agent.
```bash
npm install @crewai/core zod
```
```typescript
import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  customerId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  merchantName: z.string(),
  country: z.string().length(2),
  timestamp: z.string().datetime(),
  channel: z.enum(["card", "bank_transfer", "wallet"]),
  deviceId: z.string().optional(),
});

export type Transaction = z.infer<typeof TransactionSchema>;
```
2) Create an agent that explains risk in analyst-friendly language
The agent should not “decide” in isolation. It should produce a structured assessment that your policy engine can validate.
```typescript
import { Agent } from "@crewai/core";

export const transactionMonitorAgent = new Agent({
  role: "Transaction Monitoring Analyst",
  goal:
    "Assess transaction risk using provided customer context and produce a concise AML/fraud rationale.",
  backstory:
    "You review payment events for fraud indicators, AML patterns, velocity anomalies, and policy violations.",
  verbose: true,
});
```
3) Define tasks for scoring and case summarization
Use separate tasks so you can inspect each step independently. That gives you cleaner audit trails than one giant prompt.
```typescript
import { Task } from "@crewai/core";
import { transactionMonitorAgent } from "./agent";
import type { Transaction } from "./schema";

export function buildMonitoringTasks(txn: Transaction) {
  const riskAssessmentTask = new Task({
    description: `
      Review this transaction for fraud/AML risk.

      Transaction:
      ${JSON.stringify(txn)}

      Return:
      - risk_level: low | medium | high
      - reason_codes: array of short codes
      - rationale: brief explanation
      - recommended_action: allow | review | block
    `,
    expectedOutput:
      "A structured risk assessment with reason codes and a recommended action.",
    agent: transactionMonitorAgent,
  });

  const caseSummaryTask = new Task({
    description: `
      Summarize the alert for an investigator using only the transaction details and the prior assessment.
      Include what triggered the alert and what evidence should be checked next.
    `,
    expectedOutput:
      "An investigator-ready summary with next steps.",
    agent: transactionMonitorAgent,
    context: [riskAssessmentTask],
  });

  return [riskAssessmentTask, caseSummaryTask];
}
```
4) Run the crew and persist the result with audit metadata
This is the pattern you want in production: validate input first, run the crew second, then store both raw output and normalized decision fields.
```typescript
import { Crew } from "@crewai/core";
import { transactionMonitorAgent } from "./agent";
import { TransactionSchema } from "./schema";
import { buildMonitoringTasks } from "./tasks";

async function monitorTransaction(input: unknown) {
  const txn = TransactionSchema.parse(input);
  const tasks = buildMonitoringTasks(txn);

  const crew = new Crew({
    agents: [transactionMonitorAgent], // the crew must know about the agent its tasks reference
    tasks,
    verbose: true,
    process: "sequential",
  });

  const result = await crew.kickoff();

  // Persist this to your case system / audit store.
  return {
    transactionId: txn.transactionId,
    reviewedAt: new Date().toISOString(),
    crewResult: result,
    complianceTags: ["aml", "fraud", "audit-trail"],
    dataResidencyRegion: "eu-west-1",
  };
}
```
If you want this to behave like a real monitoring service, wrap monitorTransaction() behind an API or queue consumer. The key is that every alert must include:

- input snapshot
- model output
- final policy decision
- reviewer identity if manually overridden
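The four required fields above can be captured in a single record type. This is a sketch under assumed names (`AlertAuditRecord`, `buildAuditRecord` are illustrative, not part of CrewAI); adapt the fields to your case system's schema.

```typescript
// One audit record per alert, assembled at decision time.
interface AlertAuditRecord {
  transactionId: string;
  inputSnapshot: unknown;       // exact payload the agent saw (post-masking)
  modelOutput: string;          // raw crew result, stored verbatim
  modelVersion: string;
  policyDecision: "allow" | "review" | "block";
  reviewerId?: string;          // set only when a human overrides the decision
  decidedAt: string;
}

export function buildAuditRecord(
  transactionId: string,
  inputSnapshot: unknown,
  modelOutput: string,
  modelVersion: string,
  policyDecision: "allow" | "review" | "block",
  reviewerId?: string,
): AlertAuditRecord {
  return {
    transactionId,
    inputSnapshot,
    modelOutput,
    modelVersion,
    policyDecision,
    reviewerId,
    decidedAt: new Date().toISOString(),
  };
}
```

Writing this record before returning from the queue consumer guarantees no alert reaches an analyst without its audit trail.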
Production Considerations
- Keep deterministic rules outside the LLM
  - Hard blocks like sanctions matches, duplicate card testing bursts, or blacklisted merchants should be enforced by code first.
  - Use CrewAI for explanation and triage; use your rules engine for non-negotiable controls.
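A minimal sketch of such a pre-check, run before any LLM call. The sanction list, blacklist, and thresholds here are illustrative placeholders, not real compliance data.

```typescript
// Deterministic hard blocks enforced in code, never delegated to the model.
const SANCTIONED_COUNTRIES = new Set(["KP", "IR"]); // placeholder list
const BLACKLISTED_MERCHANTS = new Set(["known-bad-merchant"]); // placeholder list

interface PreCheckInput {
  amount: number;
  country: string;
  merchantName: string;
  recentAttempts: number; // card attempts in the last hour, from your velocity store
}

export function hardBlockCheck(txn: PreCheckInput): { blocked: boolean; reason?: string } {
  if (SANCTIONED_COUNTRIES.has(txn.country)) {
    return { blocked: true, reason: "SANCTIONED_COUNTRY" };
  }
  if (BLACKLISTED_MERCHANTS.has(txn.merchantName)) {
    return { blocked: true, reason: "BLACKLISTED_MERCHANT" };
  }
  if (txn.recentAttempts > 20 && txn.amount < 5) {
    // Many small attempts in a short window looks like card testing.
    return { blocked: true, reason: "CARD_TESTING_BURST" };
  }
  return { blocked: false };
}
```

Only transactions that pass this gate ever reach the crew; blocked ones go straight to the case system with their deterministic reason code.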
- Log everything needed for audit
  - Store prompt inputs, task outputs, timestamps, model version, and policy outcome.
  - Regulators will ask why a transaction was flagged; “the model said so” is not acceptable.
- Respect data residency
  - Route EU customer data to EU-hosted infrastructure only.
  - If you use external model providers or tools inside CrewAI flows, verify where prompts and traces are stored.
- Put guardrails around analyst actions
  - The agent can recommend review, but it should not auto-freeze accounts without policy approval.
  - Add thresholds so only high-confidence cases trigger downstream enforcement.
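One way to sketch that threshold gate, assuming you derive a numeric confidence score for each assessment (the 0.9/0.8 cutoffs are illustrative, not recommendations):

```typescript
// Map the agent's recommendation plus a confidence score to an enforcement
// action. Only high-confidence blocks move toward enforcement, and even
// those still require policy approval downstream.
type Recommendation = "allow" | "review" | "block";
type EnforcementAction = "allow" | "queue_for_analyst" | "freeze_pending_approval";

export function gateAction(rec: Recommendation, confidence: number): EnforcementAction {
  if (rec === "block" && confidence >= 0.9) {
    return "freeze_pending_approval"; // still human-approved, never automatic
  }
  if (rec === "allow" && confidence >= 0.8) {
    return "allow";
  }
  // Everything ambiguous goes to a human.
  return "queue_for_analyst";
}
```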
Common Pitfalls
- Feeding raw PII into prompts
  Mask account numbers, emails, phone numbers, and full addresses before sending context to the agent. Keep a secure mapping in your own systems if investigators need re-identification.
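A minimal masking sketch for prompt context. The regexes here are simplified for illustration; production masking should be field-aware (masking known schema fields by name), not regex-only.

```typescript
// Mask common PII before any value is interpolated into a prompt.
export function maskPii(text: string): string {
  return text
    // Card/account numbers (12+ digits): keep only the last 4.
    .replace(/\b(\d{8,15})(\d{4})\b/g, (_m, _head, last4) => `****${last4}`)
    // Emails: keep the domain so investigators can still spot patterns.
    .replace(/\b[\w.+-]+@([\w-]+\.[\w.-]+)\b/g, (_m, domain) => `***@${domain}`)
    // Phone-like sequences with an international prefix.
    .replace(/\+\d[\d\s-]{7,}\d/g, "[phone]");
}
```

Apply this at the boundary where transaction context is serialized into the task description, and keep the unmasked original only in your own secure store.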
- Using one generic task for everything
  Don’t ask one prompt to detect fraud, AML structuring, sanctions issues, and customer support disputes at once. Split by concern so reason codes stay clean and analysts can trust the output.
- Skipping post-processing validation
  Never accept free-form text as a final decision. Parse the result into a strict schema like { risk_level; reason_codes; recommended_action }, then enforce your own business rules before creating an alert.
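The pitfalls above boil down to: validate at every boundary. Here is a sketch of that final validation step, shown with a hand-rolled type guard; in the project above you could express the same contract with a zod schema instead.

```typescript
// Strict shape for the agent's final answer; anything else is rejected.
interface RiskAssessment {
  risk_level: "low" | "medium" | "high";
  reason_codes: string[];
  recommended_action: "allow" | "review" | "block";
}

const RISK_LEVELS = new Set(["low", "medium", "high"]);
const ACTIONS = new Set(["allow", "review", "block"]);

export function parseAssessment(raw: string): RiskAssessment | null {
  let obj: unknown;
  try {
    obj = JSON.parse(raw);
  } catch {
    return null; // free-form text is never accepted as a decision
  }
  if (typeof obj !== "object" || obj === null) return null;
  const o = obj as Record<string, unknown>;

  if (!RISK_LEVELS.has(o.risk_level as string)) return null;
  if (!ACTIONS.has(o.recommended_action as string)) return null;
  if (!Array.isArray(o.reason_codes) || !o.reason_codes.every((c) => typeof c === "string")) {
    return null;
  }
  return o as unknown as RiskAssessment;
}
```

A `null` result should route the transaction to manual review rather than dropping it, so a malformed model response never silently allows a payment.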
A good transaction monitoring agent does not replace compliance controls. It reduces analyst noise by turning messy event streams into explainable cases that fit your bank’s audit requirements and operational reality.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.