How to Build a Transaction Monitoring Agent Using CrewAI in TypeScript for Insurance
A transaction monitoring agent for insurance watches policy payments, claims payouts, refunds, premium adjustments, and beneficiary changes for patterns that look wrong. It matters because the cost of missing suspicious activity is not just fraud loss; it’s regulatory exposure, bad claims decisions, and weak audit trails.
Architecture
- Transaction ingestion layer
  - Pulls events from policy admin systems, claims platforms, payment processors, and message queues.
  - Normalizes records into a single schema: transactionId, customerId, amount, currency, type, timestamp, jurisdiction.
- Risk scoring agent
  - Evaluates each transaction against rules and LLM-based reasoning.
  - Flags anomalies such as rapid premium reversals, repeated small refunds, or mismatched beneficiary updates.
- Investigation agent
  - Collects supporting context from internal systems.
  - Summarizes why a case was flagged and what evidence supports the alert.
- Compliance guardrail layer
  - Enforces policy rules before any alert is written out.
  - Redacts sensitive data and blocks unsupported recommendations.
- Case management sink
  - Pushes alerts into a queue, ticketing system, or SIEM.
  - Stores an immutable audit record for review and regulator requests.
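The ingestion layer above can be sketched as a small normalization step. The raw event shape (RawPaymentEvent) and its field names here are hypothetical placeholders for whatever your policy admin system or payment processor actually emits; only the target schema comes from the design above.

```typescript
// Unified schema targeted by the ingestion layer.
interface InsuranceTransaction {
  transactionId: string;
  customerId: string;
  type: "premium_payment" | "claim_payout" | "refund" | "beneficiary_change" | "policy_adjustment";
  amount: number;
  currency: string;
  jurisdiction: string;
  timestamp: string;
}

// Hypothetical raw event from a payment processor feed.
interface RawPaymentEvent {
  id: string;
  customer: { ref: string };
  kind: string;
  cents: number;       // amount in minor units
  ccy: string;
  region: string;
  occurredAt: number;  // epoch milliseconds
}

// Mapping from the (assumed) processor event kinds to our unified types.
const KIND_MAP: Record<string, InsuranceTransaction["type"]> = {
  PREMIUM: "premium_payment",
  PAYOUT: "claim_payout",
  REFUND: "refund",
};

export function normalize(ev: RawPaymentEvent): InsuranceTransaction {
  const type = KIND_MAP[ev.kind];
  if (!type) throw new Error(`Unmapped event kind: ${ev.kind}`);
  return {
    transactionId: ev.id,
    customerId: ev.customer.ref,
    type,
    amount: ev.cents / 100,
    currency: ev.ccy.toUpperCase(),
    jurisdiction: ev.region,
    timestamp: new Date(ev.occurredAt).toISOString(),
  };
}
```

Failing loudly on an unmapped event kind is deliberate: a silently dropped transaction type is exactly the kind of gap a regulator will ask about.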
Implementation
1) Install dependencies and define the transaction schema
For TypeScript, keep the agent runtime small and explicit. You want typed inputs, deterministic outputs, and a clean boundary between raw insurance data and what the agent sees.
npm install @crewai/core zod dotenv
npm install -D typescript tsx @types/node
// src/types.ts
import { z } from "zod";

export const InsuranceTransactionSchema = z.object({
  transactionId: z.string(),
  customerId: z.string(),
  type: z.enum([
    "premium_payment",
    "claim_payout",
    "refund",
    "beneficiary_change",
    "policy_adjustment",
  ]),
  amount: z.number().positive(),
  currency: z.string().length(3),
  jurisdiction: z.string(),
  timestamp: z.string().datetime(), // require a valid ISO 8601 timestamp
  metadata: z.record(z.any()).optional(),
});

export type InsuranceTransaction = z.infer<typeof InsuranceTransactionSchema>;
2) Create CrewAI agents with clear responsibilities
Use one agent for risk analysis and another for compliance review. That separation matters in insurance because you need explainability and a defensible control point before anything leaves the system.
// src/crew.ts
import { Agent, Task, Crew } from "@crewai/core";

export const riskAgent = new Agent({
  role: "Transaction Risk Analyst",
  goal: "Detect suspicious insurance transactions with concise, evidence-based reasoning.",
  backstory:
    "You analyze insurance payments, claims, refunds, and policy changes for fraud indicators.",
});

export const complianceAgent = new Agent({
  role: "Insurance Compliance Reviewer",
  goal: "Validate that alerts meet compliance requirements and avoid unsupported conclusions.",
  backstory:
    "You review flagged transactions for auditability, privacy constraints, and jurisdictional controls.",
});

export function buildCrew(transactionJson: string) {
  const riskTask = new Task({
    description: `
      Analyze this insurance transaction and return:
      1. risk_score from 0 to 100
      2. flag_reason
      3. evidence bullets
      4. recommended_action

      Transaction:
      ${transactionJson}
    `,
    expectedOutput:
      "A structured assessment with a numeric score and short evidence list.",
    agent: riskAgent,
  });

  const complianceTask = new Task({
    description: `
      Review the risk assessment output for compliance issues.
      Check for:
      - unsupported accusations
      - missing audit rationale
      - privacy/data minimization issues
      - jurisdiction concerns
      Return only approved or rejected with reason.
    `,
    expectedOutput:
      "A compliance decision with a short explanation suitable for audit logs.",
    agent: complianceAgent,
    context: [riskTask],
  });

  return new Crew({
    agents: [riskAgent, complianceAgent],
    tasks: [riskTask, complianceTask],
    verbose: true,
  });
}
3) Run the crew against validated input
The pattern here is simple: validate first, stringify second, execute third. Do not let raw payloads go straight into the agent without schema checks.
// src/index.ts
import "dotenv/config";
import { InsuranceTransactionSchema } from "./types";
import { buildCrew } from "./crew";

async function main() {
  const raw = {
    transactionId: "tx_10291",
    customerId: "cust_4412",
    type: "claim_payout",
    amount: 18500,
    currency: "USD",
    jurisdiction: "US-NY",
    timestamp: new Date().toISOString(),
    metadata: {
      claimId: "clm_8891",
      channel: "manual_override",
      priorRefunds30d: 2,
      bankAccountChanged24hAgo: true,
    },
  };

  const tx = InsuranceTransactionSchema.parse(raw);
  const crew = buildCrew(JSON.stringify(tx));
  const result = await crew.kickoff();
  console.log(JSON.stringify(result, null, 2));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
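Each run should also leave behind an immutable audit record. A minimal sketch using Node's built-in crypto module; the promptVersion and modelVersion values are assumptions you would wire to your own configuration, not anything CrewAI provides:

```typescript
import { createHash } from "node:crypto";

interface AuditRecord {
  inputHash: string;     // SHA-256 of the exact transaction JSON the crew saw
  promptVersion: string; // version tag of the task prompts (assumed config value)
  modelVersion: string;  // model identifier for this run (assumed config value)
  decision: string;      // final compliance decision text
  recordedAt: string;    // when the record was written
}

export function buildAuditRecord(
  transactionJson: string,
  decision: string,
  promptVersion = "risk-task-v1",
  modelVersion = "model-unknown"
): AuditRecord {
  return {
    // Hashing the input lets auditors verify what was analyzed without
    // storing raw PII in the audit log itself.
    inputHash: createHash("sha256").update(transactionJson).digest("hex"),
    promptVersion,
    modelVersion,
    decision,
    recordedAt: new Date().toISOString(),
  };
}
```

Write this record to append-only storage before the alert leaves the system, so the trail exists even if the downstream sink fails.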
4) Add an output contract before sending alerts downstream
In production you should not trust free-form text. Parse the final output into a strict object before writing to your case system or SIEM.
// src/alert.ts
import { z } from "zod";

export const AlertSchema = z.object({
  decision: z.enum(["approved", "rejected"]),
  riskScore: z.number().min(0).max(100),
  flagReason: z.string(),
  evidence: z.array(z.string()),
});
If your CrewAI output is textual, wrap it with a post-processing step that extracts only fields you allow. That keeps your audit trail stable when model wording changes.
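One minimal way to do that post-processing, assuming you have asked the model to include a JSON object in its reply: pull out the first {...} span, parse it, and keep only whitelisted fields. This is a sketch, not the only approach; the result would still go through a schema check like AlertSchema above before being written anywhere.

```typescript
// Fields allowed through to the case system; everything else is dropped.
const ALLOWED_FIELDS = ["decision", "riskScore", "flagReason", "evidence"] as const;

export function extractAlert(modelText: string): Record<string, unknown> | null {
  // Grab the first top-level {...} span in the free-form reply.
  const start = modelText.indexOf("{");
  const end = modelText.lastIndexOf("}");
  if (start === -1 || end <= start) return null;

  let parsed: unknown;
  try {
    parsed = JSON.parse(modelText.slice(start, end + 1));
  } catch {
    return null; // malformed JSON: treat as a failed run, not a silent pass
  }
  if (typeof parsed !== "object" || parsed === null) return null;

  // Keep only whitelisted fields so wording or format drift in the model's
  // output can't leak extra data into the audit trail.
  const out: Record<string, unknown> = {};
  for (const key of ALLOWED_FIELDS) {
    if (key in (parsed as Record<string, unknown>)) {
      out[key] = (parsed as Record<string, unknown>)[key];
    }
  }
  return out;
}
```

Returning null on any parse failure forces the caller to decide what a failed extraction means, instead of letting half-parsed text flow downstream.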
Production Considerations
- Data residency
  - Keep underwriting data, claim details, and customer identifiers inside approved regions.
  - If you operate across EU/US/APAC boundaries, route crews per jurisdiction instead of centralizing all traffic.
- Auditability
  - Log every input hash, task prompt version, model version, and final decision.
  - Store the exact evidence used for each alert so compliance can reconstruct the decision later.
- Guardrails
  - Redact PII before sending text to the agent where possible.
  - Block outputs that contain legal conclusions like “fraud confirmed”; use “requires human review” instead.
- Monitoring
  - Track the false positive rate by transaction type.
  - Alert on drift in scores after policy rule changes or claims process updates.
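The redaction guardrail can start as simple pattern masking applied to any free text before it reaches the agent. The two patterns below (email addresses and long digit runs that look like account or card numbers) are illustrative assumptions; a real deployment needs a fuller PII taxonomy and likely a dedicated redaction service.

```typescript
// Mask common PII patterns in free text before it is sent to the model.
export function redactPII(text: string): string {
  return text
    // Email addresses
    .replace(/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "[EMAIL]")
    // Long digit runs (account / card numbers), allowing spaces or dashes,
    // while leaving short numbers like claim counts untouched
    .replace(/\b(?:\d[ -]?){8,19}\d\b/g, "[ACCOUNT]");
}
```

Run this on metadata strings and analyst notes too, not just the main narrative fields; leakage usually comes from the fields nobody thought to check.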
Common Pitfalls
- Sending raw insurance records directly to the model
  - This leaks unnecessary PII and makes audits messy.
  - Fix it by validating with Zod first and stripping fields not needed for analysis.
- Using one agent for both detection and compliance
  - A single prompt tends to mix suspicion with approval logic.
  - Split risk analysis from compliance review so each step has a narrow job.
- Treating LLM output as final truth
  - The model should flag cases; it should not adjudicate fraud or deny claims.
  - Always route high-risk cases to a human investigator and persist an immutable audit record.
- Ignoring jurisdiction-specific rules
  - Insurance operations are not uniform across states or countries.
  - Encode region-specific thresholds and retention rules outside the prompt so they can be versioned and tested.
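Encoding region-specific rules outside the prompt can be as simple as a versioned lookup table checked in code. The threshold amounts, retention periods, and region keys below are placeholders for illustration, not regulatory guidance:

```typescript
// Versioned, testable per-jurisdiction config kept outside any prompt text.
interface JurisdictionRules {
  payoutReviewThreshold: number; // payouts at/above this amount need human review
  retentionDays: number;         // audit record retention period
}

export const RULES_V1: Record<string, JurisdictionRules> = {
  "US-NY": { payoutReviewThreshold: 10_000, retentionDays: 2555 },
  "EU-DE": { payoutReviewThreshold: 8_000, retentionDays: 3650 },
};

export function requiresHumanReview(
  jurisdiction: string,
  amount: number,
  rules: Record<string, JurisdictionRules> = RULES_V1
): boolean {
  const r = rules[jurisdiction];
  // Unknown jurisdictions fail closed: always route to a human.
  if (!r) return true;
  return amount >= r.payoutReviewThreshold;
}
```

Because the table is plain data, each revision (RULES_V1, RULES_V2, ...) can be diffed, unit-tested, and referenced in audit records, which a prompt string never can.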
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.