How to Build a Compliance Checking Agent Using CrewAI in TypeScript for Payments

By Cyprian Aarons · Updated 2026-04-21

Tags: compliance-checking, crewai, typescript, payments

A compliance checking agent for payments reviews transaction context, customer data, merchant data, and policy rules before a payment is approved or escalated. In practice, it catches AML/KYC issues, sanctions hits, suspicious routing patterns, and policy violations early enough to stop bad transactions without blocking legitimate volume.

Architecture

  • Input adapter

    • Normalizes payment events from your gateway, ledger, or orchestration layer.
    • Extracts fields like amount, currency, sender/receiver country, merchant category code, and customer risk tier.
  • Policy retrieval layer

    • Pulls internal compliance rules, jurisdiction-specific controls, and bank policy documents.
    • Keeps the agent grounded in current rules instead of model memory.
  • Compliance analysis agent

    • Uses CrewAI Agent with a strict role like “Payments Compliance Analyst”.
    • Evaluates the transaction against sanctions, AML thresholds, velocity checks, and residency constraints.
  • Task orchestration

    • Uses CrewAI Task objects to split work into rule review, risk assessment, and final decision output.
    • Makes outputs easier to audit than one giant prompt.
  • Decision formatter

    • Returns structured JSON: approve, reject, or escalate, plus reasons and evidence.
    • This is what your payment workflow engine consumes.
  • Audit sink

    • Writes the full decision trail to immutable storage or an audit log.
    • Required for investigations, model governance, and regulator review.
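For illustration, here is a hypothetical decision payload of the kind the decision formatter emits. The field names match the output schema defined in the implementation section below; the values themselves are invented:

```typescript
// Hypothetical decision payload emitted by the decision formatter.
// The payment workflow engine consumes this object, and the audit
// sink persists it alongside the input payload.
const exampleDecision = {
  decision: "escalate" as const,
  reasons: [
    "Velocity in the last 24h exceeds the threshold for this risk tier",
    "High-risk corridor: sender and receiver countries flagged by policy",
  ],
  controlsTriggered: ["VELOCITY_24H", "CORRIDOR_RISK"],
  auditSummary:
    "Escalated for manual review: velocity anomaly on a high-risk corridor.",
};

console.log(JSON.stringify(exampleDecision, null, 2));
```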

Implementation

1) Install CrewAI for TypeScript and define the compliance schema

You want the agent output to be machine-readable from day one. For payments, free-form text is not acceptable because downstream systems need deterministic decisions.

npm install @crew-ai/crewai zod
import { z } from "zod";

export const PaymentComplianceInputSchema = z.object({
  paymentId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  senderCountry: z.string().length(2),
  receiverCountry: z.string().length(2),
  merchantCategoryCode: z.string(),
  customerRiskTier: z.enum(["low", "medium", "high"]),
  sanctionsMatch: z.boolean(),
  velocityLast24hCount: z.number().int().nonnegative(),
  residencyRegion: z.string(),
});

export const PaymentComplianceOutputSchema = z.object({
  decision: z.enum(["approve", "reject", "escalate"]),
  reasons: z.array(z.string()),
  controlsTriggered: z.array(z.string()),
  auditSummary: z.string(),
});

2) Create a compliance agent with explicit guardrails

CrewAI’s Agent should be narrow in scope. Don’t ask it to “be helpful”; ask it to apply payment compliance rules and produce a structured result.

import { Agent } from "@crew-ai/crewai";

export const complianceAgent = new Agent({
  name: "payments-compliance-agent",
  role: "Payments Compliance Analyst",
  goal:
    "Assess payment transactions against AML, sanctions, KYC, fraud policy, and data residency requirements.",
  backstory:
    "You review payment transactions for a regulated financial institution. You must prefer escalation when evidence is incomplete.",
  verbose: true,
});

3) Build tasks that separate rule review from final decisioning

Use multiple Tasks so the reasoning chain stays auditable. One task can summarize triggered controls; another can produce the final decision payload.

import { Task } from "@crew-ai/crewai";
import { PaymentComplianceInputSchema } from "./schemas";
// Adjust the path to wherever complianceAgent is defined in your project.
import { complianceAgent } from "./agents";

export function buildComplianceTasks(input: unknown) {
  const tx = PaymentComplianceInputSchema.parse(input);

  const reviewTask = new Task({
    description: `
Review this payment for compliance issues:
${JSON.stringify(tx)}

Check sanctions exposure, AML risk signals, velocity anomalies,
merchant risk indicators, and data residency constraints.
Return only a concise findings summary.
`,
    expectedOutput:
      "A concise list of triggered controls and observed risks.",
    agent: complianceAgent,
    asyncExecution: false,
    context: [],
  });

  const decisionTask = new Task({
    description: `
Using the findings from the prior task, decide whether to approve,
reject, or escalate this payment. Return structured JSON with:
decision, reasons[], controlsTriggered[], auditSummary.
Do not invent facts.
`,
    expectedOutput:
      'Valid JSON matching { decision, reasons[], controlsTriggered[], auditSummary }',
    agent: complianceAgent,
    context: [reviewTask],
    asyncExecution: false,
  });

  return { tx, reviewTask, decisionTask };
}

4) Run the crew and validate the response before releasing the payment

The key pattern is simple: execute the crew, parse the result with Zod, then enforce a hard fail if the output is malformed or conflicts with your deterministic rules engine.

import { Crew } from "@crew-ai/crewai";
import { PaymentComplianceOutputSchema } from "./schemas";
import { buildComplianceTasks } from "./tasks";
// Adjust the path to wherever complianceAgent is defined in your project.
import { complianceAgent } from "./agents";

export async function checkPaymentCompliance(input: unknown) {
  const { tx, reviewTask, decisionTask } = buildComplianceTasks(input);

  const crew = new Crew({
    name: `compliance-check-${tx.paymentId}`,
    agents: [complianceAgent],
    tasks: [reviewTask, decisionTask],
    verbose: true,
    processType: "sequential",
  });

  const result = await crew.kickoff();

  // Crew output may arrive as a JSON string; parse it before schema validation.
  const raw = typeof result === "string" ? JSON.parse(result) : result;
  const parsed = PaymentComplianceOutputSchema.parse(raw);

  // Deterministic override: the model must never approve a sanctions match.
  if (tx.sanctionsMatch && parsed.decision === "approve") {
    throw new Error("Policy violation: sanctions match cannot be approved");
  }

  return {
    paymentId: tx.paymentId,
    ...parsed,
  };
}

For production payment flows, I also recommend a deterministic pre-check before CrewAI runs:

  • block obvious sanctions hits immediately
  • auto-escalate high-risk corridors
  • reject unsupported residency regions before any LLM call

That keeps latency down and avoids sending sensitive data into an agent when you already know the outcome.
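That pre-check can be a plain function that runs before any crew is constructed. The thresholds, corridor list, and supported regions below are illustrative placeholders; substitute your institution's actual policy values:

```typescript
// Deterministic pre-check that runs before any LLM call.
// Returns a final decision when the outcome is already known,
// or null when the transaction needs agent review.
// NOTE: the corridor set and region list are illustrative placeholders.

interface PreCheckInput {
  sanctionsMatch: boolean;
  senderCountry: string;
  receiverCountry: string;
  residencyRegion: string;
}

const HIGH_RISK_CORRIDORS = new Set(["XX-YY"]); // "sender-receiver" pairs
const SUPPORTED_RESIDENCY_REGIONS = new Set(["EU", "US", "UK"]);

export function preCheck(tx: PreCheckInput): "reject" | "escalate" | null {
  // Block obvious sanctions hits immediately.
  if (tx.sanctionsMatch) return "reject";

  // Reject unsupported residency regions before any LLM call.
  if (!SUPPORTED_RESIDENCY_REGIONS.has(tx.residencyRegion)) return "reject";

  // Auto-escalate high-risk corridors.
  const corridor = `${tx.senderCountry}-${tx.receiverCountry}`;
  if (HIGH_RISK_CORRIDORS.has(corridor)) return "escalate";

  // No deterministic outcome; hand off to the compliance agent.
  return null;
}
```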

Production Considerations

  • Data residency

    • Keep transaction payloads inside the correct region before calling the model.
    • If your policy says EU payment data must stay in-region, do not ship raw PII across borders for inference.
  • Auditability

    • Persist input payload hash, model version, prompt version, task outputs, and final decision.
    • Regulators care about why a payment was blocked as much as they care that it was blocked.
  • Monitoring

    • Track approval rate by corridor, false positive rate on sanctions screening overrides, escalation volume by merchant category code.
    • Alert when a new release changes rejection behavior materially.
  • Guardrails

    • Enforce schema validation on every output.
    • Add deterministic overrides for hard compliance rules so an LLM cannot approve prohibited payments.
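The auditability bullet above can be sketched as a small record builder using Node's built-in crypto for the payload hash; the field set here is a suggestion, not a fixed format:

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record: hash the raw input payload so the stored
// record proves what the agent saw without duplicating PII everywhere.
export interface AuditRecord {
  paymentId: string;
  inputPayloadSha256: string;
  modelVersion: string;
  promptVersion: string;
  decision: string;
  recordedAt: string;
}

export function buildAuditRecord(
  paymentId: string,
  rawInput: unknown,
  decision: string,
  modelVersion: string,
  promptVersion: string
): AuditRecord {
  const inputPayloadSha256 = createHash("sha256")
    .update(JSON.stringify(rawInput))
    .digest("hex");

  return {
    paymentId,
    inputPayloadSha256,
    modelVersion,
    promptVersion,
    decision,
    recordedAt: new Date().toISOString(),
  };
}
```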
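The corridor-level approval-rate metric from the monitoring bullet can be computed from the decision records you already persist; a rough sketch, with the record shape assumed:

```typescript
// Assumed shape of a persisted decision record.
interface DecisionRecord {
  senderCountry: string;
  receiverCountry: string;
  decision: "approve" | "reject" | "escalate";
}

// Approval rate per corridor ("sender-receiver"), for dashboards and alerts.
export function approvalRateByCorridor(
  records: DecisionRecord[]
): Map<string, number> {
  const totals = new Map<string, { approved: number; total: number }>();
  for (const r of records) {
    const corridor = `${r.senderCountry}-${r.receiverCountry}`;
    const entry = totals.get(corridor) ?? { approved: 0, total: 0 };
    entry.total += 1;
    if (r.decision === "approve") entry.approved += 1;
    totals.set(corridor, entry);
  }
  const rates = new Map<string, number>();
  for (const [corridor, { approved, total }] of totals) {
    rates.set(corridor, approved / total);
  }
  return rates;
}
```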

Common Pitfalls

  • Letting the agent make final decisions on hard rules

If a transaction has a confirmed sanctions hit or violates residency policy, do not ask the model to “reason it out.” Hard-code those checks outside CrewAI and use the agent only for judgment calls.
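One way to express that guardrail is a post-processing step that can only tighten a decision, never loosen it. The severity ordering and the two hard rules below are my assumptions for the sketch:

```typescript
type Decision = "approve" | "escalate" | "reject";

// Severity ordering: a hard rule can only make the outcome stricter.
const SEVERITY: Record<Decision, number> = {
  approve: 0,
  escalate: 1,
  reject: 2,
};

interface HardRuleInput {
  sanctionsMatch: boolean;
  residencyAllowed: boolean;
}

// Apply deterministic overrides on top of the model's decision.
// The LLM can never approve a payment a hard rule would block.
export function enforceHardRules(
  tx: HardRuleInput,
  modelDecision: Decision
): Decision {
  let floor: Decision = "approve";
  if (tx.sanctionsMatch || !tx.residencyAllowed) floor = "reject";

  return SEVERITY[modelDecision] >= SEVERITY[floor] ? modelDecision : floor;
}
```

Because the override runs outside the agent, changing prompts can never weaken these rules.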

  • Passing too much sensitive data into prompts

Avoid dumping full customer profiles or full account history into the task description. Pass only what is needed for compliance analysis and tokenize or redact identifiers where possible.
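A small redaction step before building the task description keeps raw identifiers out of prompts. The sensitive-field list here is illustrative, and the tokenization scheme (truncated SHA-256) is one possible choice:

```typescript
import { createHash } from "node:crypto";

// Fields that should never appear verbatim in a prompt (illustrative list).
const SENSITIVE_FIELDS = new Set(["accountNumber", "customerName", "email"]);

// Replace sensitive values with stable tokens so the agent can still
// reason about "the same customer appeared twice" without seeing PII.
export function redactForPrompt(
  payload: Record<string, unknown>
): Record<string, unknown> {
  const redacted: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    if (SENSITIVE_FIELDS.has(key)) {
      // Deterministic token: the same value maps to the same token,
      // so repeats can be correlated without exposing the raw value.
      const digest = createHash("sha256").update(String(value)).digest("hex");
      redacted[key] = `tok_${digest.slice(0, 12)}`;
    } else {
      redacted[key] = value;
    }
  }
  return redacted;
}
```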

  • Skipping structured validation

If you accept plain text output from the crew and forward it into your payment switch, you will eventually ship malformed decisions. Validate with Zod or similar schema checks every time.

  • No separation between explainability and action

The explanation belongs in audit logs; the action belongs in your orchestration layer. Keep those concerns separate so you can change prompts without changing enforcement logic.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

