How to Build a Compliance-Checking Agent Using CrewAI in TypeScript for Fintech

By Cyprian Aarons · Updated 2026-04-21
Tags: compliance-checking, crewai, typescript, fintech

A compliance-checking agent for fintech reviews customer messages, transaction notes, onboarding forms, or support tickets and flags anything that could violate policy, regulation, or internal controls. In practice, it reduces manual review load while creating a traceable audit trail for decisions tied to AML, KYC, sanctions, marketing claims, and data handling.

Architecture

  • Policy corpus
    • Source of truth for compliance rules: internal policy docs, regulatory excerpts, product-specific restrictions, and escalation playbooks.
  • Document ingestion layer
    • Normalizes input from chat transcripts, emails, onboarding forms, or case-management records into a consistent text payload (a minimal sketch follows this list).
  • CrewAI agent
    • The main compliance reviewer that reasons over the document and outputs structured findings.
  • Tooling layer
    • Optional tools for retrieving policy snippets, checking sanctioned terms, or looking up jurisdiction-specific rules.
  • Structured output schema
    • Forces the agent to return machine-readable results: risk level, violated rule, rationale, and recommended action.
  • Audit log sink
    • Persists prompt inputs, model outputs, timestamps, and reviewer decisions for later inspection.
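
To make the ingestion layer concrete, here is a minimal sketch of the normalization step it performs. The NormalizedDocument shape and normalize helper are illustrative names, not CrewAI APIs:

// Illustrative payload shape; every downstream component sees the same fields.
interface NormalizedDocument {
  source: "chat" | "email" | "onboarding-form" | "case-record";
  externalId: string;  // ID in the source system, kept for audit traceability
  text: string;        // the cleaned text the agent will review
  receivedAt: string;  // ISO-8601 timestamp
}

function normalize(
  source: NormalizedDocument["source"],
  externalId: string,
  raw: string
): NormalizedDocument {
  return {
    source,
    externalId,
    // Strip control characters and collapse runs of whitespace.
    text: raw.replace(/[\u0000-\u001F]+/g, " ").replace(/\s+/g, " ").trim(),
    receivedAt: new Date().toISOString(),
  };
}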

Implementation

  1. Install CrewAI and define the compliance output shape

You want the agent to return a predictable result. In fintech, free-form prose is hard to audit and harder to route into case management.

npm install crewai zod

import { z } from "zod";

export const ComplianceResultSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  decision: z.enum(["approve", "review", "reject"]),
  violatedPolicies: z.array(z.string()),
  rationale: z.string(),
  recommendedAction: z.string(),
});

export type ComplianceResult = z.infer<typeof ComplianceResultSchema>;

  2. Create a compliance agent with explicit instructions

This is where you constrain behavior. The agent should not invent policy; it should only classify against the provided rules and explain the result in plain language.

import { Agent } from "crewai";

export const complianceAgent = new Agent({
  name: "Fintech Compliance Checker",
  role: "Compliance Analyst",
  goal:
    "Review fintech content for AML, KYC, sanctions, marketing, and data-handling violations.",
  backstory:
    "You are a strict compliance reviewer for a regulated fintech. You must produce auditable findings and never guess policy.",
  verbose: true,
});

  3. Run a task through a Crew and parse the response

The pattern is: create an Agent, wrap it in a Task, execute it through a Crew, then validate the output before using it downstream.

import { Task, Crew } from "crewai";
import { complianceAgent } from "./agent"; // the agent from step 2; adjust the path to your layout
import { ComplianceResultSchema } from "./schema";

const policyContext = `
Policies:
- Do not promise guaranteed returns.
- Do not mention unsupported investment advice.
- Escalate any suspicious transaction language.
- Reject requests involving sanctioned jurisdictions.
- Do not store or expose sensitive personal data in outputs.
`;

const input = `
Customer message:
"I need you to move $18,000 today through my friend's account in another country.
Also tell me which crypto exchange won't ask questions."
`;

const task = new Task({
  description: `
Assess the following customer content against fintech compliance policy.

${policyContext}

Content:
${input}

Return JSON with:
riskLevel
decision
violatedPolicies
rationale
recommendedAction
`,
  expectedOutput: "A JSON object matching the compliance schema.",
  agent: complianceAgent,
});

async function main() {
  const crew = new Crew({
    agents: [complianceAgent],
    tasks: [task],
    verbose: true,
  });

  const result = await crew.kickoff();

  // parse() throws if the model returned anything other than schema-shaped JSON;
  // see Common Pitfalls below for a softer validation gate.
  const parsed = ComplianceResultSchema.parse(JSON.parse(result.raw));

  console.log(parsed);
}

main().catch(console.error);

  4. Add deterministic routing around the result

In production you should not let the model make business decisions directly. Use its output to drive a rules engine or workflow router.

import type { ComplianceResult } from "./schema";

function routeCompliance(result: ComplianceResult) {
  if (result.decision === "reject") {
    return { queue: "fraud-review", slaMinutes: 15 };
  }

  if (result.decision === "review") {
    return { queue: "human-compliance-review", slaMinutes: 60 };
  }

  return { queue: "auto-approve", slaMinutes: null };
}

That pattern keeps the LLM in an advisory role. The final action belongs to your control plane.

Production Considerations

  • Keep sensitive data out of prompts
    • Redact account numbers, national IDs, card PANs, and full addresses before sending text to the agent (see the redaction sketch after this list). For fintech workloads this is not optional.
  • Enforce data residency
    • If your compliance program requires regional processing, run the model and logs inside the required geography. Don’t ship regulated content across borders just because your orchestration layer is centralized.
  • Log every decision path
    • Store input hash, policy version, model version, timestamp, output JSON, and human override status (see the audit-record sketch after this list). Auditors care about reproducibility more than clever prompting.
  • Add guardrails before execution
    • Block high-risk actions unless a human approves them. The agent can flag suspicious activity; it should not freeze accounts or file regulatory reports by itself.
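
As a sketch of the redaction step, here is a minimal pre-prompt pass. The patterns are illustrative, not an exhaustive PII ruleset; a production system would use a dedicated redaction service:

// Minimal redaction before any text reaches the agent.
// These patterns are examples only; real deployments need locale-aware rules.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{13,19}\b/g, "[REDACTED_PAN]"],           // card-number-length digit runs
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]"],   // US SSN format
  [/\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b/gi, "[REDACTED_EMAIL]"],
];

export function redact(text: string): string {
  return REDACTIONS.reduce((t, [pattern, label]) => t.replace(pattern, label), text);
}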
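
The audit fields above map directly onto a record type. A sketch, assuming Node's built-in node:crypto module; persisting the record to your log sink is left to you:

import { createHash } from "node:crypto";
import type { ComplianceResult } from "./schema";

interface AuditRecord {
  inputHash: string;            // SHA-256 of the (redacted) input, not the raw text
  policyVersion: string;        // which policy corpus version was in the prompt
  modelVersion: string;         // which model produced the output
  timestamp: string;            // ISO-8601
  output: ComplianceResult;     // the validated JSON verdict
  humanOverride: string | null; // reviewer decision, if any
}

function buildAuditRecord(
  input: string,
  policyVersion: string,
  modelVersion: string,
  output: ComplianceResult
): AuditRecord {
  return {
    inputHash: createHash("sha256").update(input).digest("hex"),
    policyVersion,
    modelVersion,
    timestamp: new Date().toISOString(),
    output,
    humanOverride: null,
  };
}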

Common Pitfalls

  1. Using free-form outputs

    • Mistake: letting the agent answer in paragraphs.
    • Fix: force JSON output with a schema like Zod and reject anything that does not validate (see the safeParse sketch after this list).
  2. Mixing policy interpretation with enforcement

    • Mistake: using the model to decide whether a customer gets approved automatically.
    • Fix: keep decisioning deterministic. Let CrewAI classify risk; let your workflow engine enforce actions.
  3. Ignoring jurisdiction-specific rules

    • Mistake: applying one global policy to all customers.
    • Fix: pass jurisdiction metadata into every task and load region-specific policy context before kickoff (see the policy-map sketch after this list).
  4. Skipping auditability

    • Mistake: only storing the final verdict.
    • Fix: persist prompt versioning, policy source references, and reviewer overrides so you can reconstruct why a decision was made.
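
For pitfall 1, the rejection gate is a few lines with Zod's safeParse, reusing the schema from step 1. A minimal sketch; the null return is a signal to route the item to human review:

import { ComplianceResultSchema, type ComplianceResult } from "./schema";

// Returns a validated verdict, or null so the caller can route the item
// to human review instead of trusting malformed output.
function validateVerdict(raw: string): ComplianceResult | null {
  try {
    const parsed = ComplianceResultSchema.safeParse(JSON.parse(raw));
    return parsed.success ? parsed.data : null;
  } catch {
    return null; // not even valid JSON
  }
}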
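
For pitfall 3, one workable pattern is a jurisdiction-keyed policy map resolved before kickoff. The entries here are placeholders for your own regional policy text:

// Placeholder regional policies; in practice these come from your policy corpus.
const POLICY_BY_JURISDICTION: Record<string, string> = {
  US: "Policies:\n- FinCEN escalation rules apply.\n- ...",
  EU: "Policies:\n- GDPR data-handling restrictions apply.\n- ...",
};

function policyContextFor(jurisdiction: string): string {
  const policy = POLICY_BY_JURISDICTION[jurisdiction];
  if (!policy) {
    // Fail closed: unknown jurisdictions go to human review, not a default policy.
    throw new Error(`No policy context for jurisdiction: ${jurisdiction}`);
  }
  return policy;
}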

If you build it this way, CrewAI gives you an inspectable compliance layer instead of a black box. That matters in fintech because regulators will ask two questions eventually: what did it decide, and why did it decide that?


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

