How to Build a Compliance-Checking Agent Using CrewAI in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21
Tags: compliance-checking, crewai, typescript, insurance

A compliance-checking agent for insurance reviews policy language, claims notes, underwriting decisions, and customer communications against internal rules and regulatory constraints. This matters because small wording mistakes can create regulatory exposure, bad-faith claims risk, or inconsistent treatment across customers and jurisdictions.

Architecture

  • Input adapters

    • Pull structured data from policy admin systems, claims platforms, CRM notes, and document stores.
    • Normalize inputs into a single case payload before the agent runs.
  • Compliance rule context

    • Load insurer-specific rules: product guidelines, state-level restrictions, approved wording, escalation thresholds.
    • Keep this separate from model prompts so it can be versioned and audited.
  • CrewAI agent layer

    • Use one primary compliance agent to evaluate the case.
    • Add a second reviewer agent for escalation or contradiction checks when risk is high.
  • Task orchestration

    • Break the work into discrete tasks: extract facts, check rules, produce findings.
    • Make each task output structured JSON for downstream systems.
  • Audit and evidence store

    • Persist the input snapshot, rule version, model output, and final decision.
    • Insurance teams need traceability for internal audit and regulator review.
  • Human review queue

    • Route uncertain cases to compliance officers.
    • The agent should recommend actions, not make final legal determinations.
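To keep the rule context versionable and auditable, model it as data rather than prompt text. A minimal sketch, assuming a hypothetical `RulePack` shape and a deterministic phrase pre-check that runs before any model call (names and fields are illustrative, not a CrewAI API):

```typescript
// Hypothetical shape for a versioned rule pack, kept outside prompts
// so compliance can review and audit it independently of code.
interface RulePack {
  version: string;            // e.g. "2026-04-01"
  lineOfBusiness: string;     // which product the rules apply to
  jurisdiction: string;       // e.g. "CA"
  prohibitedPhrases: string[];
  requiredDisclosures: string[];
  escalationThreshold: "low" | "medium" | "high";
}

// Example pack; real packs would live in versioned storage, not code.
const autoCaRules: RulePack = {
  version: "2026-04-01",
  lineOfBusiness: "auto",
  jurisdiction: "CA",
  prohibitedPhrases: ["guaranteed approval", "we never deny claims"],
  requiredDisclosures: ["right-to-appeal notice"],
  escalationThreshold: "medium",
};

// A simple deterministic pre-check that runs before any model call.
function findProhibitedPhrases(text: string, pack: RulePack): string[] {
  const lower = text.toLowerCase();
  return pack.prohibitedPhrases.filter((p) => lower.includes(p.toLowerCase()));
}
```

Running deterministic checks like this before the agent keeps cheap, auditable rules out of the model's hands entirely.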

Implementation

1) Install CrewAI for TypeScript and define your case schema

Start by modeling the payload the agent will inspect. Keep it explicit; insurance workflows fail when you pass unstructured blobs around.

// package.json dependencies:
// "crewai": "^0.1.x",
// "zod": "^3.23.x"

import { z } from "zod";

export const InsuranceCaseSchema = z.object({
  caseId: z.string(),
  lineOfBusiness: z.enum(["auto", "home", "life", "health", "commercial"]),
  jurisdiction: z.string(), // e.g. "CA", "NY", "TX"
  documentType: z.enum(["policy", "claim_note", "customer_email", "underwriting_note"]),
  text: z.string(),
  metadata: z.object({
    createdAt: z.string(),
    sourceSystem: z.string(),
    customerType: z.enum(["retail", "commercial"]),
    sensitiveDataPresent: z.boolean(),
  }),
});

export type InsuranceCase = z.infer<typeof InsuranceCaseSchema>;

This schema gives you a hard boundary before anything reaches an LLM. For insurance, that boundary matters because you do not want raw PII drifting into prompts without control.

2) Create the compliance agent and tasks

CrewAI’s TypeScript API follows the same pattern as Python: define Agent, Task, and Crew. The key is to constrain the role tightly so the output stays in compliance-review territory.

import { Agent, Task, Crew } from "crewai";

const complianceAgent = new Agent({
  role: "Insurance Compliance Reviewer",
  goal:
    "Review insurance content for policy violations, regulatory risk, and required escalation.",
  backstory:
    "You review policy language, claims communication, and underwriting notes for insurer compliance teams.",
  verbose: true,
});

const factExtractionTask = new Task({
  description:
    "Extract key facts from the insurance case and identify any potentially risky statements.",
  expectedOutput:
    'JSON with fields: facts[], riskyStatements[], missingInfo[], jurisdictionFlags[]',
  agent: complianceAgent,
});

const complianceReviewTask = new Task({
  description:
    "Check the extracted facts against insurance compliance concerns including unfair treatment, prohibited wording, disclosure gaps, and jurisdictional issues.",
  expectedOutput:
    'JSON with fields: findings[], severity["low"|"medium"|"high"], recommendedAction[], auditNotes[]',
  agent: complianceAgent,
});

In production I usually split extraction and review into separate agents. For a first pass, one well-scoped reviewer is enough if your prompts are strict and your outputs are structured.
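Because each task promises structured JSON, it pays to validate that output into a typed DTO before anything downstream consumes it. A minimal sketch mirroring the `expectedOutput` fields above (the parsing logic is an assumption, not a CrewAI API):

```typescript
// Validate the reviewer's JSON output into a typed DTO before anything
// downstream consumes it. Field names mirror the expectedOutput strings.
type Severity = "low" | "medium" | "high";

interface ReviewFindings {
  findings: string[];
  severity: Severity;
  recommendedAction: string[];
  auditNotes: string[];
}

function parseReviewOutput(raw: string): ReviewFindings {
  const obj = JSON.parse(raw);
  const severities: Severity[] = ["low", "medium", "high"];
  if (!Array.isArray(obj.findings) || !severities.includes(obj.severity)) {
    throw new Error("Review output does not match the expected shape");
  }
  return obj as ReviewFindings;
}
```

Failing loudly here is the point: a malformed review should never flow silently into an approval path.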

3) Run the crew with a real case payload

The important part is to pass only sanitized content. If you need PII for decisioning, tokenize it upstream and keep the mapping in your internal system of record.
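That sanitization step can be sketched as a tokenizer that swaps PII for stable placeholders and keeps the mapping internal. The two regexes below stand in for a real PII detection service, which is what you should use in production:

```typescript
// Minimal tokenization sketch. Assumption: these regexes stand in for a
// real PII detector; the mapping stays in your internal system of record.
const PII_PATTERNS: [RegExp, string][] = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "SSN"],
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "EMAIL"],
];

function tokenizePii(text: string): { sanitized: string; mapping: Map<string, string> } {
  const mapping = new Map<string, string>();
  let sanitized = text;
  let counter = 0;
  for (const [pattern, label] of PII_PATTERNS) {
    sanitized = sanitized.replace(pattern, (match) => {
      const token = `[${label}_${++counter}]`;
      mapping.set(token, match); // mapping never leaves your systems
      return token;
    });
  }
  return { sanitized, mapping };
}
```

Only `sanitized` goes into the task description; the mapping lets downstream systems rehydrate values after the model has done its work.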

import { InsuranceCaseSchema } from "./schema";
import { Agent, Task, Crew } from "crewai";

async function runComplianceCheck(rawCase: unknown) {
  const insuranceCase = InsuranceCaseSchema.parse(rawCase);

  const complianceAgent = new Agent({
    role: "Insurance Compliance Reviewer",
    goal:
      "Review insurance content for policy violations, regulatory risk, and required escalation.",
    backstory:
      "You assess insurance communications for internal policy adherence and jurisdictional concerns.",
    verbose: true,
    allowDelegation: false,
    memory: false,
  });

  const task = new Task({
    description: `
Review this insurance case:

Case ID: ${insuranceCase.caseId}
Line of business: ${insuranceCase.lineOfBusiness}
Jurisdiction: ${insuranceCase.jurisdiction}
Document type: ${insuranceCase.documentType}

Content:
${insuranceCase.text}

Return concise findings with severity and recommended action.
`,
    expectedOutput:
      'JSON with fields: summary, findings[], severity["low"|"medium"|"high"], escalate:boolean',
    agent: complianceAgent,
  });

  const crew = new Crew({
    agents: [complianceAgent],
    tasks: [task],
    verbose: true,
  });

  const result = await crew.kickoff();
  return result;
}

That kickoff() call is the execution point you wire into your API handler or workflow engine. Persist both the input snapshot and the returned result so auditors can reconstruct what happened later.
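A sketch of what such a snapshot might look like (field names are assumptions to adapt to your audit schema):

```typescript
// Hypothetical audit snapshot persisted alongside each run so auditors
// can reconstruct exactly what was checked, against which rule version.
interface AuditRecord {
  caseId: string;
  inputSnapshot: string;   // exact text sent to the model
  rulePackVersion: string;
  modelVersion: string;
  output: unknown;         // raw crew result, stored verbatim
  decidedAt: string;       // ISO timestamp
}

function buildAuditRecord(
  caseId: string,
  inputSnapshot: string,
  rulePackVersion: string,
  modelVersion: string,
  output: unknown
): AuditRecord {
  return {
    caseId,
    inputSnapshot,
    rulePackVersion,
    modelVersion,
    output,
    decidedAt: new Date().toISOString(),
  };
}
```

Write this record in the same transaction as the disposition where you can, so the evidence and the decision cannot drift apart.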

4) Add an escalation path for high-risk cases

High-risk insurance content should not stop at one model pass. Use a second review step when severity is high or when confidence is low enough that a human needs to sign off.

async function processInsuranceCompliance(rawCase: unknown) {
  const result = await runComplianceCheck(rawCase);

  // Example shape depends on your CrewAI version/configuration.
  // In practice, parse the model output into your own DTO here.
  let parsed: any;
  try {
    parsed = typeof result === "string" ? JSON.parse(result) : result;
  } catch {
    // Unparseable model output is itself a reason for human review.
    return {
      status: "needs_human_review",
      reason: "unparseable_model_output",
      auditTrailRequired: true,
    };
  }

  if (parsed.severity === "high" || parsed.escalate === true) {
    return {
      status: "needs_human_review",
      reason: parsed.summary,
      auditTrailRequired: true,
    };
  }

  return {
    status: "approved_for_next_step",
    reason: parsed.summary,
  };
}

The handoff logic belongs outside the model. That keeps your control plane deterministic even when model behavior changes after an upgrade.

Production Considerations

  • Deployment
    • Run the agent in a private network segment with restricted egress.
    • Keep prompt templates and rule packs versioned alongside application code.
  • Monitoring
    • Log every kickoff() invocation with case ID, rule version, model version, latency, token usage, and final disposition.
    • Track false positives by line of business and jurisdiction; those numbers tell you where prompts are too broad.
  • Guardrails
    • Redact or tokenize PII before sending text to the model.
    • Block unsupported decisions like claim denial language unless a human-approved rule explicitly allows it.
  • Data residency
    • Route EU or state-restricted data to approved regions only.
    • Do not mix jurisdictions in shared caches or long-lived memory stores.
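The false-positive tracking above can be a simple aggregation over reviewer-confirmed outcomes. A sketch, assuming a hypothetical `ReviewedFlag` record produced by your human review queue:

```typescript
// One row per agent finding that a human reviewer has adjudicated.
interface ReviewedFlag {
  lineOfBusiness: string;
  jurisdiction: string;
  flagged: boolean;    // agent raised a finding
  confirmed: boolean;  // human reviewer agreed with the finding
}

// False-positive rate per (lineOfBusiness, jurisdiction) bucket.
function falsePositiveRates(rows: ReviewedFlag[]): Map<string, number> {
  const totals = new Map<string, { flagged: number; fp: number }>();
  for (const r of rows) {
    if (!r.flagged) continue;
    const key = `${r.lineOfBusiness}/${r.jurisdiction}`;
    const t = totals.get(key) ?? { flagged: 0, fp: 0 };
    t.flagged += 1;
    if (!r.confirmed) t.fp += 1;
    totals.set(key, t);
  }
  const rates = new Map<string, number>();
  for (const [key, t] of totals) rates.set(key, t.fp / t.flagged);
  return rates;
}
```

Buckets with persistently high rates are where your prompts or rule packs are too broad for that product or state.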

Common Pitfalls

  • Using one generic prompt for all lines of business

    Auto claims language is not life underwriting language. Split prompts or rule packs by product so the agent does not apply irrelevant checks.

  • Letting the model decide final outcomes

    The agent should flag risk and recommend action. Final approval or denial must stay with approved business logic or human reviewers.

  • Skipping audit snapshots

    If you cannot reproduce what text was checked against which rule version on which date, your compliance workflow will fail under scrutiny. Store inputs, outputs, timestamps, and policy versions together.
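The first pitfall above has a structural fix: key rule packs by line of business and jurisdiction, and fail loudly when no pack exists. A minimal sketch (keys and pack IDs are illustrative):

```typescript
// Deterministic rule-pack lookup so auto checks never leak into life
// underwriting reviews. Pack IDs here are placeholders.
const rulePacks = new Map<string, string>([
  ["auto/CA", "auto-ca-2026-04"],
  ["life/NY", "life-ny-2026-03"],
]);

function selectRulePack(lineOfBusiness: string, jurisdiction: string): string {
  const pack = rulePacks.get(`${lineOfBusiness}/${jurisdiction}`);
  if (!pack) {
    // Missing coverage is a configuration error, not a case to guess at.
    throw new Error(`No rule pack for ${lineOfBusiness}/${jurisdiction}`);
  }
  return pack;
}
```

Throwing on a missing combination forces the gap into your error monitoring instead of letting the agent run with the wrong checks.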



By Cyprian Aarons, AI Consultant at Topiax.
