How to Build a Claims Processing Agent Using CrewAI in TypeScript for Fintech

By Cyprian Aarons · Updated 2026-04-21
Tags: claims-processing, crewai, typescript, fintech

A claims processing agent for fintech takes a submitted claim, extracts the relevant facts, checks policy rules and transaction history, flags fraud signals, and produces a decision package for human review or auto-approval. It matters because claims are expensive operationally, and in fintech the difference between a clean audit trail and an untraceable LLM output is the difference between a controlled workflow and a compliance incident.

Architecture

  • Claim intake layer

    • Receives structured claim payloads from your API or queue.
    • Normalizes fields like claimant ID, transaction ID, amount, timestamp, merchant, and reason code.
  • Policy retrieval layer

    • Pulls product rules, coverage limits, exclusions, and jurisdiction-specific requirements.
    • Keeps policy text outside the prompt so you can version it and audit changes.
  • Verification agent

    • Checks internal systems for transaction status, account ownership, KYC state, and prior disputes.
    • Produces evidence-backed findings instead of free-form reasoning.
  • Fraud/risk agent

    • Looks for velocity patterns, duplicate claims, unusual geographies, device mismatch, and merchant anomalies.
    • Outputs a risk score plus explicit signals used to derive it.
  • Decision synthesizer

    • Combines policy checks and risk findings into approve/reject/manual-review outcomes.
    • Writes a structured decision object that downstream systems can persist.
  • Audit and logging layer

    • Stores prompts, tool calls, model outputs, timestamps, and final decisions.
    • Required for compliance reviews, chargeback disputes, and internal controls.
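Before wiring any agents, it helps to pin down the decision synthesizer's output as a type. This is a sketch; the field names and the risk threshold below are assumptions for this guide, not part of any CrewAI API:

```typescript
// Illustrative shape for the decision object the synthesizer writes.
export type ClaimDecision = {
  decision: "approve" | "reject" | "manual_review";
  riskScore: number;   // 0-100, produced by the fraud/risk agent
  signals: string[];   // explicit signals used to derive the score
  rationale: string;   // human-readable justification for the outcome
  auditNotes: string;  // pointers to evidence and policy versions consulted
  decidedAt: string;   // ISO-8601 timestamp for the audit trail
};

// Anything risky or explicitly flagged goes to a human reviewer.
export function requiresHuman(d: ClaimDecision, threshold = 70): boolean {
  return d.decision === "manual_review" || d.riskScore >= threshold;
}
```

Downstream systems persist this object as-is; nothing in it requires re-parsing model prose.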

Implementation

1) Install CrewAI for TypeScript and define your claim schema

Use strict types first. If your input contract is loose, your agent will hallucinate around missing fields instead of failing fast.

npm install @crewai/crewai zod dotenv

import { z } from "zod";

export const ClaimSchema = z.object({
  claimId: z.string(),
  customerId: z.string(),
  transactionId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  reasonCode: z.string(),
  country: z.string().length(2), // ISO 3166-1 alpha-2
  submittedAt: z.string().datetime(), // ISO-8601 timestamp
});

export type ClaimInput = z.infer<typeof ClaimSchema>;

2) Create agents with narrow responsibilities

Do not build one “smart” agent that does everything. Split the workflow so each agent has one job and one output shape.

import { Agent } from "@crewai/crewai";

export const verifierAgent = new Agent({
  role: "Claims Verifier",
  goal: "Validate claim facts against internal records and policy constraints",
  backstory:
    "You verify claims using only provided system data. You never invent missing evidence.",
});

export const fraudAgent = new Agent({
  role: "Fraud Analyst",
  goal: "Identify fraud indicators in claims data using explicit signals",
  backstory:
    "You assess risk based on known patterns like duplicates, velocity spikes, geo mismatch, and account anomalies.",
});

export const decisionAgent = new Agent({
  role: "Claims Decision Maker",
  goal: "Produce an auditable final recommendation for a fintech claims workflow",
  backstory:
    "You synthesize verified facts into approve, reject, or manual_review decisions with rationale.",
});

3) Wire tasks into a sequential Crew

This is the actual pattern you want in production: verify first, then assess risk, then decide. Each task should force structured output so your API can persist it without parsing prose.

import { Task } from "@crewai/crewai";
import { Crew } from "@crewai/crewai";
import { ClaimSchema } from "./schema";

export function buildClaimsCrew(claimPayload: unknown) {
  const claim = ClaimSchema.parse(claimPayload);

  const verificationTask = new Task({
    description: `
      Validate this claim using provided transaction/account/policy context.
      Claim JSON:
      ${JSON.stringify(claim)}
      Return:
      - coverage_check
      - eligibility_check
      - required_missing_fields
      - evidence_summary
    `,
    expectedOutput:
      "A JSON object with coverage_check, eligibility_check, required_missing_fields, evidence_summary.",
    agent: verifierAgent,
    outputJson: true,
  });

  const fraudTask = new Task({
    description: `
      Review the same claim for fraud indicators.
      Return:
      - risk_score (0-100)
      - signals[]
      - recommended_action
    `,
    expectedOutput:
      "A JSON object with risk_score, signals array, recommended_action.",
    agent: fraudAgent,
    context: [verificationTask],
    outputJson: true,
  });

  const decisionTask = new Task({
    description: `
      Decide the claim outcome using the verification result and fraud assessment.
      Return:
      - decision (approve|reject|manual_review)
      - rationale
      - audit_notes
    `,
    expectedOutput:
      "A JSON object with decision, rationale, audit_notes.",
    agent: decisionAgent,
    context: [verificationTask, fraudTask],
    outputJson: true,
  });

  return new Crew({
    agents: [verifierAgent, fraudAgent, decisionAgent],
    tasks: [verificationTask, fraudTask, decisionTask],
    process: "sequential",
  });
}

4) Execute the crew from your service boundary

Keep execution behind an API route or queue consumer. That gives you retries, idempotency keys, and a place to enforce residency controls before any model call happens.

import "dotenv/config";
import { buildClaimsCrew } from "./crew";

async function main() {
  const crew = buildClaimsCrew({
    claimId: "CLM-10021",
    customerId: "CUS-7781",
    transactionId: "TXN-44591",
    amount: 249.99,
    currency: "USD",
    reasonCode: "UNAUTHORIZED_CARD_PRESENT",
    country: "US",
    submittedAt: new Date().toISOString(),
  });

  const result = await crew.kickoff();
  console.log(JSON.stringify(result));
}

main().catch((err) => {
  console.error("claims_crew_failed", err);
  process.exit(1);
});
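The idempotency piece can be sketched as a small wrapper around the kickoff call. This is illustrative only; production would back it with a durable store such as Redis or Postgres, not an in-process Map:

```typescript
// Minimal idempotency guard for the queue consumer or API route.
const inFlight = new Map<string, Promise<unknown>>();

export function withIdempotency<T>(
  key: string, // e.g. the claimId from the payload
  run: () => Promise<T>,
): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) {
    // Duplicate delivery: return the original result instead of re-running
    // the crew (and re-billing model calls) for the same claim.
    return existing as Promise<T>;
  }
  const pending = run();
  inFlight.set(key, pending);
  return pending;
}
```

Wrapping `crew.kickoff()` in `withIdempotency(claim.claimId, ...)` means a redelivered queue message reuses the first decision instead of producing a second, possibly different, one.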

Production Considerations

  • Enforce data residency before kickoff

    • Route EU claims to EU-hosted inference endpoints only.
    • Do not send PII or full account details to models outside approved jurisdictions.
  • Log every decision artifact

    • Store the input payload hash, model version, task outputs, tool responses, and final disposition.
    • This record is what makes audits survivable when finance asks why a claim was rejected.
  • Add deterministic guardrails

    • Reject malformed payloads with schema validation before CrewAI runs.
    • Allowlist tools: only balance checks, policy lookup, and dispute history.
    • Never let the agent call arbitrary HTTP endpoints.
  • Put humans on exception paths

    • Anything above a risk threshold should become manual_review.
    • Limit auto-approval to low-value, low-risk claims with complete evidence.
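These controls are cheap to keep deterministic in plain TypeScript, before and after the crew runs. A sketch; the endpoints, country list, and numeric thresholds below are illustrative assumptions, not CrewAI features:

```typescript
// Residency routing: pick a region-pinned endpoint before any model call.
const EU_COUNTRIES = new Set(["AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL"]); // partial list

export function inferenceEndpointFor(country: string): string {
  // Hypothetical region-pinned endpoints; EU claims never leave EU inference.
  return EU_COUNTRIES.has(country)
    ? "https://eu.inference.example.com"
    : "https://us.inference.example.com";
}

// Disposition thresholds applied to the crew's output, in code, not prompts.
export function finalDisposition(
  riskScore: number,
  amount: number,
): "auto_approve" | "manual_review" | "reject" {
  if (riskScore >= 70) return "reject"; // illustrative cutoffs
  if (riskScore >= 30 || amount > 500) return "manual_review";
  return "auto_approve";
}
```

Because the thresholds live in code, changing them is a reviewed deploy rather than a prompt tweak.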

Common Pitfalls

  1. Using one generic agent for verification and decisioning

    • This creates blended reasoning and weak auditability.
    • Fix it by separating fact-finding from judgment into distinct agents and tasks.
  2. Letting the model infer missing compliance data

    • In fintech, “probably covered” is not acceptable when jurisdiction rules or KYC status are missing.
    • Fix it by forcing required_missing_fields in the verification output and rejecting incomplete claims early.
  3. Skipping structured outputs

    • Free-form text is hard to store, diff, or feed into downstream rules engines.
    • Fix it by using outputJson on tasks and validating the returned JSON before persistence or actioning.

If you build it this way, you get a claims agent that is useful in production instead of just impressive in a demo. The key is not making the model smarter; it’s making the workflow stricter than the model.


By Cyprian Aarons, AI Consultant at Topiax.
