How to Build a Claims Processing Agent Using CrewAI in TypeScript for Banking
A claims processing agent in banking takes an incoming claim, gathers the relevant account and policy context, checks it against internal rules, flags missing evidence, and drafts a decision packet for human review. It matters because claims are high-volume, time-sensitive, and compliance-heavy; the agent reduces manual triage while keeping auditability and control where banking teams need it.
Architecture
- **Claim intake layer**
  - Accepts structured inputs from a case system, webhook, or internal API.
  - Normalizes claim type, customer ID, account references, and supporting documents.
- **Policy and rules retrieval**
  - Pulls bank-specific policy text, product terms, fraud rules, and eligibility criteria.
  - Keeps the agent grounded in approved sources instead of free-form reasoning.
- **Specialized agents**
  - One agent for document review.
  - One agent for eligibility and rules interpretation.
  - One agent for compliance/audit summarization.
- **Orchestration with CrewAI**
  - Uses `Agent`, `Task`, and `Crew` to coordinate work.
  - Keeps each step deterministic enough for regulated workflows.
- **Human approval boundary**
  - The final decision is not auto-issued.
  - The crew produces a recommendation package for operations or claims officers.
- **Audit logging**
  - Stores prompts, retrieved context, outputs, timestamps, model version, and reviewer actions.
  - Required for banking traceability and dispute handling.
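The audit-logging layer above can be sketched as a plain append-only record store. This is a minimal illustration, not a CrewAI API; the field names and the in-memory array are assumptions you would replace with durable storage.

```typescript
// Minimal audit record for one agent step. Field names are illustrative.
interface AuditRecord {
  claimId: string;
  step: string;            // e.g. "assess", "policy_match", "compliance_summary"
  modelVersion: string;
  prompt: string;
  retrievedContext: string;
  output: string;
  timestamp: string;       // ISO 8601
  reviewerAction?: "approved" | "rejected" | "escalated";
}

// In-memory store for the sketch; production would use an append-only table
// or WORM storage to meet retention requirements.
const auditLog: AuditRecord[] = [];

function logStep(record: Omit<AuditRecord, "timestamp">): AuditRecord {
  const entry: AuditRecord = { ...record, timestamp: new Date().toISOString() };
  auditLog.push(entry);
  return entry;
}
```

Each agent step writes one record, so a reviewer can later replay exactly what context and model version produced a given output.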
Implementation
1) Install dependencies and define the claim contract
Use the TypeScript CrewAI package plus a real LLM provider. The exact provider depends on your environment; keep it behind config so you can swap models without changing the workflow.
npm install @crewai/core zod dotenv
Define the claim payload up front. In banking, typed input is not optional; it is how you keep garbage out of the workflow.
import { z } from "zod";
export const ClaimSchema = z.object({
claimId: z.string(),
customerId: z.string(),
productType: z.enum(["credit_card", "personal_loan", "mortgage", "deposit_account"]),
claimType: z.enum(["fraud", "chargeback", "fee_dispute", "service_error"]),
amount: z.number().positive(),
currency: z.string().length(3),
submittedAt: z.string().datetime(),
evidenceUrls: z.array(z.string().url()).default([]),
});
export type ClaimInput = z.infer<typeof ClaimSchema>;
2) Create agents with narrow responsibilities
Do not build one giant agent that “handles claims.” Split responsibilities so each agent has a clear job and a smaller blast radius.
import { Agent } from "@crewai/core";
const claimsAnalyst = new Agent({
role: "Claims Analyst",
goal: "Assess claim completeness and summarize facts from provided evidence.",
backstory:
"You work in a regulated banking operations team. You only use supplied policy text and claim data.",
});
const policyReviewer = new Agent({
role: "Policy Reviewer",
goal: "Map the claim to eligibility rules and identify required exclusions or exceptions.",
backstory:
"You interpret banking product terms, fee schedules, fraud controls, and dispute policies.",
});
const complianceReviewer = new Agent({
role: "Compliance Reviewer",
goal: "Produce an audit-ready summary with decision rationale, data lineage, and escalation flags.",
backstory:
"You ensure every output is suitable for regulated review and record retention.",
});
3) Define tasks that produce structured outputs
The key pattern is to force each task to return something your system can persist. For banking workflows, free-text-only outputs are hard to audit.
import { Task } from "@crewai/core";
const assessClaimTask = new Task({
description: `
Review this claim payload:
{claim}
Identify:
- missing fields or evidence
- whether the claim appears complete
- any obvious risk indicators
Return a concise JSON object with keys:
status, missingItems, riskFlags, summary
`,
expectedOutput:
'{"status":"complete|incomplete|review_required","missingItems":[],"riskFlags":[],"summary":""}',
agent: claimsAnalyst,
});
const policyMatchTask = new Task({
description: `
Using the provided policy context only:
{policyContext}
Evaluate whether this claim appears eligible.
Return JSON with keys:
eligibilityVerdict, applicableRules, exceptionsNeeded, rationale
`,
expectedOutput:
'{"eligibilityVerdict":"eligible|ineligible|needs_review","applicableRules":[],"exceptionsNeeded":[],"rationale":""}',
agent: policyReviewer,
});
const complianceSummaryTask = new Task({
description: `
Create an audit-ready summary for internal review using prior task outputs.
Include:
- decision recommendation
- compliance concerns
- data residency notes if any external services were used
- human approval requirement
Return JSON with keys:
recommendation, complianceConcerns, auditNotes
`,
expectedOutput:
'{"recommendation":"","complianceConcerns":[],"auditNotes":""}',
agent: complianceReviewer,
});
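Because `expectedOutput` is only a hint to the model, it is worth parsing and validating whatever string a task actually returns before persisting it. A hedged sketch using a hand-rolled type guard (no CrewAI APIs assumed):

```typescript
// Shape we expect back from the assessment task.
interface AssessmentResult {
  status: "complete" | "incomplete" | "review_required";
  missingItems: string[];
  riskFlags: string[];
  summary: string;
}

// Parse raw model output and reject anything that does not match the contract.
function parseAssessment(raw: string): AssessmentResult {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    throw new Error("Task output is not valid JSON");
  }
  const d = data as Partial<AssessmentResult>;
  const validStatus = ["complete", "incomplete", "review_required"];
  if (
    typeof d !== "object" || d === null ||
    !validStatus.includes(d.status as string) ||
    !Array.isArray(d.missingItems) ||
    !Array.isArray(d.riskFlags) ||
    typeof d.summary !== "string"
  ) {
    throw new Error("Task output does not match the assessment contract");
  }
  return d as AssessmentResult;
}
```

Failed parses should go to the same "needs review" path as incomplete claims, with the raw output preserved in the audit log.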
4) Run the crew and persist the result
This is the orchestration layer. In practice you would inject your own LLM client through CrewAI’s configured model setup; keep secrets in environment variables and never hardcode them.
import { Crew } from "@crewai/core";
import { ClaimSchema } from "./claim-schema";
import dotenv from "dotenv";
dotenv.config();
async function processClaim(rawClaim: unknown) {
const claim = ClaimSchema.parse(rawClaim);
const policyContext = `
Product terms v4.2
- Fraud claims must be filed within required time window.
- Chargebacks require transaction reference and merchant descriptor.
- Fee disputes require account statement evidence.
- Exceptions require supervisor approval.
`;
const crew = new Crew({
agents: [claimsAnalyst, policyReviewer, complianceReviewer],
tasks: [assessClaimTask, policyMatchTask, complianceSummaryTask],
verbose: true,
memory: false,
planning: false,
inputs: {
claim,
policyContext,
},
});
const result = await crew.kickoff();
return result;
}
processClaim({
claimId: "CLM-100245",
customerId: "CUST-88921",
productType: "credit_card",
claimType: "chargeback",
amount: 100.0,
currency: "USD",
submittedAt: "2026-04-21T10:00:00.000Z",
evidenceUrls: ["https://internal.example.com/evidence/receipt.pdf"],
})
.then(console.log)
.catch(console.error);
The important pattern here is not just “run agents.” It is:
- validate before orchestration with zod
- keep each task narrowly scoped
- force structured outputs
- store every input/output pair for audit
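Storing every input/output pair can be made tamper-evident by hashing the pair at write time. A sketch using Node's built-in `crypto` module; the record shape is an assumption, not part of CrewAI.

```typescript
import { createHash } from "node:crypto";

// Tamper-evident pairing of claim input and crew output. Illustrative shape.
interface CaseRecord {
  claimId: string;
  input: string;   // serialized claim
  output: string;  // serialized crew result
  sha256: string;  // digest over input + output for later integrity checks
}

function persistCase(claimId: string, input: unknown, output: unknown): CaseRecord {
  const inputJson = JSON.stringify(input);
  const outputJson = JSON.stringify(output);
  const sha256 = createHash("sha256")
    .update(inputJson)
    .update(outputJson)
    .digest("hex");
  // In production, write this to durable storage; here we just return it.
  return { claimId, input: inputJson, output: outputJson, sha256 };
}
```

During a dispute, recomputing the digest over the stored pair confirms the record has not been altered since processing.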
Production Considerations
- **Deployment**
  - Run the agent in a private VPC or private subnet.
  - Keep customer PII inside your approved region to satisfy data residency requirements.
  - If you use hosted model APIs, confirm contract terms for retention and training exclusions.
- **Monitoring**
  - Log task-level latency, token usage, rejection rates, escalation rates, and human override rates.
  - Track which policy version was used on every case.
- **Guardrails**
  - Keep an allowlist of sources for policy context. Do not let the agent browse arbitrary URLs or ingest customer-uploaded text without sanitization.
  - Set hard thresholds for auto-triage versus mandatory human review. In banking claims flows, the safe default is "needs review" when confidence drops or evidence is incomplete.
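The auto-triage threshold can live in one small pure function so risk teams can review and tune it in isolation. The 0.8 cutoff below is an assumption, not a recommendation:

```typescript
type Triage = "auto_triage" | "needs_review";

// Route to human review whenever confidence is low or evidence is incomplete.
// The 0.8 default threshold is illustrative; set it with risk and compliance.
function triageDecision(
  confidence: number,
  evidenceComplete: boolean,
  threshold = 0.8
): Triage {
  if (!evidenceComplete || confidence < threshold) return "needs_review";
  return "auto_triage";
}
```

Keeping the rule pure and centralized means the same logic can be unit-tested and referenced in audit documentation.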
Common Pitfalls
- **Using one broad prompt for everything**
  - This makes outputs harder to test and harder to defend in audits.
  - Split intake, rules interpretation, and compliance summarization into separate tasks.
- **Skipping schema validation**
  - If you pass raw payloads straight into the crew, malformed timestamps or missing IDs will create noisy downstream failures.
  - Validate at the edge with zod before calling crew.kickoff().
- **Letting the model make final decisions**
  - In banking, claims decisions need human accountability unless your legal team has explicitly approved automation for that case type.
  - Make the crew produce recommendations, not final dispositions, and log reviewer approval separately.
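One way to keep that boundary explicit is to separate the crew's recommendation from the reviewer's disposition at the type level. A sketch with illustrative names; nothing here comes from the CrewAI API:

```typescript
// The crew only ever produces this.
interface Recommendation {
  claimId: string;
  recommendation: "approve" | "deny" | "needs_review";
  rationale: string;
}

// A final disposition exists only after a named reviewer signs off.
interface Disposition {
  claimId: string;
  decision: "approved" | "denied";
  reviewerId: string;
  reviewedAt: string; // ISO 8601
}

function finalize(
  rec: Recommendation,
  reviewerId: string,
  decision: "approved" | "denied"
): Disposition {
  if (!reviewerId) {
    throw new Error("A reviewer is required; recommendations cannot self-finalize");
  }
  return {
    claimId: rec.claimId,
    decision,
    reviewerId,
    reviewedAt: new Date().toISOString(),
  };
}
```

Because a `Disposition` cannot be constructed without a reviewer ID, the human accountability requirement is enforced by the data model rather than by convention.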
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.