How to Build a Claims Processing Agent Using CrewAI in TypeScript for Insurance
A claims processing agent triages incoming insurance claims, extracts the facts from documents, checks policy context, and routes the case to the right outcome: straight-through processing, request-for-more-info, or human review. For insurers, this matters because claims is where cost, cycle time, and customer trust all collide.
Architecture
- Claim intake service
  - Accepts FNOL payloads, uploaded documents, and metadata from your claims portal or API gateway.
- Document extraction layer
  - Pulls structured fields from PDFs, images, emails, and adjuster notes before the agent reasons over them.
- CrewAI orchestration layer
  - Uses `Agent`, `Task`, and `Crew` to split work between intake validation, policy verification, fraud signals, and decision drafting.
- Policy and rules service
  - Provides coverage limits, deductibles, exclusions, jurisdiction rules, and claim-handling thresholds.
- Audit and evidence store
  - Persists prompts, tool outputs, model decisions, timestamps, and human overrides for regulatory review.
- Case management integration
  - Pushes the final disposition into Guidewire, Duck Creek, Salesforce, or your internal claims workflow.
Implementation
1) Install dependencies and define a typed claim payload
You want strong typing at the boundary. Claims systems fail when payloads drift between intake forms, OCR output, and policy data.
```bash
npm install @crewaii/core zod dotenv
```
```typescript
import { z } from "zod";

export const ClaimSchema = z.object({
  claimId: z.string(),
  policyNumber: z.string(),
  claimantName: z.string(),
  lossDate: z.string(),
  lossType: z.enum(["auto", "property", "health", "life", "liability"]),
  jurisdiction: z.string(),
  description: z.string(),
  documents: z.array(
    z.object({
      name: z.string(),
      url: z.string()
    })
  )
});

export type ClaimInput = z.infer<typeof ClaimSchema>;
```
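`ClaimSchema.parse` throws on the first bad payload; at an intake boundary you usually also want a quick report of which fields are absent so the caller can be told exactly what to resend. A dependency-free sketch of that completeness check (the field list mirrors the schema above and the function name is illustrative):

```typescript
// Mirrors the fields of ClaimSchema above; keep in sync if the schema changes.
const REQUIRED_CLAIM_FIELDS = [
  "claimId", "policyNumber", "claimantName", "lossDate",
  "lossType", "jurisdiction", "description", "documents",
] as const;

// Returns the names of required fields that are absent or empty,
// so the intake service can ask for exactly what is missing.
function missingClaimFields(raw: Record<string, unknown>): string[] {
  return REQUIRED_CLAIM_FIELDS.filter((field) => {
    const value = raw[field];
    return value === undefined || value === null || value === "";
  });
}
```

The same report feeds naturally into the intake agent's `missingFields[]` output later on.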
2) Create agents for intake validation and claims analysis
Use separate agents to keep concerns separated: one checks completeness, another assesses coverage and the next action. That keeps prompts smaller and makes audit trails easier to explain.
```typescript
import "dotenv/config";
import { Agent } from "@crewaii/core";

export const intakeAgent = new Agent({
  role: "Claims Intake Validator",
  goal: "Validate claim completeness and identify missing information",
  backstory:
    "You work in an insurance claims operations team. You verify FNOL data quality before downstream processing.",
});

export const analysisAgent = new Agent({
  role: "Claims Analyst",
  goal: "Assess coverage signals and recommend a claim disposition",
  backstory:
    "You analyze claim facts against policy context and operational rules. You flag uncertain cases for human review.",
});
```
3) Define tasks with explicit outputs for auditability
Make the output shape predictable. In insurance workflows you need structured results that can be stored, reviewed, and replayed later.
```typescript
import { Task } from "@crewaii/core";

export const validateTask = new Task({
  description:
    "Review the claim input and identify missing fields required for first notice of loss processing.",
  expectedOutput:
    "A JSON object with missingFields[], riskFlags[], and recommendedNextStep.",
});

export const analyzeTask = new Task({
  description:
    "Assess the claim for likely straight-through processing eligibility using the provided claim facts.",
  expectedOutput:
    "A JSON object with disposition (approve|request_info|human_review), rationale[], and evidence[].",
});
```
4) Run the crew and persist the result
This is the core pattern. The crew takes the validated payload, executes tasks in sequence, then returns a structured result you can push into your claims platform.
```typescript
import { Crew } from "@crewaii/core";
import { ClaimSchema } from "./schema";
import { intakeAgent, analysisAgent } from "./agents";
import { validateTask, analyzeTask } from "./tasks";

async function processClaim(rawInput: unknown) {
  const claim = ClaimSchema.parse(rawInput);

  const crew = new Crew({
    agents: [intakeAgent, analysisAgent],
    tasks: [validateTask, analyzeTask],
    verbose: true,
    memory: false,
  });

  const result = await crew.kickoff({
    inputs: {
      claimId: claim.claimId,
      policyNumber: claim.policyNumber,
      claimantName: claim.claimantName,
      lossDate: claim.lossDate,
      lossType: claim.lossType,
      jurisdiction: claim.jurisdiction,
      description: claim.description,
      documents: JSON.stringify(claim.documents),
    },
  });

  return result;
}
```
In production you would wrap `processClaim()` with:

- identity checks on the caller
- document virus scanning
- PII redaction where required
- durable logging of inputs/outputs
- idempotency keys per `claimId`
Production Considerations
- Keep data residency explicit
  - Claims data often contains regulated PII/PHI. Pin model execution to approved regions and avoid sending documents to non-compliant endpoints.
- Store a full audit trail
  - Persist prompt versions, tool responses, task outputs, timestamps, model identifiers, and final human decisions. Regulators care about why a decision was made.
- Add guardrails before auto-disposition
  - Never let the agent finalize payment on high-severity losses, suspicious fraud patterns, litigation notices, or bodily injury claims without human review.
- Monitor operational drift
  - Track straight-through rate, manual override rate, average handling time, missing-field frequency, and denial appeal rates by line of business.
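The guardrail point is the one most worth making deterministic: run a code-level gate after the agent but before any payment call. A sketch, where the threshold, field names, and the use of `lossType === "liability"` as a proxy for bodily-injury exposure are all placeholders for your own rules:

```typescript
interface GateInput {
  disposition: "approve" | "request_info" | "human_review";
  lossType: string;
  estimatedAmount: number;
  fraudFlags: string[];
  litigationNotice: boolean;
}

// Placeholder threshold; in practice, set per line of business in a rules service.
const AUTO_APPROVE_LIMIT = 5_000;

// Deterministic override: the model can recommend, but cannot finalize
// approval on high-risk claims.
function applyGuardrails(input: GateInput): GateInput["disposition"] {
  if (input.disposition !== "approve") return input.disposition;
  const highRisk =
    input.lossType === "liability" ||
    input.estimatedAmount > AUTO_APPROVE_LIMIT ||
    input.fraudFlags.length > 0 ||
    input.litigationNotice;
  return highRisk ? "human_review" : "approve";
}
```

Because the gate is ordinary code, it can be unit-tested and versioned independently of any prompt.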
Common Pitfalls
- Letting the agent reason over raw PDFs directly
  - Extract text first. OCR noise will produce bad dispositions if you feed unstructured documents straight into the model.
- Using one giant agent for every step
  - Split intake validation from coverage reasoning. Smaller tasks are easier to test and easier to defend in audits.
- Skipping jurisdiction-specific rules
  - Claims handling varies by state or country. Encode local requirements outside the model in deterministic services so legal changes do not depend on prompt edits.
- Not versioning prompts and task outputs
  - If a regulator asks why a claim was routed differently last month than today, you need exact prompt/task versions tied to each decision.
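The jurisdiction pitfall has a simple antidote: keep the rules as data behind a deterministic lookup, so a legal change is a table edit rather than a prompt edit. A sketch with illustrative values only, not legal guidance; in practice these rows would come from the policy and rules service:

```typescript
interface JurisdictionRules {
  ackDeadlineDays: number;        // days allowed to acknowledge a claim
  requiresAdjusterLicense: boolean;
}

// Illustrative numbers, not legal guidance; load from your rules service.
const RULES: Record<string, JurisdictionRules> = {
  "US-CA": { ackDeadlineDays: 15, requiresAdjusterLicense: true },
  "US-TX": { ackDeadlineDays: 15, requiresAdjusterLicense: true },
  "UK":    { ackDeadlineDays: 5,  requiresAdjusterLicense: false },
};

// Unknown jurisdictions fall back to the strictest default rather than failing open.
const DEFAULT_RULES: JurisdictionRules = {
  ackDeadlineDays: 5,
  requiresAdjusterLicense: true,
};

function rulesFor(jurisdiction: string): JurisdictionRules {
  return RULES[jurisdiction] ?? DEFAULT_RULES;
}
```

The agent can then be given the resolved rules as input context, while the enforcement stays in code.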
The right pattern is simple: deterministic validation at the edges, CrewAI for orchestration in the middle, human approval where risk is high. That gives you a claims agent that is useful in production without turning your claims operation into an opaque black box.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.