How to Build an Underwriting Agent Using CrewAI in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21

Tags: underwriting, crewai, typescript, insurance

An underwriting agent reviews an insurance submission, pulls the right policy context, checks risk signals, and produces a decision package for a human underwriter. For insurance teams, that matters because it cuts manual triage time, improves consistency, and creates an auditable trail for why a submission was accepted, referred, or declined.

Architecture

A production underwriting agent in CrewAI needs these components:

  • Submission intake

    • Receives applicant data, broker notes, loss history, and attachments.
    • Normalizes input into a structured underwriting case.
  • Policy and appetite retrieval

    • Pulls underwriting guidelines, product rules, and appetite constraints from approved sources.
    • Keeps the agent grounded in current carrier rules.
  • Risk analysis agent

    • Evaluates exposure, prior claims, geography, industry class, and other rating factors.
    • Produces a structured risk summary with rationale.
  • Compliance and audit layer

    • Checks outputs against regulatory constraints and internal referral rules.
    • Stores prompts, tool calls, and final recommendations for audit.
  • Decision orchestration

    • Combines analysis from multiple agents into one recommendation.
    • Routes borderline cases to human review.
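One way to see how these components fit together is as typed async stages composed into a pipeline. The stage names and simplified payload types below are illustrative sketches, not a CrewAI API:

```typescript
// Illustrative pipeline sketch: each architecture component becomes an
// async stage, composed left to right. Payload types are simplified
// stand-ins for the real domain models.
type Stage<I, O> = (input: I) => Promise<O>;

function compose<A, B, C>(first: Stage<A, B>, second: Stage<B, C>): Stage<A, C> {
  return async (input) => second(await first(input));
}

// Hypothetical payloads for the intake and risk-analysis components.
type RawSubmission = { body: string };
type NormalizedCase = { caseId: string; notes: string };
type RiskSummary = { caseId: string; riskSummary: string };

const intake: Stage<RawSubmission, NormalizedCase> = async (raw) => ({
  caseId: "CASE-0001",
  notes: raw.body.trim(),
});

const riskAnalysis: Stage<NormalizedCase, RiskSummary> = async (c) => ({
  caseId: c.caseId,
  riskSummary: `Reviewed: ${c.notes}`,
});

// intake -> risk analysis, mirroring the component list above.
export const pipeline = compose(intake, riskAnalysis);
```

In a real system each stage would call out to CrewAI agents or retrieval tools; the composition shape is what keeps intake, analysis, compliance, and orchestration separable and testable.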

Implementation

1) Install CrewAI for TypeScript and define your domain models

Use the TypeScript SDK and keep your underwriting case shape explicit. Insurance systems break when everything is typed as any.

npm install @crew-ai/crewai zod

// types.ts
export type UnderwritingCase = {
  caseId: string;
  lineOfBusiness: "property" | "casualty" | "cyber";
  insuredName: string;
  industryCode: string;
  annualRevenue: number;
  locationCountry: string;
  priorClaims: number;
  requestedLimit: number;
};

export type UnderwritingDecision = {
  recommendation: "approve" | "refer" | "decline";
  rationale: string[];
  referralReasons?: string[];
};

2) Create agents with clear roles

CrewAI works best when each agent has one job. Don’t make one giant “insurance brain” agent; split risk analysis from compliance review.

// crew.ts
import { Agent } from "@crew-ai/crewai";
import { z } from "zod";

export const caseSchema = z.object({
  caseId: z.string(),
  lineOfBusiness: z.enum(["property", "casualty", "cyber"]),
  insuredName: z.string(),
  industryCode: z.string(),
  annualRevenue: z.number(),
  locationCountry: z.string(),
  priorClaims: z.number(),
  requestedLimit: z.number(),
});

export const riskAgent = new Agent({
  role: "Senior Underwriting Analyst",
  goal:
    "Assess submission risk using underwriting guidelines and produce a concise recommendation.",
  backstory:
    "You are an insurance underwriter focused on consistency, referral discipline, and explainable decisions.",
});

export const complianceAgent = new Agent({
  role: "Underwriting Compliance Reviewer",
  goal:
    "Check the recommendation for regulatory issues, missing referrals, and prohibited language.",
});

3) Define tasks that force structured output

The key pattern is to make the first task analyze risk and the second task validate compliance. That gives you a clean handoff before anything reaches a human underwriter or policy admin system.

// tasks.ts
import { Task } from "@crew-ai/crewai";
import { riskAgent, complianceAgent } from "./crew";

export const analyzeTask = new Task({
  description: `
Review this underwriting submission and produce:
1. Risk assessment
2. Recommendation: approve, refer, or decline
3. Reason codes tied to underwriting facts only

Submission:
{submission}
`,
  expectedOutput:
    "A structured underwriting recommendation with bullet-point rationale.",
  agent: riskAgent,
});

export const complianceTask = new Task({
  description: `
Review the prior underwriting recommendation for:
- Compliance with internal referral rules
- Avoidance of discriminatory or unsupported language
- Need for human review based on jurisdiction or appetite

Submission:
{submission}
Recommendation:
{analysis}
`,
  expectedOutput:
    "A compliance review stating whether the case can proceed or must be referred.",
  agent: complianceAgent,
});

4) Run the crew and return an auditable decision package

This is where you wire in your actual application data. In production, persist both task outputs so claims teams or auditors can reconstruct the path later.

// index.ts
import { Crew } from "@crew-ai/crewai";
import { caseSchema } from "./crew";
import { analyzeTask, complianceTask } from "./tasks";
import type { UnderwritingCase } from "./types";

export async function underwrite(submissionInput: unknown) {
  // Validate at the boundary so malformed submissions fail fast.
  const submission: UnderwritingCase = caseSchema.parse(submissionInput);

  const crew = new Crew({
    agents: [analyzeTask.agent!, complianceTask.agent!],
    tasks: [analyzeTask, complianceTask],
    verbose: true,
    memory: false,
    process: "sequential",
    inputs: {
      submission,
      analysis: "",
    },
    callbackManager: {
      onTaskComplete(taskResult) {
        // Persist each task result here for the audit trail.
        console.log("task_complete", taskResult);
      },
    },
  });

  const result = await crew.kickoff();

  // Return an auditable decision package keyed by case ID.
  return {
    caseId: submission.caseId,
    output: result,
  };
}

export async function runExample() {
  // Sample submission for local testing; values are illustrative only.
  const decision = await underwrite({
    caseId: "CASE-1001",
    lineOfBusiness: "property",
    insuredName: "Example Manufacturing Co",
    industryCode: "3714",
    annualRevenue: 25_000_000,
    locationCountry: "US",
    priorClaims: 1,
    requestedLimit: 2_000_000,
  });
  console.log(decision);
}

runExample().catch(console.error);

Production Considerations

  • Auditability
    • Persist raw inputs, model outputs, task traces, tool calls, timestamps, and final decision codes.
  • Data residency
    • Keep PII and submissions in-region if your carrier operates under local residency requirements.
  • Guardrails
    • Block unsupported attributes like race proxies or protected-class inference.
  • Human-in-the-loop
    • Auto-refer borderline cases above configured thresholds instead of forcing an LLM decision.
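To make the auditability point concrete, here is a minimal sketch of an audit record builder. The field names and the promptVersion convention are assumptions, and the actual persistence call is left to your datastore:

```typescript
// Sketch of an auditable decision record; field names are illustrative.
type AuditRecord = {
  caseId: string;
  timestamp: string; // ISO 8601, so auditors can order events later
  promptVersion: string; // assumed prompt-versioning scheme
  taskOutputs: Record<string, string>;
  decisionCode: "approve" | "refer" | "decline";
};

// Pure builder: easy to unit-test, then hand the record to your store.
export function buildAuditRecord(
  caseId: string,
  promptVersion: string,
  taskOutputs: Record<string, string>,
  decisionCode: AuditRecord["decisionCode"],
  now: Date = new Date(),
): AuditRecord {
  return {
    caseId,
    timestamp: now.toISOString(),
    promptVersion,
    taskOutputs,
    decisionCode,
  };
}
```

Injecting the clock (`now`) keeps the builder deterministic under test while production code uses the default.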

Common Pitfalls

  • Using free-form prompts for decisions

    If the output isn’t structured, downstream systems will misread it. Use schemas and fixed decision labels like approve, refer, and decline.
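A minimal sketch of enforcing the fixed vocabulary at the boundary (the function name and error message are illustrative):

```typescript
// Reject anything outside the fixed decision labels instead of letting
// free-form model text leak into downstream systems.
export type Recommendation = "approve" | "refer" | "decline";

const VALID_LABELS: ReadonlySet<string> = new Set(["approve", "refer", "decline"]);

export function parseRecommendation(raw: string): Recommendation {
  const label = raw.trim().toLowerCase();
  if (!VALID_LABELS.has(label)) {
    throw new Error(`Unexpected decision label: "${raw}"`);
  }
  return label as Recommendation;
}
```

Anything the model emits that is not exactly one of the three labels fails loudly here, rather than silently becoming a fourth decision state downstream.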

  • Skipping referral logic

    A model can sound confident while violating appetite rules. Encode mandatory referrals for high limits, adverse loss history, regulated geographies, or restricted industries.
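One way to encode such rules is as deterministic checks that run regardless of model confidence. The thresholds, country list, and industry codes below are placeholders, not real appetite rules:

```typescript
// Deterministic referral checks evaluated outside the LLM.
// All thresholds and code lists here are illustrative placeholders.
type ReferralInput = {
  requestedLimit: number;
  priorClaims: number;
  locationCountry: string;
  industryCode: string;
};

const MAX_AUTO_LIMIT = 5_000_000; // placeholder authority limit
const MAX_PRIOR_CLAIMS = 3; // placeholder loss-history cap
const REGULATED_COUNTRIES = new Set(["BR", "IN"]); // placeholders
const RESTRICTED_INDUSTRIES = new Set(["7379"]); // placeholder code

export function mandatoryReferralReasons(c: ReferralInput): string[] {
  const reasons: string[] = [];
  if (c.requestedLimit > MAX_AUTO_LIMIT) reasons.push("limit_above_authority");
  if (c.priorClaims > MAX_PRIOR_CLAIMS) reasons.push("adverse_loss_history");
  if (REGULATED_COUNTRIES.has(c.locationCountry)) reasons.push("regulated_geography");
  if (RESTRICTED_INDUSTRIES.has(c.industryCode)) reasons.push("restricted_industry");
  return reasons;
}
```

If this function returns any reasons, the case is referred to a human no matter what the agents recommended.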

  • Ignoring audit requirements

    Insurance decisions need traceability. Store prompt versions, task outputs, source documents used in reasoning, and who approved the final action.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
