How to Build an Underwriting Agent Using CrewAI in TypeScript for Healthcare

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, crewai, typescript, healthcare

A healthcare underwriting agent reviews patient, provider, and policy data to decide whether a case should be approved, routed for manual review, or flagged for missing risk information. It matters because underwriting in healthcare touches eligibility, compliance, cost exposure, and member access, so the agent has to be accurate, auditable, and conservative.

Architecture

  • Input normalizer
    • Converts raw intake from EHR exports, claims summaries, or broker forms into a consistent underwriting payload.
  • Risk analysis agent
    • Evaluates clinical risk signals, coverage constraints, and missing documentation.
  • Compliance checker
    • Applies healthcare rules like PHI handling, minimum necessary access, audit logging, and jurisdiction constraints.
  • Decision composer
    • Produces a structured recommendation: approve, deny, refer to human review, or request more data.
  • Audit sink
    • Stores prompts, tool outputs, and final decisions for traceability.
  • Policy/rules store
    • Keeps underwriting rules outside the model so changes do not require prompt rewrites.
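As a sketch of that last component, the rules store can be plain versioned data that the orchestration code loads at runtime. The names below (`PolicyRule`, `loadRule`) and the threshold values are illustrative assumptions, not part of CrewAI:

```typescript
// Illustrative shape for an externalized rules store; field names are
// assumptions, not a CrewAI API. Rules live in data, not in prompts,
// so underwriting policy changes do not require prompt rewrites.
interface PolicyRule {
  id: string;
  description: string;
  maxAutoApproveRiskScore: number; // cases above this always go to a human
  requiredDocuments: string[];
}

const rules: Record<string, PolicyRule> = {
  "major-medical-CA": {
    id: "major-medical-CA",
    description: "Major medical plans underwritten in California",
    maxAutoApproveRiskScore: 35,
    requiredDocuments: ["intake_form", "prior_claims_summary"],
  },
};

// Look up the rule for a product/state pair; undefined means "no automation".
function loadRule(product: string, state: string): PolicyRule | undefined {
  return rules[`${product}-${state}`];
}
```

Because the rules are data, they can be reviewed, versioned, and changed by compliance staff without touching any agent code.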

Implementation

1) Install the TypeScript stack

Use the CrewAI TypeScript package plus a schema validator for strict output handling.

npm install @crewai/crewai zod dotenv

Set your environment variables:

CREWAI_API_KEY=your_key
OPENAI_API_KEY=your_llm_key
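Since a missing key should never surface mid-underwriting, it is worth failing fast at startup. A minimal guard, assuming the variable names from the `.env` file above:

```typescript
// Fail fast at startup if a required secret is missing, rather than letting
// a workflow die halfway through a case. The variable names mirror .env above.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```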

2) Define the underwriting output contract

Healthcare underwriting needs a deterministic shape. Do not let the model free-write decisions.

import { z } from "zod";

export const UnderwritingDecisionSchema = z.object({
  decision: z.enum(["approve", "manual_review", "request_more_info", "deny"]),
  riskScore: z.number().min(0).max(100),
  rationale: z.array(z.string()).min(1),
  complianceFlags: z.array(z.string()),
  missingDocuments: z.array(z.string()),
});

export type UnderwritingDecision = z.infer<typeof UnderwritingDecisionSchema>;
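Zod's `parse` throws on any output that violates this contract. If you want to see the gate logic spelled out without the library, a dependency-free check of the same shape looks roughly like this (names are illustrative; in production you would rely on `UnderwritingDecisionSchema.safeParse`):

```typescript
// Minimal, dependency-free check of the same contract, shown only to make
// the "hard boundary" explicit. Production code should use the Zod schema.
const DECISIONS = ["approve", "manual_review", "request_more_info", "deny"] as const;
type Decision = (typeof DECISIONS)[number];

function isValidDecision(raw: unknown): boolean {
  if (typeof raw !== "object" || raw === null) return false;
  const r = raw as Record<string, unknown>;
  return (
    DECISIONS.includes(r.decision as Decision) &&
    typeof r.riskScore === "number" &&
    r.riskScore >= 0 &&
    r.riskScore <= 100 &&
    Array.isArray(r.rationale) &&
    r.rationale.length > 0
  );
}
```

Anything that fails this check never reaches a downstream system; it is routed to manual review instead.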

3) Build agents and tasks with CrewAI

This pattern keeps clinical risk analysis separate from compliance review. That separation matters because one prompt should not be responsible for both medical judgment and regulatory interpretation.

import "dotenv/config";
import { Agent, Task, Crew } from "@crewai/crewai";
import { UnderwritingDecisionSchema } from "./schema.js";

const riskAnalyst = new Agent({
  role: "Healthcare Risk Analyst",
  goal: "Assess underwriting risk using only the provided case data and policy rules.",
  backstory:
    "You analyze healthcare underwriting cases with strict attention to completeness, consistency, and documented evidence.",
});

const complianceReviewer = new Agent({
  role: "Healthcare Compliance Reviewer",
  goal: "Check the case for PHI handling issues, residency constraints, and policy-rule violations.",
  backstory:
    "You enforce healthcare compliance boundaries and ensure every decision is auditable.",
});

const analyzeRiskTask = new Task({
  description: `
Review this underwriting case:
- age: {{age}}
- diagnosis_summary: {{diagnosis_summary}}
- prior_claims: {{prior_claims}}
- requested_coverage: {{requested_coverage}}
- state: {{state}}

Return a concise risk assessment with a numeric score from 0 to 100.
Do not make final approval decisions.
`,
  expectedOutput: "A structured risk assessment with rationale and missing information.",
  agent: riskAnalyst,
});

const complianceTask = new Task({
  description: `
Review the same case for healthcare compliance concerns:
- confirm minimum necessary data usage
- flag PHI exposure risks
- flag data residency concerns if state-specific processing applies
- identify any missing documents required for auditability

Return flags only from the supplied case data.
`,
  expectedOutput: "A structured compliance review with flags and missing documents.",
  agent: complianceReviewer,
});

const crew = new Crew({
  agents: [riskAnalyst, complianceReviewer],
  tasks: [analyzeRiskTask, complianceTask],
});

4) Execute the crew and validate the result

The key production move is validating the model output before you persist or act on it. If validation fails, route to manual review.

async function runUnderwritingCase() {
  const result = await crew.kickoff({
    inputs: {
      age: 54,
      diagnosis_summary: "Type 2 diabetes with hypertension",
      prior_claims: ["ER visit in last 12 months", "Medication adherence gap"],
      requested_coverage: "Major medical plan",
      state: "CA",
    },
    response_format: UnderwritingDecisionSchema,
  });

  const decision = UnderwritingDecisionSchema.parse(result);

  console.log(JSON.stringify(decision, null, 2));
}

runUnderwritingCase().catch((err) => {
  console.error("Underwriting workflow failed:", err);
});

That response_format plus schema parse gives you a hard boundary. In healthcare workflows, soft boundaries are how bad decisions get shipped.

Production Considerations

  • Deploy in-region
    • Keep inference and logs in the same jurisdiction as your covered entity requirements. If your org has state-level residency constraints, do not send PHI across regions just because it is cheaper.
  • Audit every step
    • Persist task inputs, tool calls, outputs, timestamps, model version, and final disposition. Underwriting decisions need traceability for internal review and external audits.
  • Add PHI guardrails
    • Redact unnecessary identifiers before calling the agent. Use minimum necessary access by default; do not feed full charts when summary fields are enough.
  • Human-in-the-loop thresholds
    • Auto-approve only low-risk cases with complete data. Anything ambiguous should route to a licensed reviewer or operations analyst.
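That last threshold can be a pure function over the decision shape from step 2, so the automation boundary is testable on its own. The cutoff value here is an illustrative assumption, not a recommended policy:

```typescript
// Hypothetical automation gate: the agent recommends, this function decides
// whether automation is allowed at all. The threshold is illustrative.
interface AgentRecommendation {
  decision: "approve" | "manual_review" | "request_more_info" | "deny";
  riskScore: number;
  complianceFlags: string[];
  missingDocuments: string[];
}

function allowAutomation(rec: AgentRecommendation, maxAutoRisk = 30): boolean {
  return (
    rec.decision === "approve" &&
    rec.riskScore <= maxAutoRisk &&
    rec.complianceFlags.length === 0 &&
    rec.missingDocuments.length === 0
  );
}
```

Everything that fails this gate goes to a licensed reviewer; the agent never holds the final say on ambiguous cases.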

Common Pitfalls

  1. Letting the model make final decisions without rules

    • Fix this by keeping explicit policy thresholds outside the prompt. The agent should recommend; your rules engine should decide when automation is allowed.
  2. Passing raw clinical records into every task

    • Fix this by preprocessing into a narrow case summary. The smaller the input surface area, the lower your PHI exposure and hallucination risk.
  3. Skipping structured validation

    • Fix this by enforcing Zod schemas on every response. If the output cannot be parsed cleanly into approve | manual_review | request_more_info | deny, do not execute downstream actions.
  4. Ignoring audit metadata

    • Fix this by storing model name, prompt version, task outputs, and reviewer overrides. In healthcare underwriting, “why did it decide this?” is not optional.
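For pitfall 2, the preprocessing step can be as simple as projecting the raw record onto an allow-list before anything reaches the agent. A sketch, with hypothetical field names matching the task inputs used earlier:

```typescript
// Project a raw clinical record onto the minimum-necessary case summary.
// Only allow-listed fields ever reach the agent; everything else is dropped.
const CASE_FIELDS = [
  "age",
  "diagnosis_summary",
  "prior_claims",
  "requested_coverage",
  "state",
] as const;

function toCaseSummary(raw: Record<string, unknown>): Record<string, unknown> {
  const summary: Record<string, unknown> = {};
  for (const field of CASE_FIELDS) {
    if (field in raw) summary[field] = raw[field];
  }
  return summary;
}
```

An allow-list beats a deny-list here: new identifier fields added upstream are excluded by default instead of leaking through.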

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
