How to Build an Underwriting Agent Using LangGraph in TypeScript for Healthcare

By Cyprian Aarons · Updated 2026-04-21
underwriting · langgraph · typescript · healthcare

A healthcare underwriting agent reviews patient, plan, and policy inputs, then produces a risk assessment, an eligibility recommendation, and a traceable rationale for human review. It matters because underwriting in healthcare is not just classification work; it directly affects access, pricing, compliance, and auditability.

Architecture

  • Input normalization layer
    • Converts raw intake data from claims, EHR summaries, member forms, and prior authorization notes into a typed underwriting payload (see the sketch after this list).
  • Policy rules node
    • Applies deterministic checks first: age bands, waiting periods, exclusions, pre-existing condition constraints, and jurisdiction-specific rules.
  • LLM reasoning node
    • Summarizes medical and coverage context into a structured recommendation with explicit uncertainty.
  • Risk scoring node
    • Produces a numeric score or tier that downstream systems can consume.
  • Audit and explanation store
    • Persists every decision path, prompt version, model version, and rule outcome for compliance review.
  • Human review gate
    • Routes borderline or high-risk cases to an underwriter before any final decision is emitted.
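
The normalization layer can be a plain function that validates raw intake at the boundary, so nothing malformed ever enters the graph. A minimal sketch, assuming a hypothetical RawIntake shape; real claims and EHR feeds will differ:

import { z } from "zod";

// Hypothetical raw intake shape; adapt to your actual feed.
const RawIntake = z.object({
  member_id: z.string(),
  dob: z.string(), // ISO date of birth
  icd10: z.array(z.string()).default([]),
  rx: z.array(z.string()).default([]),
  plan: z.enum(["individual", "family", "group"]),
  state: z.string(),
});

// Validate once at the boundary, then hand the graph a typed payload.
export function normalizeIntake(raw: unknown) {
  const intake = RawIntake.parse(raw); // throws on malformed input
  const ageMs = Date.now() - new Date(intake.dob).getTime();
  return {
    applicantId: intake.member_id,
    age: Math.floor(ageMs / (365.25 * 24 * 60 * 60 * 1000)),
    diagnosisCodes: intake.icd10,
    medications: intake.rx,
    coverageType: intake.plan,
    jurisdiction: intake.state,
  };
}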

Implementation

1. Define the state shape for underwriting

Use a typed state so every node knows exactly what it can read and write. In healthcare, this matters because you want deterministic handling of PHI-bearing fields and a clean audit trail.

import { Annotation } from "@langchain/langgraph";

export type UnderwritingState = {
  applicantId: string;
  age: number;
  diagnosisCodes: string[];
  medications: string[];
  coverageType: "individual" | "family" | "group";
  jurisdiction: string;

  policyFlags?: string[];
  riskScore?: number;
  recommendation?: "approve" | "review" | "decline";
  rationale?: string;
  auditTrail?: Array<{ step: string; detail: string }>;
};

export const StateAnnotation = Annotation.Root({
  applicantId: Annotation<string>(),
  age: Annotation<number>(),
  diagnosisCodes: Annotation<string[]>(),
  medications: Annotation<string[]>(),
  coverageType: Annotation<"individual" | "family" | "group">(),
  jurisdiction: Annotation<string>(),

  policyFlags: Annotation<string[]>({
    default: () => [],
    reducer: (left, right) => [...left, ...right],
  }),
  riskScore: Annotation<number>(),
  recommendation: Annotation<"approve" | "review" | "decline">(),
  rationale: Annotation<string>(),
  auditTrail: Annotation<Array<{ step: string; detail: string }>>({
    default: () => [],
    reducer: (left, right) => [...left, ...right],
  }),
});
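
Note the reducers on policyFlags and auditTrail: each node returns only its own new entries, and LangGraph appends them to the existing arrays instead of overwriting them. That append-only behavior is what keeps the audit trail complete across the whole run.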

2. Add deterministic policy checks before any LLM call

Do not let the model decide obvious policy violations. Use a normal node first so the graph can short-circuit unsafe or non-compliant cases.

import { StateGraph, START, END } from "@langchain/langgraph";

const policyCheck = async (state: typeof StateAnnotation.State) => {
  const flags: string[] = [];
  if (state.age < 18 && state.coverageType === "individual") {
    flags.push("minor-individual-plan-review");
  }
  if (state.jurisdiction === "CA" && state.diagnosisCodes.includes("F32")) {
    flags.push("mental-health-sensitive-jurisdiction");
  }

  return {
    policyFlags: flags,
    auditTrail: [
      {
        step: "policyCheck",
        detail: `flags=${flags.join(",") || "none"}`,
      },
    ],
    recommendation:
      flags.length > 0 ? ("review" as const) : ("approve" as const),
  };
};
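
Because policyCheck is a plain async function, you can sanity-check it without compiling the graph. A quick test with made-up values (the cast just satisfies the channel types):

const flagged = await policyCheck({
  applicantId: "A-1001",
  age: 16,
  diagnosisCodes: [],
  medications: [],
  coverageType: "individual",
  jurisdiction: "NY",
  policyFlags: [],
  auditTrail: [],
} as typeof StateAnnotation.State);

// flagged.policyFlags includes "minor-individual-plan-review"
// flagged.recommendation === "review"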

3. Use an LLM node for structured underwriting rationale

Keep the model on a short leash. It should summarize risk factors and explain the recommendation, not invent policy.

import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

const UnderwritingOutput = z.object({
  riskScore: z.number().min(0).max(100),
  recommendation: z.enum(["approve", "review", "decline"]),
  rationale: z.string(),
});

const llmUnderwrite = async (state: typeof StateAnnotation.State) => {
  const prompt = `You are assisting healthcare underwriting.
Use only the provided facts.
Return a concise rationale with no PHI beyond what is necessary.

Age:
${state.age}

Diagnosis codes:
${state.diagnosisCodes.join(", ")}

Medications:
${state.medications.join(", ")}

Policy flags:
${state.policyFlags?.join(", ") || "none"}
`;

  const result = await llm
    .withStructuredOutput(UnderwritingOutput)
    .invoke(prompt);

  return {
    riskScore: result.riskScore,
    recommendation: result.recommendation,
    rationale: result.rationale,
    auditTrail: [
      {
        step: "llmUnderwrite",
        detail: `riskScore=${result.riskScore}, recommendation=${result.recommendation}`,
      },
    ],
  };
};
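
Because the call goes through withStructuredOutput, the model's reply comes back parsed against the zod schema: riskScore is a number in range and recommendation is one of the three allowed values, so nothing downstream has to parse free text.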

4. Wire the graph and add a human review gate

Use StateGraph to compose the workflow. The important pattern here is that borderline cases go to review instead of auto-decisioning.

const needsReview = (state: typeof StateAnnotation.State) =>
  state.recommendation === "review" || (state.riskScore ?? 0) >= 75;

// Borderline or flagged cases route through a human review gate instead of
// auto-decisioning. In production, pause here (for example with a LangGraph
// interrupt) until an underwriter signs off.
const humanReview = async () => ({
  auditTrail: [{ step: "humanReview", detail: "routed to underwriter queue" }],
});

const graph = new StateGraph(StateAnnotation)
  .addNode("policyCheck", policyCheck)
  .addNode("llmUnderwrite", llmUnderwrite)
  .addNode("humanReview", humanReview)
  .addEdge(START, "policyCheck")
  .addConditionalEdges("policyCheck", (state) =>
    state.policyFlags?.length ? "humanReview" : "llmUnderwrite"
  )
  .addConditionalEdges("llmUnderwrite", (state) =>
    needsReview(state) ? "humanReview" : END
  )
  .addEdge("humanReview", END)
  .compile();

export async function runUnderwriting(input: {
  applicantId: string;
  age: number;
  diagnosisCodes: string[];
  medications: string[];
  coverageType: "individual" | "family" | "group";
  jurisdiction: string;
}) {
  return graph.invoke({
    ...input,
    auditTrail: [{ step: "input", detail: "received" }],
  });
}
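
Calling it end to end then looks like this (sample values only):

const decision = await runUnderwriting({
  applicantId: "A-1001",
  age: 42,
  diagnosisCodes: ["E11"], // illustrative ICD-10 code
  medications: ["metformin"],
  coverageType: "individual",
  jurisdiction: "CA",
});

console.log(decision.recommendation, decision.riskScore);
console.table(decision.auditTrail);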

Production Considerations

  • Compliance by design
    • Store prompt versions, model versions, rule versions, and output hashes in an immutable audit log (see the sketch after this list).
  • Data residency
    • Keep PHI processing in-region. If your healthcare data cannot leave a specific geography, pin model endpoints and vector stores to that region.
  • Guardrails
    • Redact identifiers before LLM calls unless they are strictly required.
    • Block unsupported outputs like diagnosis inference or treatment advice.
  • Monitoring
    • Track override rates, review rates, false declines, latency per node, and policy flag frequency by jurisdiction.
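
For the audit log, hashing the serialized decision output gives you a cheap tamper-evidence check. A minimal sketch, assuming a hypothetical DecisionRecord shape; adapt the fields to whatever your audit store expects:

import { createHash } from "node:crypto";

// Hypothetical record shape; swap in your own versioning fields.
type DecisionRecord = {
  applicantId: string;
  promptVersion: string;
  modelVersion: string;
  ruleVersion: string;
  output: unknown;
};

export function toAuditEntry(record: DecisionRecord) {
  // Hash the serialized output so later tampering is detectable.
  const outputHash = createHash("sha256")
    .update(JSON.stringify(record.output))
    .digest("hex");
  return { ...record, outputHash, recordedAt: new Date().toISOString() };
}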

Common Pitfalls

  • Letting the model make policy decisions

    Don’t ask the LLM to interpret plan rules from scratch. Put hard eligibility logic in code first and use the model only for synthesis.

  • Sending full PHI into prompts

    Minimize input fields. Replace member names with internal IDs and strip anything not needed for underwriting (see the sketch after this list).

  • Skipping audit metadata

    If you cannot reconstruct why a case was approved or declined six months later, your workflow is not production-ready for healthcare.
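
One way to enforce that minimization is an allowlist redaction pass, so nothing outside the fields underwriting actually needs can ever reach a prompt. A minimal sketch:

// Allowlist of fields that may appear in a prompt; everything else is dropped.
const PROMPT_FIELDS = ["age", "diagnosisCodes", "medications", "policyFlags"] as const;

function redactForPrompt(state: Record<string, unknown>) {
  return Object.fromEntries(
    PROMPT_FIELDS.filter((field) => field in state).map((field) => [field, state[field]])
  );
}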


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

