How to Build an Insurance Underwriting Agent Using LangGraph in TypeScript
An underwriting agent automates the first pass on insurance submissions: it reads the application, checks appetite rules, scores risk, flags missing data, and routes borderline cases to a human underwriter. For insurers, that matters because it cuts turnaround time without removing the control points you need for compliance, auditability, and consistent risk selection.
Architecture
- **Submission intake:** accepts structured application data plus attachments like PDFs, ACORD forms, loss runs, and broker notes.
- **Document extraction:** pulls key fields from unstructured documents before the LLM sees anything.
- **Risk evaluation node:** uses an LLM to classify risk against underwriting guidelines and produce a structured decision payload.
- **Rules and compliance gate:** enforces hard constraints such as appetite rules, excluded classes, jurisdiction checks, sanctions flags, and missing mandatory fields.
- **Human review router:** sends uncertain, high-value, or out-of-policy submissions to a licensed underwriter.
- **Audit trail storage:** persists every input, intermediate state, decision reason, and final outcome for model governance and regulatory review.
Implementation
1) Define the graph state and typed outputs
Keep the graph state narrow. Underwriting agents fail when you dump raw PDFs and free-form chat history into every node.
```typescript
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const UnderwritingDecisionSchema = z.object({
  decision: z.enum(["approve", "refer", "decline"]),
  riskScore: z.number().min(0).max(100),
  reasons: z.array(z.string()),
  missingFields: z.array(z.string()),
});

type UnderwritingDecision = z.infer<typeof UnderwritingDecisionSchema>;

// Typed submission payload keeps `any` out of the graph state.
interface Submission {
  applicantName: string;
  businessDescription: string;
  lossRunsText?: string;
  brokerNotes?: string;
  jurisdiction: string;
  classOfBusiness: string;
  annualRevenue: number;
}

const GraphState = Annotation.Root({
  submission: Annotation<Submission>(),
  extractedText: Annotation<string>(),
  decision: Annotation<UnderwritingDecision | null>(),
  auditLog: Annotation<string[]>(),
});
```
This gives you typed state transitions and a structured output contract. In insurance workflows, that structure is what makes downstream audit logging and reviewer handoff reliable.
2) Add extraction and underwriting nodes
Use one node to normalize source data, then another to make the underwriting call. Keep prompts explicit about policy constraints.
```typescript
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const extractNode = async (state: typeof GraphState.State) => {
  // Normalize the submission into a compact text block before any LLM call.
  const text = [
    state.submission.applicantName,
    state.submission.businessDescription,
    state.submission.lossRunsText ?? "",
    state.submission.brokerNotes ?? "",
  ].join("\n");
  return {
    extractedText: text,
    auditLog: [...state.auditLog, "Extracted submission text"],
  };
};

const underwriteNode = async (state: typeof GraphState.State) => {
  const prompt = `
You are an insurance underwriting assistant.
Apply these rules:
- If required fields are missing, mark refer.
- If class of business is excluded, mark decline.
- If risk is outside appetite but not excluded, mark refer.
Return JSON only.
Submission:
${state.extractedText}
`;
  // withStructuredOutput validates the response against the Zod schema.
  const result = await llm
    .withStructuredOutput(UnderwritingDecisionSchema)
    .invoke(prompt);
  return {
    decision: result,
    auditLog: [...state.auditLog, `Underwritten with decision=${result.decision}`],
  };
};
```
This pattern works because the model is constrained to a schema. That matters in regulated insurance settings where you need deterministic fields for reason codes, triage queues, and BI reporting.
3) Add policy routing with addConditionalEdges
A real underwriting agent should not let the model decide everything. Hard rules belong in code.
```typescript
const routeByDecision = (state: typeof GraphState.State) => {
  const d = state.decision;
  // Fail closed: anything without a valid decision goes to a human.
  if (!d) return "refer";
  if (d.decision === "approve") return END;
  if (d.decision === "decline") return END;
  return "refer";
};

const referNode = async (state: typeof GraphState.State) => {
  return {
    auditLog: [...state.auditLog, "Routed to human underwriter"],
  };
};

const graph = new StateGraph(GraphState)
  .addNode("extract", extractNode)
  .addNode("underwrite", underwriteNode)
  .addNode("refer", referNode)
  .addEdge(START, "extract")
  .addEdge("extract", "underwrite")
  .addConditionalEdges("underwrite", routeByDecision, {
    refer: "refer",
    [END]: END,
  })
  .addEdge("refer", END)
  .compile();
```
This is the right split for production insurance systems:
- LLM handles classification and explanation
- Code handles policy enforcement
- Humans handle exceptions
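"Code handles policy enforcement" can be as simple as a pure function run before the graph ever invokes the model. A minimal sketch, where `EXCLUDED_CLASSES` and the `SubmissionLike` shape are illustrative assumptions rather than anything mandated by LangGraph:

```typescript
// Illustrative hard gate: excluded classes are declined in code,
// never left to the LLM's memory of appetite rules.
const EXCLUDED_CLASSES = new Set(["cannabis", "firearms", "crypto_mining"]);

interface SubmissionLike {
  classOfBusiness: string;
  jurisdiction: string;
}

// Returns a decline reason if the submission fails a hard rule,
// or null if it may proceed to the LLM nodes.
function hardGate(submission: SubmissionLike): string | null {
  if (EXCLUDED_CLASSES.has(submission.classOfBusiness)) {
    return `Excluded class of business: ${submission.classOfBusiness}`;
  }
  if (!submission.jurisdiction) {
    return "Missing jurisdiction";
  }
  return null;
}

console.log(hardGate({ classOfBusiness: "cannabis", jurisdiction: "CA" }));
// → "Excluded class of business: cannabis"
console.log(hardGate({ classOfBusiness: "logistics", jurisdiction: "CA" }));
// → null
```

You can run this gate in the intake layer or as the first graph node; either way the decline reason lands in the audit log with no model call spent.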
4) Invoke the graph with submission data
Keep invocation simple and log every run with a correlation ID.
```typescript
const result = await graph.invoke({
  submission: {
    applicantName: "Northwind Logistics LLC",
    businessDescription: "Regional freight forwarding",
    lossRunsText: "2 claims in last 36 months",
    brokerNotes: "Expansion into refrigerated transport",
    jurisdiction: "CA",
    classOfBusiness: "logistics",
    annualRevenue: 12000000,
  },
  extractedText: "",
  decision: null,
  auditLog: [],
});

console.log(result.decision);
console.log(result.auditLog);
```
In a real deployment, store result.auditLog, the prompt version, model version, and submission metadata in your audit store. That gives you traceability when compliance asks why a case was referred or declined.
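One way to package that, sketched below; the `AuditRecord` field names and version strings are illustrative assumptions, not a standard schema:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical audit record shape: one row per graph run,
// keyed by a correlation ID for traceability.
interface AuditRecord {
  correlationId: string;
  submissionId: string;
  promptVersion: string;
  modelVersion: string;
  auditLog: string[];
  createdAt: string;
}

function buildAuditRecord(
  submissionId: string,
  auditLog: string[],
  promptVersion: string,
  modelVersion: string
): AuditRecord {
  return {
    correlationId: randomUUID(),
    submissionId,
    promptVersion,
    modelVersion,
    auditLog,
    createdAt: new Date().toISOString(),
  };
}

const record = buildAuditRecord(
  "SUB-1042",
  ["Extracted submission text", "Underwritten with decision=refer"],
  "uw-prompt-v3",
  "gpt-4o-mini"
);
// Persist `record` to your audit store alongside the raw submission payload.
```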
Production Considerations
- **Data residency:** keep submissions in-region if your policyholders require it. If you operate across jurisdictions like EU/UK/US states, route data to approved model endpoints only.
- **Auditability:** persist input payloads, extracted fields, decision JSON, prompt template versions, and reviewer overrides. Insurance audits are about reconstructing the path to a decision.
- **Guardrails:** block excluded classes before model invocation. Don’t ask an LLM to “remember” appetite rules that should be enforced by code or config.
- **Monitoring:** track referral rate, decline rate, human override rate, latency per node, and schema validation failures. A sudden shift usually means prompt drift or upstream data quality issues.
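The rate metrics above reduce to simple aggregation over decision outcomes. A minimal sketch, where the `Outcome` shape is an illustrative assumption about what your audit store returns:

```typescript
// Illustrative outcome record pulled from the audit store.
interface Outcome {
  decision: "approve" | "refer" | "decline";
  humanOverride: boolean;
}

// Compute the headline monitoring rates over a window of runs.
function monitoringMetrics(outcomes: Outcome[]) {
  const n = outcomes.length || 1; // avoid division by zero on empty windows
  return {
    referralRate: outcomes.filter((o) => o.decision === "refer").length / n,
    declineRate: outcomes.filter((o) => o.decision === "decline").length / n,
    overrideRate: outcomes.filter((o) => o.humanOverride).length / n,
  };
}

const m = monitoringMetrics([
  { decision: "approve", humanOverride: false },
  { decision: "refer", humanOverride: true },
  { decision: "refer", humanOverride: false },
  { decision: "decline", humanOverride: false },
]);
// m.referralRate === 0.5, m.declineRate === 0.25, m.overrideRate === 0.25
```

Alert on deltas against a trailing baseline rather than absolute thresholds, so seasonal submission mix doesn’t page you.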
Common Pitfalls
- **Letting the model make final binding decisions.** Avoid this by using hard-coded eligibility rules outside the LLM and reserving final authority for human underwriters on referrals.
- **Passing raw documents through every node.** Don’t keep full PDFs in graph state unless you need them. Extract once, store references separately, and move compact normalized fields through the graph.
- **Skipping structured outputs.** Free-form text breaks downstream systems. Use `withStructuredOutput()` plus Zod so your decision object always contains `decision`, `riskScore`, `reasons`, and `missingFields`.
If you build it this way, LangGraph gives you orchestration without turning underwriting into an opaque chatbot. That’s the right shape for insurance: controlled automation with clear escalation paths and an audit trail that stands up in review.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.