How to Build a Claims Processing Agent Using LangGraph in TypeScript for Insurance
A claims processing agent takes a first notice of loss, gathers the missing facts, validates policy coverage, checks for fraud signals, and routes the claim to the right next step. For insurance teams, that matters because most claim handling time is spent on repetitive triage, not actual adjudication, and every manual handoff adds cost, delay, and compliance risk.
Architecture
- **Ingress layer**
  - Accepts claim intake from web forms, email extraction, or a case management API.
  - Normalizes payloads into a single `ClaimContext` shape.
- **State machine**
  - Uses LangGraph to model the claim lifecycle as explicit nodes and conditional edges.
  - Keeps every decision auditable.
- **Policy and document retrieval**
  - Pulls policy details, endorsements, exclusions, and prior correspondence from approved systems.
  - Prevents the model from guessing coverage.
- **Decision engine**
  - Separates low-risk straight-through processing from cases needing adjuster review.
  - Applies deterministic rules before any LLM-driven step.
- **Human escalation path**
  - Routes incomplete, ambiguous, or high-severity claims to a human claims handler.
  - Preserves approval authority for regulated decisions.
- **Audit and telemetry**
  - Logs every node transition, retrieved artifact, and final recommendation.
  - Supports compliance review and dispute resolution.
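As a sketch of what the ingress layer's normalization step might look like, the function below maps a raw web-form payload into one shared claim shape. The `RawFnol` field names are hypothetical; your actual intake channels will have their own schemas.

```typescript
// Hypothetical raw first-notice-of-loss payload from a web form.
interface RawFnol {
  claim_ref?: string;
  policy_no?: string;
  claimant?: string;
  date_of_loss?: string;
  loss_category?: string;
  details?: string;
  attachments?: string[];
}

// The single shape every intake channel is normalized into.
interface NormalizedClaim {
  claimId: string;
  policyNumber: string;
  claimantName: string;
  lossDate: string;
  lossType: string;
  description: string;
  documents: string[];
  status: "new";
}

function normalizeIntake(raw: RawFnol): NormalizedClaim {
  return {
    claimId: raw.claim_ref ?? "",
    policyNumber: raw.policy_no ?? "",
    claimantName: raw.claimant ?? "",
    lossDate: raw.date_of_loss ?? "",
    // Canonicalize loss type so downstream rules match reliably.
    lossType: (raw.loss_category ?? "").toLowerCase().trim(),
    description: raw.details ?? "",
    documents: raw.attachments ?? [],
    status: "new",
  };
}
```

Downstream validation then only ever sees one shape, regardless of which channel the claim arrived through.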
Implementation
- **Define the graph state and build typed nodes**

For insurance workflows, keep state explicit. Do not pass raw chat history around and hope the model infers what matters.
```typescript
import { Annotation, END, START, StateGraph } from "@langchain/langgraph";

type ClaimStatus =
  | "new"
  | "needs_info"
  | "coverage_check"
  | "fraud_review"
  | "ready_for_adjuster"
  | "complete";

const ClaimState = Annotation.Root({
  claimId: Annotation<string>(),
  policyNumber: Annotation<string>(),
  claimantName: Annotation<string>(),
  lossDate: Annotation<string>(),
  lossType: Annotation<string>(),
  description: Annotation<string>(),
  // Appends across updates so evidence is never silently overwritten.
  documents: Annotation<string[]>({
    default: () => [],
    reducer: (left, right) => [...left, ...right],
  }),
  status: Annotation<ClaimStatus>(),
  coverageDecision: Annotation<string | null>({
    default: () => null,
    reducer: (_, right) => right,
  }),
  fraudScore: Annotation<number>({
    default: () => 0,
    reducer: (_, right) => right,
  }),
});

type ClaimContext = typeof ClaimState.State;
```
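The reducers above control how each node's partial output merges into state: `documents` accumulates, while `coverageDecision` and `fraudScore` are last-write-wins. A dependency-free sketch of those two merge behaviors:

```typescript
// Mirrors of the two reducer styles used in ClaimState above.
const appendReducer = <T>(left: T[], right: T[]): T[] => [...left, ...right];
const lastWriteWins = <T>(_left: T, right: T): T => right;

// Simulate two successive node updates touching the same channels.
let documents: string[] = ["police_report.pdf"];
let fraudScore = 0;

documents = appendReducer(documents, ["repair_estimate.pdf"]);
fraudScore = lastWriteWins(fraudScore, 0.72);
```

Because `documents` appends, evidence attached at different steps accumulates; a node returning only `{ fraudScore: 0.72 }` cannot accidentally wipe the document list.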
- **Implement deterministic checks before model calls**

Use rules for completeness and obvious routing. In insurance operations, this reduces unnecessary LLM usage and limits PII exposure.
```typescript
const validateClaim = async (state: ClaimContext): Promise<Partial<ClaimContext>> => {
  const required = [state.claimId, state.policyNumber, state.lossDate, state.lossType];
  const missing = required.some((v) => !v || v.trim().length === 0);
  if (missing) {
    return { status: "needs_info" };
  }
  return { status: "coverage_check" };
};

const checkCoverage = async (state: ClaimContext): Promise<Partial<ClaimContext>> => {
  // Replace with a policy admin lookup + rules engine call.
  const coveredLossTypes = ["collision", "theft", "water_damage"];
  const isCovered = coveredLossTypes.includes(state.lossType.toLowerCase());
  return {
    coverageDecision: isCovered ? "covered" : "excluded",
    status: isCovered ? "fraud_review" : "ready_for_adjuster",
  };
};

const fraudScreen = async (state: ClaimContext): Promise<Partial<ClaimContext>> => {
  // Replace with your SIU scoring service.
  const score = state.lossType.toLowerCase() === "theft" ? 0.72 : 0.18;
  return {
    fraudScore: score,
    status: score > 0.6 ? "ready_for_adjuster" : "complete",
  };
};
```
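The same rules-first idea extends to the straight-through-processing decision itself. The sketch below is a deterministic triage function; the `estimatedAmount` field and the threshold values are illustrative assumptions, not figures from any real underwriting guideline.

```typescript
interface TriageInput {
  coverageDecision: "covered" | "excluded";
  fraudScore: number;
  estimatedAmount: number; // claimed amount in your base currency (hypothetical field)
}

type TriageRoute = "straight_through" | "adjuster_review";

// Deterministic routing: only low-value, low-fraud, covered claims auto-settle.
// Thresholds are illustrative; real values come from your underwriting rules.
function triage(
  input: TriageInput,
  maxAutoAmount = 2000,
  maxFraudScore = 0.3
): TriageRoute {
  if (input.coverageDecision !== "covered") return "adjuster_review";
  if (input.fraudScore > maxFraudScore) return "adjuster_review";
  if (input.estimatedAmount > maxAutoAmount) return "adjuster_review";
  return "straight_through";
}
```

Because this is plain code, every routing outcome is unit-testable and explainable without involving a model.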
- **Assemble the LangGraph workflow with conditional routing**

This is the part that makes LangGraph useful here. The graph makes each transition inspectable instead of hiding orchestration inside one giant prompt.
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const enrichClaim = async (state: ClaimContext): Promise<Partial<ClaimContext>> => {
  const prompt = `
You are a claims intake assistant.
Extract a concise summary from this claim description for an adjuster.
Return only the summary text.

Description:
${state.description}
`;
  const response = await llm.invoke(prompt);
  return { description: String(response.content) };
};

const routeByStatus = (state: ClaimContext) => {
  switch (state.status) {
    case "coverage_check":
      return "checkCoverage";
    case "fraud_review":
      return "fraudScreen";
    default:
      // "needs_info", "ready_for_adjuster", and "complete" all leave the graph.
      return END;
  }
};

const graph = new StateGraph(ClaimState)
  .addNode("validateClaim", validateClaim)
  .addNode("enrichClaim", enrichClaim)
  .addNode("checkCoverage", checkCoverage)
  .addNode("fraudScreen", fraudScreen)
  .addEdge(START, "validateClaim")
  // Skip the LLM entirely when the claim is incomplete.
  .addConditionalEdges("validateClaim", (state) =>
    state.status === "needs_info" ? END : "enrichClaim"
  )
  .addConditionalEdges("enrichClaim", routeByStatus)
  .addConditionalEdges("checkCoverage", routeByStatus)
  .addConditionalEdges("fraudScreen", routeByStatus);

const app = graph.compile();
```
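To see why explicit transitions matter, the deterministic part of this graph can be simulated without LangGraph at all: each step appends to a trace you can assert on in tests. This is a standalone sketch for illustration, not how the compiled graph actually executes.

```typescript
type Status =
  | "new"
  | "needs_info"
  | "coverage_check"
  | "fraud_review"
  | "ready_for_adjuster"
  | "complete";

interface MiniClaim {
  lossType: string;
  status: Status;
}

// Pure, deterministic stand-ins for the graph's non-LLM nodes.
const miniNodes: Record<string, (c: MiniClaim) => MiniClaim> = {
  validateClaim: (c) => ({ ...c, status: c.lossType ? "coverage_check" : "needs_info" }),
  checkCoverage: (c) => ({
    ...c,
    status: ["collision", "theft", "water_damage"].includes(c.lossType)
      ? "fraud_review"
      : "ready_for_adjuster",
  }),
  fraudScreen: (c) => ({ ...c, status: c.lossType === "theft" ? "ready_for_adjuster" : "complete" }),
};

// Same routing logic as routeByStatus, expressed as a pure function.
const nextNode = (status: Status): string | null =>
  status === "coverage_check" ? "checkCoverage" : status === "fraud_review" ? "fraudScreen" : null;

// Walk the graph, recording every node -> status transition for audit.
function runTrace(claim: MiniClaim): string[] {
  const trace: string[] = [];
  let node: string | null = "validateClaim";
  while (node) {
    claim = miniNodes[node](claim);
    trace.push(`${node} -> ${claim.status}`);
    node = nextNode(claim.status);
  }
  return trace;
}
```

A theft claim, for example, produces a three-entry trace ending at `ready_for_adjuster`, and every hop is inspectable; that is the property the real graph preserves.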
- **Run it with a real claim payload and persist the result**

In production you would attach persistence in your own storage layer or use LangGraph checkpointing patterns. The important part is that every run produces a traceable state transition.
```typescript
async function main() {
  const result = await app.invoke({
    claimId: "CLM-100045",
    policyNumber: "POL-778812",
    claimantName: "A. Ndlovu",
    lossDate: "2026-04-10",
    lossType: "theft",
    description: "Vehicle was stolen overnight from a locked garage.",
    documents: ["police_report.pdf"],
    status: "new",
  });
  // For this payload: covered theft, fraud score 0.72, so the run ends with
  // status "ready_for_adjuster" rather than auto-completing.
  console.log(result);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
Production Considerations
- **Keep PII inside controlled boundaries**
  - Redact sensitive fields before sending text to the model where possible.
  - For medical or life claims, treat diagnosis data as restricted data with stricter access controls.
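A minimal redaction pass before any model call might look like the sketch below. The patterns match the illustrative `POL-`/`CLM-` identifier formats used in this article; real identifiers need patterns of your own.

```typescript
// Mask policy and claim identifiers before text leaves the trust boundary.
// Keep the reverse mapping server-side if the adjuster needs the originals.
function redactForModel(text: string): string {
  return text
    .replace(/\bPOL-\d+\b/g, "[POLICY_NO]")
    .replace(/\bCLM-\d+\b/g, "[CLAIM_ID]");
}
```

The model then summarizes the redacted text, and the placeholders are re-expanded only inside your own systems.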
- **Enforce data residency**
  - Route EU or other jurisdiction-specific claims to region-bound infrastructure.
  - Store audit logs and checkpoints in the same residency boundary as the source claim when required by policy.
- **Add guardrails around coverage language**
  - Never let the model issue final coverage determinations without deterministic rules or human approval.
  - Use prompts only for summarization, classification support, or document extraction.
- **Instrument every node**
  - Log node name, input hash, output hash, latency, and decision reason.
  - Claims teams need traceability when regulators ask why a file was routed or delayed.
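An audit record per node transition can be as simple as hashing inputs and outputs, so you can prove what a node saw without storing raw PII in the log stream. A sketch using Node's built-in crypto module:

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  node: string;
  inputHash: string;   // SHA-256 of the node's input state
  outputHash: string;  // SHA-256 of the node's output update
  latencyMs: number;
  reason: string;      // human-readable decision reason
  at: string;          // ISO-8601 timestamp
}

const sha256 = (value: unknown): string =>
  createHash("sha256").update(JSON.stringify(value)).digest("hex");

function auditEntry(
  node: string,
  input: unknown,
  output: unknown,
  latencyMs: number,
  reason: string
): AuditEntry {
  return {
    node,
    inputHash: sha256(input),
    outputHash: sha256(output),
    latencyMs,
    reason,
    at: new Date().toISOString(),
  };
}
```

Emitting one such entry from each node (or from a wrapper around `app.invoke`) gives you a replayable record without duplicating claim data into your telemetry pipeline.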
Common Pitfalls
- **Using one big prompt for everything**
  - This turns intake, validation, coverage analysis, and escalation into an opaque blob.
  - Split them into separate nodes so you can test each step independently.
- **Letting the LLM decide coverage directly**
  - This creates compliance risk, because models are not policy engines.
  - Put coverage logic in code or a rules service; use the LLM only for extraction and summarization.
- **Ignoring incomplete evidence**
  - Claims often arrive without police reports, photos, invoices, or repair estimates.
  - Model this explicitly with a `needs_info` branch instead of forcing a best-effort answer.
- **Skipping auditability**
  - If you cannot reconstruct why a claim moved to SIU review or adjuster handling, you will fail internal review fast.
  - Persist graph state transitions and keep prompt/version metadata with each run.
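For that last point, persisting a small metadata record alongside each run is enough to reconstruct a decision later. The field names and version tags below are suggestions, not a standard.

```typescript
interface RunMetadata {
  claimId: string;
  graphVersion: string;   // version tag of the compiled graph definition
  promptVersion: string;  // version tag of the prompt templates used
  modelId: string;
  transitions: string[];  // ordered "node -> status" transitions
  finalStatus: string;
}

// Illustrative version tags; in practice these come from your release pipeline.
function buildRunMetadata(
  claimId: string,
  transitions: string[],
  finalStatus: string
): RunMetadata {
  return {
    claimId,
    graphVersion: "2026.04.1",
    promptVersion: "intake-summary-v3",
    modelId: "gpt-4o-mini",
    transitions,
    finalStatus,
  };
}
```

Stored next to the claim file, this record answers "which graph, which prompts, which model, which path" for any past decision.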
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.