How to Build a Fraud Detection Agent for Insurance Using LangGraph in TypeScript
A fraud detection agent for insurance triages claims, policy changes, and supporting documents to decide whether a case should be auto-approved, routed to SIU, or escalated for human review. It matters because fraud leakage hits loss ratios directly, while false positives create customer friction, regulatory risk, and unnecessary adjuster workload.
Architecture
- **Claim intake node**
  - Normalizes incoming claim payloads from FNOL, document uploads, or API events.
  - Extracts the minimum fields needed for downstream checks.
- **Rules and signal enrichment node**
  - Runs deterministic checks first: policy active, claim date within coverage window, duplicate claim IDs, suspicious timing.
  - Pulls external signals like prior claims history or device/IP metadata when allowed by policy.
- **LLM assessment node**
  - Uses a structured prompt to summarize red flags and produce a fraud risk score with reasons.
  - Keeps the model on a short leash by only letting it classify based on evidence already gathered.
- **Decision router**
  - Routes to `approve`, `review`, or `siu` based on thresholds and business rules.
  - Applies insurer-specific overrides like mandatory review for high-value claims.
- **Audit logger**
  - Persists every decision input, output, and branch taken.
  - Supports compliance reviews, adverse action explainability, and model governance.
Implementation
1) Define the graph state and typed outputs
You want strong types at the boundary. In insurance workflows, weakly typed state becomes a compliance problem fast because you cannot reliably reconstruct why a claim was escalated.
```typescript
import { z } from "zod";
import { Annotation, END, START, StateGraph } from "@langchain/langgraph";

const FraudAssessmentSchema = z.object({
  riskScore: z.number().min(0).max(100),
  decision: z.enum(["approve", "review", "siu"]),
  reasons: z.array(z.string()).min(1),
});

type Claim = {
  claimId: string;
  policyId: string;
  claimantName: string;
  lossDate: string;
  reportedDate: string;
  amount: number;
  jurisdiction: string;
};

const GraphState = Annotation.Root({
  claim: Annotation<Claim>(),
  signals: Annotation<Record<string, unknown>>(),
  assessment: Annotation<z.infer<typeof FraudAssessmentSchema> | null>(),
});
```
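Intake payloads should also be validated at runtime, not just at compile time, since TypeScript types vanish at the API boundary. Here is a minimal sketch of a hand-rolled type guard for the `Claim` shape; in a real build you would likely mirror the zod pattern above with a `ClaimSchema`, and the specific field checks here are assumptions based on the type:

```typescript
// Claim shape repeated here so the sketch is self-contained.
type Claim = {
  claimId: string;
  policyId: string;
  claimantName: string;
  lossDate: string;
  reportedDate: string;
  amount: number;
  jurisdiction: string;
};

// Hypothetical runtime guard: reject malformed payloads before they enter the graph.
function isClaim(value: unknown): value is Claim {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  const stringFields = [
    "claimId",
    "policyId",
    "claimantName",
    "lossDate",
    "reportedDate",
    "jurisdiction",
  ];
  return (
    stringFields.every((f) => typeof v[f] === "string") &&
    typeof v.amount === "number" &&
    v.amount >= 0
  );
}
```

Rejecting bad payloads at intake keeps every downstream node free of defensive null checks.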
2) Build deterministic enrichment before calling the model
For insurance fraud detection, do not ask the model to infer basic facts that your systems already know. Put policy validation and rule checks in code so the LLM only reasons over verified evidence.
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const enrichSignals = async (state: typeof GraphState.State) => {
  const { claim } = state;
  const daysBetween =
    (new Date(claim.reportedDate).getTime() - new Date(claim.lossDate).getTime()) /
    (1000 * 60 * 60 * 24);
  return {
    signals: {
      lateReportedClaim: daysBetween > 14,
      highValueClaim: claim.amount >= 25000,
      jurisdictionFlagged: ["NY", "FL", "CA"].includes(claim.jurisdiction),
      duplicatePatternScore: await fakeDuplicateCheck(claim.claimId),
    },
  };
};

// Stand-in for a real duplicate-claim lookup against your claims system.
async function fakeDuplicateCheck(claimId: string) {
  return claimId.endsWith("9") ? 82 : 12;
}
```
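The timing arithmetic inside `enrichSignals` is easy to get subtly wrong (milliseconds versus days, negative gaps when dates are swapped), so it is worth pulling into pure helpers you can unit-test. A sketch using the same 14-day threshold as above:

```typescript
const MS_PER_DAY = 1000 * 60 * 60 * 24;

// Days between loss and report; ISO date strings in, fractional days out.
function daysBetween(lossDate: string, reportedDate: string): number {
  return (new Date(reportedDate).getTime() - new Date(lossDate).getTime()) / MS_PER_DAY;
}

// Same 14-day late-reporting rule as the enrichment node.
function isLateReported(lossDate: string, reportedDate: string): boolean {
  return daysBetween(lossDate, reportedDate) > 14;
}
```

Pure functions like these can be covered by plain unit tests, with the LangGraph node reduced to wiring.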
3) Add an LLM assessment node with structured output
Use `withStructuredOutput()` so the model returns machine-parseable results. This is the pattern you want when downstream routing affects claims handling and audit trails.
```typescript
const assessFraud = async (state: typeof GraphState.State) => {
  const prompt = [
    {
      role: "system" as const,
      content:
        "You are an insurance fraud triage assistant. Use only the provided claim and signal data. Return a concise assessment.",
    },
    {
      role: "user" as const,
      content: JSON.stringify({
        claim: state.claim,
        signals: state.signals,
        instructions:
          "Score fraud risk from 0 to 100. Decision must be approve, review, or siu.",
      }),
    },
  ];
  const structuredModel = llm.withStructuredOutput(FraudAssessmentSchema);
  const result = await structuredModel.invoke(prompt);
  return { assessment: result };
};
```
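Even with `withStructuredOutput()`, it pays to treat the model's answer as untrusted at the routing boundary. A small sanitizer that fails closed to `review` is one way to do that; the shape mirrors `FraudAssessmentSchema`, and the fallback policy here is an assumption:

```typescript
type Assessment = {
  riskScore: number;
  decision: "approve" | "review" | "siu";
  reasons: string[];
};

// Fail closed: anything malformed or out of range becomes a human review.
function sanitizeAssessment(raw: Assessment | null): Assessment {
  if (
    raw === null ||
    !Number.isFinite(raw.riskScore) ||
    raw.riskScore < 0 ||
    raw.riskScore > 100 ||
    raw.reasons.length === 0
  ) {
    return { riskScore: 50, decision: "review", reasons: ["invalid model output"] };
  }
  return raw;
}
```

Calling this in `assessFraud` before returning means the router downstream never sees an out-of-contract value.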
4) Route decisions and compile the graph
This is where LangGraph earns its keep. You get explicit branching instead of burying logic inside one giant prompt.
```typescript
const routeDecision = (state: typeof GraphState.State) => {
  // Fail closed: if the model produced nothing usable, send to human review.
  if (!state.assessment) return "review";
  return state.assessment.decision;
};

const logAudit = async (state: typeof GraphState.State) => {
  console.log(
    JSON.stringify({
      claimId: state.claim.claimId,
      signals: state.signals,
      assessment: state.assessment,
      timestamp: new Date().toISOString(),
    }),
  );
  return {};
};

const graph = new StateGraph(GraphState)
  .addNode("enrichSignals", enrichSignals)
  .addNode("assessFraud", assessFraud)
  .addNode("logAudit", logAudit)
  .addEdge(START, "enrichSignals")
  .addEdge("enrichSignals", "assessFraud")
  .addEdge("assessFraud", "logAudit")
  .addConditionalEdges("logAudit", routeDecision, {
    approve: END,
    review: END,
    siu: END,
  })
  .compile();

const result = await graph.invoke({
  claim: {
    claimId: "CLM-10009",
    policyId: "POL-7781",
    claimantName: "Jordan Lee",
    lossDate: "2026-01-02",
    reportedDate: "2026-01-20",
    amount: 32000,
    jurisdiction: "FL",
  },
  signals: {},
  assessment: null,
});

console.log(result);
```
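Business overrides belong outside the model, applied after routing but before anyone acts on the decision. A sketch of that layer; the $25,000 threshold matches the `highValueClaim` signal from earlier, while the override rules themselves are illustrative assumptions:

```typescript
type Decision = "approve" | "review" | "siu";

// Hard rules the model cannot override. Sketch only: real rules come from
// underwriting and SIU policy, not from this file.
function applyOverrides(
  decision: Decision,
  claim: { amount: number; jurisdiction: string },
): Decision {
  // Mandatory human review for high-value claims, even if the model says approve.
  if (decision === "approve" && claim.amount >= 25000) return "review";
  return decision;
}
```

Because this is a pure function over the claim and the proposed decision, it can be audited and tested independently of any LLM behavior.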
Production Considerations
- **Keep PII out of prompts unless necessary.** Mask names, addresses, phone numbers, and payment details before model calls. For insurance workloads, that reduces privacy exposure and simplifies data retention controls.
- **Pin data residency by environment.** If your insurer operates across regions, keep EU claims in EU-hosted infrastructure and avoid cross-border prompt logging. This matters for GDPR-style constraints and internal residency policies.
- **Persist full audit traces.** Store input payload hashes, node outputs, routing decisions, model version, and timestamps. Claims teams need traceability when a customer disputes why a file was escalated.
- **Add hard thresholds outside the model.** High-value claims or certain jurisdictions should trigger mandatory human review regardless of score. Do not let an LLM override underwriting or SIU policy rules.
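The PII point can be sketched as a field-level redactor run before the prompt is built. The field list and placeholder format below are assumptions; extend them per your own data classification:

```typescript
// Fields treated as direct identifiers; extend per your data classification.
const PII_FIELDS = ["claimantName", "address", "phone"] as const;

// Replace PII values with opaque placeholders before building the prompt.
// Returns a copy so the original record stays intact for audit storage.
function redactForPrompt(record: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = { ...record };
  for (const field of PII_FIELDS) {
    if (field in out) out[field] = "[REDACTED]";
  }
  return out;
}
```

Run this on the claim object in the assessment node so only the redacted copy ever reaches the model, while the unredacted record remains available for the audit trail.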
Common Pitfalls
- **Letting the model make all decisions.** If you ask the LLM to both detect fraud and decide disposition without deterministic checks, you get inconsistent behavior. Split verification from interpretation.
- **No structured output validation.** Free-form text is not acceptable for production claims workflows. Always validate against a schema like `zod` before routing anything to SIU or auto-denial logic.
- **Weak audit logging.** Logging only the final decision is not enough. Capture intermediate signals and graph branches so compliance can reproduce the path that led to escalation.
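On the audit point, a common pattern is to store a content hash of the exact input payload alongside the routing branch, so compliance can later prove which inputs produced a given escalation. A Node sketch; the record shape here is an assumption, not a standard:

```typescript
import { createHash } from "node:crypto";

// Deterministic fingerprint of the exact input the graph saw.
function payloadHash(payload: unknown): string {
  return createHash("sha256").update(JSON.stringify(payload)).digest("hex");
}

// Minimal audit record: enough to reproduce the path to escalation.
function auditRecord(claimId: string, payload: unknown, branch: string) {
  return {
    claimId,
    inputHash: payloadHash(payload),
    branch,
    modelVersion: "gpt-4o-mini", // record whatever model version you actually called
    at: new Date().toISOString(),
  };
}
```

Persisting records like this from the `logAudit` node, rather than relying on `console.log`, gives you a queryable trail for disputes and model governance reviews.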
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.