How to Build a Fraud Detection Agent Using LangChain in TypeScript for Insurance

By Cyprian Aarons · Updated 2026-04-21
fraud-detection · langchain · typescript · insurance

A fraud detection agent for insurance takes claim data, policy context, prior loss history, and supporting documents, then scores the claim for suspicious patterns and explains why it was flagged. It matters because fraud leaks margin fast, slows legitimate claims, and creates compliance pressure when decisions are inconsistent or impossible to audit.

Architecture

  • Input adapter
    • Normalizes claim payloads from FNOL, policy admin systems, document stores, and adjuster notes.
  • Risk signal extractor
    • Pulls structured signals like claim amount, loss timing, repair estimates, prior claims, and entity overlap.
  • LLM reasoning layer
    • Uses LangChain to summarize evidence, classify fraud indicators, and produce a rationale with citations.
  • Rules + thresholds
    • Hard checks for obvious cases: duplicate bank account, repeated phone number across claims, high-value claim shortly after policy inception.
  • Case output writer
    • Stores the score, explanation, and evidence trail in a case management system for audit.
  • Human review handoff
    • Routes medium/high-risk claims to SIU or adjusters instead of auto-denying them.
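
The components above can be sketched as TypeScript interfaces to make the data flow concrete. These names and shapes are illustrative, not from LangChain or any library:

```typescript
// Illustrative shapes for the pipeline stages described above.
interface RiskSignals {
  claimAmount: number;
  daysSincePolicyInception: number;
  priorClaims12M: number;
  ruleFlags: string[]; // output of the rules + thresholds stage
}

interface FraudAssessment {
  riskLevel: "low" | "medium" | "high";
  score: number; // 0-100
  reasons: string[]; // rationale stored for the audit trail
  recommendedAction: "auto_process" | "manual_review" | "siu_escalation";
}

// Case output writer: what gets persisted for audit.
interface CaseRecord {
  claimId: string;
  assessment: FraudAssessment;
  evidence: RiskSignals;
  createdAt: string; // ISO timestamp
}

const example: CaseRecord = {
  claimId: "CLM-1001",
  assessment: {
    riskLevel: "medium",
    score: 62,
    reasons: ["loss shortly after inception", "delayed reporting", "prior claims"],
    recommendedAction: "manual_review",
  },
  evidence: {
    claimAmount: 18000,
    daysSincePolicyInception: 12,
    priorClaims12M: 2,
    ruleFlags: ["loss occurred shortly after policy inception"],
  },
  createdAt: new Date().toISOString(),
};

console.log(example.assessment.recommendedAction); // "manual_review"
```

Keeping the assessment and its evidence in one record is what makes the audit trail usable later: SIU can see both the score and the signals behind it.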

Implementation

1) Install dependencies and define the claim schema

Use LangChain’s TypeScript packages with a strict schema so the agent does not freewheel over raw text. In insurance workflows, the input contract matters more than the prompt.

```bash
npm install langchain @langchain/openai zod
```

```typescript
import { z } from "zod";

export const ClaimSchema = z.object({
  claimId: z.string(),
  policyId: z.string(),
  claimantName: z.string(),
  lossDate: z.string(), // ISO date
  reportDate: z.string(), // ISO date
  claimAmount: z.number(),
  incidentType: z.enum(["auto", "property", "health", "life", "liability"]),
  priorClaims12M: z.number(),
  daysSincePolicyInception: z.number(),
  notes: z.string(),
});

export type ClaimInput = z.infer<typeof ClaimSchema>;
```

2) Build a prompt that forces structured fraud analysis

Use ChatPromptTemplate and StructuredOutputParser so the model returns a stable JSON object you can store and audit. This is the pattern you want when legal or SIU asks why a case was flagged.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { z } from "zod";

const FraudAssessmentSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  score: z.number().min(0).max(100),
  reasons: z.array(z.string()).min(3).max(5),
  recommendedAction: z.enum(["auto_process", "manual_review", "siu_escalation"]),
});

const parser = StructuredOutputParser.fromZodSchema(FraudAssessmentSchema);

// The format instructions contain literal braces, which the template would
// otherwise treat as input variables, so inject them as a partial variable.
const prompt = await ChatPromptTemplate.fromMessages([
  [
    "system",
    [
      "You are an insurance fraud triage assistant.",
      "Assess fraud risk from claim facts only.",
      "Do not deny claims. Recommend review actions.",
      "Return only structured output.",
      "{formatInstructions}",
    ].join("\n"),
  ],
  [
    "human",
    `Claim data:
{claimJson}

Insurance fraud signals to consider:
- short time between policy inception and loss
- high severity relative to coverage context
- repeated claims in last 12 months
- inconsistent narrative in notes
- suspicious timing between report date and loss date`,
  ],
]).partial({ formatInstructions: parser.getFormatInstructions() });

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
```

3) Add deterministic pre-checks before the LLM

Do not send every claim straight to the model. Use basic rules first; they are cheaper, explainable, and easier to defend in audits.

```typescript
function ruleBasedFlags(claim: ClaimInput): string[] {
  const flags: string[] = [];

  const lossDate = new Date(claim.lossDate).getTime();
  const reportDate = new Date(claim.reportDate).getTime();
  const daysToReport = (reportDate - lossDate) / (1000 * 60 * 60 * 24);

  if (claim.daysSincePolicyInception < 30) {
    flags.push("loss occurred shortly after policy inception");
  }
  if (claim.priorClaims12M >= 3) {
    flags.push("multiple prior claims in last 12 months");
  }
  if (daysToReport > 14) {
    flags.push("delayed reporting window");
  }
  if (claim.claimAmount > 50000 && claim.incidentType === "property") {
    flags.push("high-value property claim");
  }

  return flags;
}
```
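
The date math behind the delayed-reporting rule is easy to get wrong with time zones, so it is worth verifying in isolation. A standalone sketch (the sample dates are made up):

```typescript
// Days between loss and report, using millisecond timestamps.
// ISO date-only strings parse as UTC midnight, so no DST drift here.
function daysBetween(lossDate: string, reportDate: string): number {
  const ms = new Date(reportDate).getTime() - new Date(lossDate).getTime();
  return ms / (1000 * 60 * 60 * 24);
}

// A 21-day gap would trip the 14-day delayed-reporting rule.
console.log(daysBetween("2026-01-01", "2026-01-22")); // 21
```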

4) Compose the LangChain pipeline and execute it

This is the actual agent flow: validate input, attach rule-based context, call the model through a chain built with RunnableSequence, then parse structured output.

```typescript
import { RunnableSequence } from "@langchain/core/runnables";

const fraudChain = RunnableSequence.from([
  // Validate input and attach deterministic rule flags as context.
  async (input: ClaimInput) => {
    const validated = ClaimSchema.parse(input);
    const flags = ruleBasedFlags(validated);

    return {
      claimJson: JSON.stringify({
        ...validated,
        ruleFlags: flags,
      }),
    };
  },
  prompt,
  model,
  parser,
]);

// Usage: returns an object matching FraudAssessmentSchema.
// const assessment = await fraudChain.invoke(claimInput);
```
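
Once the chain returns a structured assessment, merge the rule flags back in deterministically so hard signals are never overridden by the model. A minimal sketch; the escalation policy here is an assumption for illustration, not a standard:

```typescript
type RiskLevel = "low" | "medium" | "high";

interface Assessment {
  riskLevel: RiskLevel;
  score: number;
}

// Escalate the final risk level when deterministic rules fired,
// regardless of what the model concluded.
function mergeRiskLevel(llm: Assessment, ruleFlags: string[]): RiskLevel {
  if (ruleFlags.length >= 2) return "high"; // multiple hard signals
  if (ruleFlags.length === 1 && llm.riskLevel === "low") return "medium";
  return llm.riskLevel;
}

console.log(mergeRiskLevel({ riskLevel: "low", score: 20 }, ["delayed reporting window"])); // "medium"
```

This keeps the final decision defensible: an auditor can see that a flagged claim was escalated by a rule, not by an opaque model judgment.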

---

## Keep learning

- [The complete AI Agents Roadmap](/blog/ai-agents-roadmap-2026) — my full 8-step breakdown
- [Free: The AI Agent Starter Kit](/starter-kit) — PDF checklist + starter code
- [Work with me](/contact) — I build AI for banks and insurance companies

*By Cyprian Aarons, AI Consultant at [Topiax](https://topiax.xyz).*
