How to Build a Claims Processing Agent Using LangChain in TypeScript for Fintech

By Cyprian Aarons · Updated 2026-04-21
claims-processing · langchain · typescript · fintech

A claims processing agent for fintech takes a customer-submitted claim, extracts the relevant facts, checks policy or product rules, routes the case to the right workflow, and drafts a decision or request for more evidence. It matters because claims handling is one of the highest-volume, highest-risk operations in fintech: every manual review adds cost, delay, and compliance exposure.

Architecture

  • Ingress layer

    • Receives claim payloads from API, webhook, or internal queue.
    • Normalizes input into a stable schema before any LLM call.
  • Document extraction layer

    • Pulls structured fields from claim notes, receipts, KYC artifacts, chargeback evidence, or transaction logs.
    • Uses LangChain’s ChatOpenAI with structured output so downstream logic gets typed data.
  • Policy/rules engine

    • Applies deterministic checks for eligibility, thresholds, exclusions, and jurisdiction-specific rules.
    • Keeps hard compliance logic out of the model.
  • Decision orchestration

    • Uses a LangChain RunnableSequence or RunnableBranch to decide whether to auto-approve, reject, or escalate.
    • Calls tools for ledger lookup, transaction verification, and case management.
  • Audit and evidence store

    • Persists prompts, outputs, tool calls, model version, timestamps, and final decision.
    • Required for auditability and dispute handling in fintech.
  • Human review handoff

    • Escalates low-confidence or high-risk cases to an analyst queue.
    • Preserves all extracted evidence and the model rationale.
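The ingress layer above is worth sketching in code, because it is the one place where every claim is forced into a single shape. A minimal normalizer might look like this; the field names and snake_case fallbacks are illustrative assumptions, not a fixed contract:

```typescript
// Illustrative ingress normalizer. Field names and fallbacks are assumptions;
// the point is that every downstream layer sees one stable shape, and bad
// payloads fail before any LLM call is made.
type RawPayload = Record<string, unknown>;

interface NormalizedClaim {
  claimId: string;
  customerId: string;
  amount: number;
  currency: string;
  description: string;
}

function normalizeClaim(raw: RawPayload): NormalizedClaim {
  // Accept common camelCase/snake_case spellings and coerce types explicitly.
  const amount = Number(raw["amount"]);
  if (!Number.isFinite(amount) || amount <= 0) {
    throw new Error(`invalid claim amount: ${String(raw["amount"])}`);
  }
  return {
    claimId: String(raw["claimId"] ?? raw["claim_id"] ?? ""),
    customerId: String(raw["customerId"] ?? raw["customer_id"] ?? ""),
    amount,
    currency: String(raw["currency"] ?? "").toUpperCase(),
    description: String(raw["description"] ?? ""),
  };
}
```

Rejecting malformed payloads here means the extraction and policy layers never have to defend against them.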

Implementation

1) Define the claim schema and model client

Start by forcing structure. In claims workflows, free-form text is the fastest way to create brittle downstream code.

import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

const ClaimSchema = z.object({
  claimId: z.string(),
  customerId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  category: z.enum(["fraud", "chargeback", "failed_transfer", "billing_error"]),
  description: z.string(),
  country: z.string(),
});

const ExtractionSchema = z.object({
  summary: z.string(),
  riskLevel: z.enum(["low", "medium", "high"]),
  missingEvidence: z.array(z.string()),
  recommendedAction: z.enum(["approve", "reject", "escalate"]),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

Use temperature: 0 for operational consistency. It does not make outputs fully deterministic, but claims systems need repeatable behavior more than creative phrasing.

2) Extract structured facts from the claim

LangChain’s withStructuredOutput() gives you typed output that you can validate before any business decision runs.

async function extractClaimFacts(input: unknown) {
  const claim = ClaimSchema.parse(input);

  const extractor = llm.withStructuredOutput(ExtractionSchema);

  const result = await extractor.invoke([
    {
      role: "system",
      content:
        "You are a claims processing assistant for a fintech platform. Extract only factual information relevant to decisioning.",
    },
    {
      role: "user",
      content: JSON.stringify(claim),
    },
    {
      role: "user",
      content:
        "Return a concise summary, assess risk based on missing evidence or suspicious wording, and recommend one action.",
    },
  ]);

  return { claim, extraction: result };
}

This pattern keeps the LLM on a short leash. The model interprets text; your code owns policy.
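Even with withStructuredOutput(), it pays to fail closed: any output shape you did not expect should route to human review, never onward to an approval. A hand-rolled guard mirroring ExtractionSchema shows the idea (in the real flow, zod performs this check; the function names here are illustrative):

```typescript
// Fail-closed guard: any extraction output that does not match the expected
// shape routes to human review rather than continuing automated decisioning.
// This mirrors ExtractionSchema by hand; in the real flow zod does this work.
const RISK_LEVELS = ["low", "medium", "high"] as const;
const ACTIONS = ["approve", "reject", "escalate"] as const;

function isValidExtraction(value: unknown): boolean {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.summary === "string" &&
    typeof v.riskLevel === "string" &&
    (RISK_LEVELS as readonly string[]).includes(v.riskLevel) &&
    Array.isArray(v.missingEvidence) &&
    v.missingEvidence.every((e) => typeof e === "string") &&
    typeof v.recommendedAction === "string" &&
    (ACTIONS as readonly string[]).includes(v.recommendedAction)
  );
}

// Invalid shapes never become approvals.
function routeExtraction(value: unknown): "continue" | "escalate" {
  return isValidExtraction(value) ? "continue" : "escalate";
}
```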

3) Add deterministic policy checks and branching

Use normal TypeScript for rules that must never drift. Then branch into approval or escalation using LangChain runnables.

import { RunnableBranch, RunnableLambda } from "@langchain/core/runnables";

function applyPolicy(claim: z.infer<typeof ClaimSchema>, extraction: z.infer<typeof ExtractionSchema>) {
  const requiresReview =
    claim.amount > 5000 ||
    extraction.riskLevel === "high" ||
    extraction.missingEvidence.length > 0 ||
    claim.country !== "SG"; // example residency rule

  if (requiresReview) {
    return { status: "escalate" as const };
  }

  return { status: extraction.recommendedAction };
}

const approveStep = RunnableLambda.from(async ({ claim }: any) => ({
  decision: "approved",
  claimId: claim.claimId,
}));

const escalateStep = RunnableLambda.from(async ({ claim }: any) => ({
  decision: "escalated",
  claimId: claim.claimId,
}));

const decisionFlow = RunnableBranch.from([
  [
    (input: any) => input.policy.status === "approve",
    approveStep,
  ],
  // In LangChain JS, the default branch is the last element of the array.
  escalateStep,
]);

export async function processClaim(input: unknown) {
  const { claim, extraction } = await extractClaimFacts(input);
  const policy = applyPolicy(claim, extraction);
  return decisionFlow.invoke({ claim, extraction, policy });
}

In production you would replace the placeholder approval/escalation steps with tool calls to your case management system. The important part is that branching is explicit and testable.
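Because applyPolicy is plain TypeScript, the rules can be unit-tested with no model in the loop. A sketch, inlining the same thresholds used above so it stands alone:

```typescript
// Self-contained copy of the example policy rules, written so the thresholds
// can be asserted directly. Values mirror the applyPolicy example above.
interface PolicyInput {
  amount: number;
  country: string;
  riskLevel: "low" | "medium" | "high";
  missingEvidence: string[];
  recommendedAction: "approve" | "reject" | "escalate";
}

function decide(input: PolicyInput): "approve" | "reject" | "escalate" {
  const requiresReview =
    input.amount > 5000 ||
    input.riskLevel === "high" ||
    input.missingEvidence.length > 0 ||
    input.country !== "SG"; // example residency rule
  return requiresReview ? "escalate" : input.recommendedAction;
}
```

Every threshold change becomes a one-line test diff, which is what auditors and reviewers want to see.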

4) Persist audit data before returning a decision

Fintech claims need traceability. Store what the model saw and what it returned so compliance teams can reconstruct every action later.

type AuditRecord = {
  claimId: string;
  modelName?: string;
  decision?: unknown;
  inputHash?: string;
  extraction?: unknown;
  policy?: unknown;
  createdAt: string;
};

async function writeAudit(record: AuditRecord) {
  // Replace with durable, append-only storage in production.
  console.log("AUDIT", JSON.stringify(record));
}

export async function handleClaim(input: unknown) {
  const { claim, extraction } = await extractClaimFacts(input);
  const policy = applyPolicy(claim, extraction);
  const decision = await decisionFlow.invoke({ claim, extraction, policy });

  await writeAudit({
    claimId: claim.claimId,
    modelName: "gpt-4o-mini",
    decision,
    extraction,
    policy,
    createdAt: new Date().toISOString(),
  });

  return decision;
}

If you already have Kafka or Postgres in place, replace console.log with durable storage. The key requirement is immutable auditability.
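The inputHash field on AuditRecord points at a useful pattern: hash the exact payload the model saw, so you can later prove which input produced which decision. A minimal sketch using Node's built-in crypto module (the canonicalization here sorts keys, so property order does not change the hash):

```typescript
import { createHash } from "node:crypto";

// Stable SHA-256 hash of a claim payload. Serializing with sorted keys makes
// the hash independent of property order, so identical claims hash identically.
function hashClaimInput(input: Record<string, unknown>): string {
  const canonical = JSON.stringify(input, Object.keys(input).sort());
  return createHash("sha256").update(canonical).digest("hex");
}
```

Store the hex digest alongside the raw payload; if the two ever disagree during a dispute, you know the stored evidence was altered.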

Production Considerations

  • Data residency

    • Keep PII and financial data inside approved regions.
    • If your provider supports regional endpoints, pin them explicitly and document where prompts are processed.
  • Compliance controls

    • Separate deterministic policy logic from LLM reasoning.
    • Enforce approval thresholds for high-value claims.
    • Log every tool call and final disposition for audit review.
  • Monitoring

    • Track extraction accuracy by label set.
    • Monitor escalation rate, false approvals, and latency p95/p99.
    • Alert on schema validation failures and unexpected null fields.
  • Guardrails

    • Reject inputs that contain unsupported document types or missing identity anchors.
    • Redact account numbers before sending text to the model.
    • Use allowlisted tools only; never let the agent call arbitrary endpoints.
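The redaction guardrail can be a simple pre-processing pass over claim text. A minimal sketch; the pattern is an assumption, and real deployments should pair regexes with a Luhn check and format-aware rules per scheme:

```typescript
// Illustrative redaction pass: masks long digit runs (card/account numbers)
// before claim text reaches the model. The regex is an assumption; production
// systems should add a Luhn check and per-scheme format rules.
function redactAccountNumbers(text: string): string {
  // 12-19 consecutive digits, optionally separated by spaces or dashes.
  return text.replace(/\b(?:\d[ -]?){12,19}\b/g, "[REDACTED_ACCOUNT]");
}
```

Run this on the description field before extractClaimFacts, and keep the unredacted original only in the encrypted evidence store.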

Common Pitfalls

  1. Letting the model make policy decisions

    • Fix this by keeping eligibility rules in TypeScript or a rules engine.
    • The LLM should extract facts and draft reasoning; it should not own compliance logic.
  2. Skipping schema validation

    • Fix this by parsing inputs with zod before invocation and validating outputs from withStructuredOutput().
    • Invalid shapes should fail closed and route to human review.
  3. Ignoring audit requirements

    • Fix this by storing prompt inputs, extracted fields, tool results, timestamps, model version, and final outcome.
    • In fintech disputes, if you cannot explain why a claim was approved or rejected, you do not have a production system.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
