How to Build a Claims Processing Agent Using LangGraph in TypeScript for Lending

By Cyprian Aarons · Updated 2026-04-21
Tags: claims-processing, langgraph, typescript, lending

A claims processing agent for lending takes an incoming borrower claim, validates the request against policy and loan data, routes it through the right checks, and produces a decision package for an adjuster or operations team. It matters because lending claims are not just workflow problems; they are compliance, audit, and customer-impact problems, and bad automation here creates regulatory risk fast.

Architecture

Build this agent as a small graph with explicit state, not a single prompt chain.

  • Claim intake node

    • Normalizes raw claim payloads from web, email, or internal case systems.
    • Extracts borrower ID, loan ID, claim type, incident date, and supporting docs.
  • Eligibility validation node

    • Checks policy rules: active loan status, coverage window, required documents, and jurisdiction.
    • Rejects or flags incomplete claims before any LLM reasoning.
  • Evidence analysis node

    • Summarizes documents and extracts structured facts from PDFs, notes, and correspondence.
    • Keeps the model on a narrow schema so it cannot invent facts.
  • Decision node

    • Produces approve / deny / needs-review outcomes with reason codes.
    • Uses deterministic rules first, then LLM assistance only where classification is ambiguous.
  • Audit logging node

    • Writes every state transition, input hash, and decision rationale to an immutable audit store.
    • This is mandatory for lending teams that need traceability for disputes and regulators.
  • Escalation node

    • Routes edge cases to a human reviewer when confidence is low or policy conflicts exist.
    • Prevents automated denial on incomplete evidence.
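Before wiring anything up, the intake node's normalization step can be sketched as a pure function. This is a minimal sketch with assumed field names (`claim_id`, `incident_date`, and so on); real case systems will differ:

```typescript
// Hypothetical raw payload shape; the snake_case field names are assumptions,
// not a real case-system contract.
type RawClaim = Record<string, unknown>;

interface NormalizedClaim {
  claimId: string;
  borrowerId: string;
  loanId: string;
  claimType: string;
  incidentDate: string; // YYYY-MM-DD, or "" when unparseable
  documents: string[];
}

const normalizeClaim = (raw: RawClaim): NormalizedClaim => {
  const str = (v: unknown): string => (typeof v === "string" ? v.trim() : "");
  const rawDate = str(raw.incident_date ?? raw.incidentDate);
  return {
    claimId: str(raw.claim_id ?? raw.claimId),
    borrowerId: str(raw.borrower_id ?? raw.borrowerId),
    loanId: str(raw.loan_id ?? raw.loanId),
    claimType: str(raw.claim_type ?? raw.claimType).toLowerCase(),
    // Unparseable dates become "" so the validation node can flag the claim.
    incidentDate: Number.isNaN(Date.parse(rawDate))
      ? ""
      : new Date(rawDate).toISOString().slice(0, 10),
    documents: Array.isArray(raw.documents) ? raw.documents.map(String) : [],
  };
};
```

Normalizing before validation keeps the eligibility node simple: it only ever sees one shape, regardless of which channel the claim arrived through.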

Implementation

1. Define the state and graph shape

Use Annotation.Root to define typed state. Keep the state small and auditable; do not dump raw documents into every node.

import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

type ClaimDecision = "approve" | "deny" | "needs_review";

const ClaimState = Annotation.Root({
  claimId: Annotation<string>(),
  borrowerId: Annotation<string>(),
  loanId: Annotation<string>(),
  claimType: Annotation<string>(),
  incidentDate: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  documents: Annotation<string[]>(),
  eligibility: Annotation<{
    eligible: boolean;
    reasons: string[];
  }>(),
  evidenceSummary: Annotation<string>(),
  decision: Annotation<ClaimDecision>(),
  reasonCodes: Annotation<string[]>(),
});

type ClaimStateType = typeof ClaimState.State;

2. Add deterministic validation before any model call

For lending workflows, hard rules should fail fast. If the loan is inactive or the jurisdiction is unsupported, do not spend tokens trying to “reason” around it.

const validateEligibility = async (state: ClaimStateType) => {
  const reasons: string[] = [];

  if (!state.loanId) reasons.push("MISSING_LOAN_ID");
  if (!state.borrowerId) reasons.push("MISSING_BORROWER_ID");
  if (!state.documents?.length) reasons.push("MISSING_SUPPORTING_DOCUMENTS");

  const supportedJurisdictions = ["US", "CA", "UK"];
  if (!supportedJurisdictions.includes(state.jurisdiction)) {
    reasons.push("UNSUPPORTED_JURISDICTION");
  }

  return {
    eligibility: {
      eligible: reasons.length === 0,
      reasons,
    },
    decision: reasons.length === 0 ? "needs_review" : "deny",
    reasonCodes: reasons,
  };
};
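The architecture section also calls for a coverage-window check, which the validator above does not yet implement. A hedged sketch, where the 180-day window is an assumed policy value rather than a real lending rule:

```typescript
// Hypothetical coverage-window rule: the 180-day limit is an assumed policy
// value for illustration, not taken from any real lending policy.
const COVERAGE_WINDOW_DAYS = 180;

const withinCoverageWindow = (
  incidentDate: string,
  filedDate: string,
  windowDays: number = COVERAGE_WINDOW_DAYS,
): boolean => {
  const incident = Date.parse(incidentDate);
  const filed = Date.parse(filedDate);
  // Unparseable dates fail closed: the claim gets flagged, not approved.
  if (Number.isNaN(incident) || Number.isNaN(filed)) return false;
  const elapsedDays = (filed - incident) / (1000 * 60 * 60 * 24);
  return elapsedDays >= 0 && elapsedDays <= windowDays;
};
```

A failed window check would push a `COVERAGE_WINDOW_EXPIRED` reason code into the same `reasons` array as the other deterministic checks.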

3. Add an evidence summarizer with ChatOpenAI and llm.invoke

This is where LangGraph earns its keep. The model should only summarize evidence into a bounded structure; do not let it decide policy by itself.

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const summarizeEvidence = async (state: ClaimStateType) => {
  const prompt = `
You are summarizing a lending claim for internal review.
Return only concise factual observations.
Do not speculate.
Claim type: ${state.claimType}
Incident date: ${state.incidentDate}
Documents:
${state.documents.map((d) => `- ${d}`).join("\n")}
`;

  const result = await llm.invoke([new HumanMessage(prompt)]);

  return {
    evidenceSummary:
      typeof result.content === "string" ? result.content : JSON.stringify(result.content),
    reasonCodes: [...(state.reasonCodes ?? []), "EVIDENCE_SUMMARIZED"],
    decision: state.decision ?? "needs_review",
  };
};
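To keep the model on a narrow schema, as step 3 recommends, one option is to ask for JSON and validate it before it touches graph state. A minimal hand-rolled validator; the field names are illustrative, not a fixed contract:

```typescript
// Illustrative fact schema for summarized evidence; field names are
// assumptions for this sketch.
interface EvidenceFacts {
  incidentConfirmed: boolean;
  documentCount: number;
  notes: string;
}

// Returns the parsed facts, or null when model output does not match the
// schema, so malformed output never enters graph state.
const parseEvidenceFacts = (raw: string): EvidenceFacts | null => {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return null;
  }
  if (typeof parsed !== "object" || parsed === null) return null;
  const obj = parsed as Record<string, unknown>;
  if (
    typeof obj.incidentConfirmed !== "boolean" ||
    typeof obj.documentCount !== "number" ||
    typeof obj.notes !== "string"
  ) {
    return null;
  }
  return {
    incidentConfirmed: obj.incidentConfirmed,
    documentCount: obj.documentCount,
    notes: obj.notes.slice(0, 2000), // bound the only free-text field
  };
};
```

A `null` result would route the claim to the escalation node rather than retrying silently, which keeps the failure visible in the audit trail.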

4. Wire the graph with conditional routing

Use StateGraph, addNode, addEdge, and addConditionalEdges. Keep routing explicit so reviewers can inspect why a claim moved forward or stopped.

const routeAfterValidation = (state: ClaimStateType) => {
  if (!state.eligibility?.eligible) return "end";
  return "summarizeEvidence";
};

const routeToFinalDecision = (state: ClaimStateType) => {
  if (state.evidenceSummary?.includes("insufficient")) return "needsReview";
  return "finalize";
};

const buildClaimGraph = () => {
  const graph = new StateGraph(ClaimState)
    .addNode("validateEligibility", validateEligibility)
    .addNode("summarizeEvidence", summarizeEvidence)
    .addNode("finalize", async (state: ClaimStateType) => ({
      decision: state.decision,
      reasonCodes: [...(state.reasonCodes ?? []), "FINALIZED"],
    }))
    .addNode("needsReview", async (state: ClaimStateType) => ({
      decision: "needs_review" as ClaimDecision,
      // Append rather than overwrite so earlier reason codes survive for audit.
      reasonCodes: [...(state.reasonCodes ?? []), "ESCALATED_TO_HUMAN_REVIEW"],
    }))
    .addEdge(START, "validateEligibility")
    .addConditionalEdges("validateEligibility", routeAfterValidation, {
      summarizeEvidence: "summarizeEvidence",
      end: END,
    })
    .addConditionalEdges("summarizeEvidence", routeToFinalDecision, {
      finalize: "finalize",
      needsReview: "needsReview",
    })
    .addEdge("finalize", END)
    .addEdge("needsReview", END);

  return graph.compile();
};

export const claimAgent = buildClaimGraph();
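Because the routers are plain functions, they can be sanity-checked without compiling the graph or calling a model. Condensed copies appear below so the snippet stands alone; the real functions take the full `ClaimStateType`:

```typescript
// Condensed copies of the step 4 routers, narrowed to the fields they read.
type RouteState = {
  eligibility?: { eligible: boolean; reasons: string[] };
  evidenceSummary?: string;
};

const routeAfterValidation = (state: RouteState) =>
  state.eligibility?.eligible ? "summarizeEvidence" : "end";

const routeToFinalDecision = (state: RouteState) =>
  state.evidenceSummary?.includes("insufficient") ? "needsReview" : "finalize";
```

Keeping routers this small is what makes the "reviewers can inspect why a claim moved forward" promise cheap to honor: each routing decision is a one-line predicate over named state fields.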

Production Considerations

  • Auditability

    • Persist every input snapshot, output snapshot, and route taken through the graph.
    • Store a hash of source documents so disputes can be reconstructed later without reprocessing everything.
  • Data residency

    • Keep borrower PII in-region and use regional model endpoints where required.
    • For cross-border lending portfolios, separate graphs by jurisdiction instead of sharing one global runtime.
  • Compliance guardrails

    • Hard-block decisions on missing mandatory fields like consent artifacts or signed forms.
    • Add policy checks for adverse action language so denials map to approved reason codes only.
  • Monitoring

    • Track escalation rate, denial rate by jurisdiction, average time in each node, and human override rate.
    • A spike in needs_review often means upstream document extraction is failing or policy rules changed.
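The metrics above are simple to compute from the audit store. A sketch, assuming a hypothetical `DecisionLog` row shape:

```typescript
// Hypothetical decision-log row; in practice these would come from the
// immutable audit store described earlier.
interface DecisionLog {
  jurisdiction: string;
  decision: "approve" | "deny" | "needs_review";
  humanOverride: boolean;
}

const escalationRate = (logs: DecisionLog[]): number =>
  logs.length === 0
    ? 0
    : logs.filter((l) => l.decision === "needs_review").length / logs.length;

const denialRateByJurisdiction = (logs: DecisionLog[]): Record<string, number> => {
  const byRegion: Record<string, { denials: number; total: number }> = {};
  for (const log of logs) {
    if (!byRegion[log.jurisdiction]) byRegion[log.jurisdiction] = { denials: 0, total: 0 };
    const bucket = byRegion[log.jurisdiction];
    bucket.total += 1;
    if (log.decision === "deny") bucket.denials += 1;
  }
  return Object.fromEntries(
    Object.entries(byRegion).map(([region, { denials, total }]) => [region, denials / total]),
  );
};
```

Alerting on a moving average of these numbers, rather than raw counts, avoids paging the team every time claim volume fluctuates.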

Common Pitfalls

  1. Letting the LLM make policy decisions

    • Bad pattern: “decide approve/deny from raw docs.”
    • Fix it by separating deterministic eligibility checks from narrative summarization and final human review.
  2. Overloading graph state with raw files

    • Passing full PDFs through every node makes latency worse and leaks sensitive data across steps.
    • Keep state to IDs, extracted facts, summaries, and decision metadata. Store originals in secure object storage.
  3. Ignoring jurisdiction-specific rules

    • Lending claims vary by region on disclosure timing, retention periods, and adverse action requirements.
    • Build jurisdiction into the routing logic early so US claims do not share the same path as UK or CA claims unless the policy layer explicitly supports it.
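One way to build jurisdiction into routing early is a per-region policy table consulted before any graph runs. The values below are placeholders for illustration, not real regulatory figures:

```typescript
// Illustrative per-jurisdiction policy table; the day and year values are
// placeholders, not real regulatory requirements.
interface JurisdictionPolicy {
  graphName: string;
  adverseActionNoticeDays: number;
  retentionYears: number;
}

const policies: Record<string, JurisdictionPolicy> = {
  US: { graphName: "usClaimGraph", adverseActionNoticeDays: 30, retentionYears: 7 },
  UK: { graphName: "ukClaimGraph", adverseActionNoticeDays: 14, retentionYears: 6 },
  CA: { graphName: "caClaimGraph", adverseActionNoticeDays: 30, retentionYears: 7 },
};

// Unsupported regions never fall through to a supported region's path.
const selectGraph = (jurisdiction: string): string =>
  policies[jurisdiction]?.graphName ?? "unsupportedJurisdiction";
```

Selecting the graph from a table like this makes adding a new region a data change plus a review, rather than a rewrite of shared routing logic.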

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
