How to Build a Claims Processing Agent Using LangChain in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
claims-processing · langchain · typescript · pension-funds

A claims processing agent for pension funds takes incoming claim packets, checks them against policy rules, extracts the required fields, flags missing evidence, and routes the case to approval or human review. It matters because pension claims are high-trust workflows: mistakes create compliance risk, delay member payouts, and leave operations and regulators without a clean audit trail.

Architecture

  • Document ingestion layer

    • Accepts PDFs, scanned forms, emails, and attachments.
    • Normalizes files into text chunks with metadata like memberId, claimId, and source system.
  • Extraction chain

    • Uses LangChain to pull structured fields from claim documents.
    • Extracts things like identity details, employment history, benefit type, dates, and supporting evidence.
  • Rules and eligibility checker

    • Applies pension-specific policy logic outside the LLM.
    • Validates age thresholds, vesting rules, death benefit requirements, disability evidence, and jurisdiction-specific constraints.
  • Decision router

    • Sends low-risk claims to auto-processing.
    • Escalates ambiguous or incomplete claims to a human caseworker.
  • Audit logger

    • Stores inputs, model outputs, rule decisions, timestamps, and reviewer actions.
    • Required for traceability during internal audits and regulator reviews.
  • Secure data boundary

    • Enforces residency and access controls.
    • Keeps member data inside approved regions and redacts sensitive fields before model calls where needed.
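Before any LLM work, the ingestion layer can be sketched as plain types plus a chunking helper. The names here (ChunkMetadata, normalizePacketText) and the 1000-character chunk size are illustrative assumptions, not part of the implementation that follows:

```typescript
// Claim-level metadata attached to every chunk produced from a packet.
interface ChunkMetadata {
  memberId: string;
  claimId: string;
  sourceSystem: string; // e.g. "email", "scan", "portal"
}

// A normalized unit of text ready for downstream extraction.
interface TextChunk {
  text: string;
  index: number; // position of the chunk within the document
  metadata: ChunkMetadata;
}

// Split extracted document text into fixed-size chunks, attaching the
// same claim-level metadata to each one.
function normalizePacketText(
  raw: string,
  metadata: ChunkMetadata,
  chunkSize = 1000
): TextChunk[] {
  const chunks: TextChunk[] = [];
  for (let i = 0; i < raw.length; i += chunkSize) {
    chunks.push({
      text: raw.slice(i, i + chunkSize),
      index: chunks.length,
      metadata,
    });
  }
  return chunks;
}
```

In practice you would chunk on semantic boundaries rather than raw character counts, but the shape of the output is what matters: every chunk carries the identifiers the audit logger needs later.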

Implementation

1. Install the core packages

For this pattern you want LangChain plus a model provider package. The example below uses OpenAI-compatible chat models through LangChain’s current API.

npm install langchain @langchain/openai zod

Set your environment variables before running anything:

export OPENAI_API_KEY="your-key"

2. Define the claim schema and extraction chain

Use a strict schema so the model returns structured output you can validate before any business decision is made. In pension workflows, this is non-negotiable because free-form text is not an auditable contract.

import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";
import { PromptTemplate } from "@langchain/core/prompts";

const ClaimSchema = z.object({
  claimId: z.string(),
  memberId: z.string(),
  claimType: z.enum(["retirement", "death_benefit", "disability", "refund"]),
  dateOfBirth: z.string(),
  employmentEndDate: z.string().nullable(),
  documentsProvided: z.array(z.string()),
  missingDocuments: z.array(z.string()),
});

type Claim = z.infer<typeof ClaimSchema>;

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Literal braces in the JSON shape are escaped as {{ }} so that
// PromptTemplate treats only {packet} as a template variable.
const prompt = PromptTemplate.fromTemplate(`
You are extracting pension claim data for a regulated claims workflow.
Return only valid JSON matching this shape:
{{
  "claimId": string,
  "memberId": string,
  "claimType": "retirement" | "death_benefit" | "disability" | "refund",
  "dateOfBirth": string,
  "employmentEndDate": string | null,
  "documentsProvided": string[],
  "missingDocuments": string[]
}}

Claim packet:
{packet}
`);

export async function extractClaim(packet: string): Promise<Claim> {
  const chain = prompt.pipe(llm);
  const result = await chain.invoke({ packet });
  const parsed = ClaimSchema.parse(JSON.parse(result.content as string));
  return parsed;
}

3. Add deterministic eligibility checks

Do not ask the LLM to decide eligibility. Use code for policy rules and let the model handle extraction and summarization only. That gives you repeatability and makes audit reviews much easier.

function isEligibleForAutoProcessing(claim: Claim): { eligible: boolean; reason?: string } {
  if (claim.missingDocuments.length > 0) {
    return { eligible: false, reason: `Missing documents: ${claim.missingDocuments.join(", ")}` };
  }

  if (!claim.dateOfBirth) {
    return { eligible: false, reason: "Missing date of birth" };
  }

  if (claim.claimType === "disability" && !claim.documentsProvided.includes("medical_report")) {
    return { eligible: false, reason: "Disability claims require medical_report" };
  }

  return { eligible: true };
}
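To sanity-check the rules in isolation, you can exercise the checker against a small fixture before wiring it into the chain. The snippet repeats a trimmed copy of the function so it runs standalone, and the sample claim values are invented:

```typescript
// Trimmed copy of the claim shape and rules from above, repeated so
// this snippet runs on its own.
type ClaimFixture = {
  claimType: "retirement" | "death_benefit" | "disability" | "refund";
  dateOfBirth: string;
  documentsProvided: string[];
  missingDocuments: string[];
};

function checkEligibility(claim: ClaimFixture): { eligible: boolean; reason?: string } {
  if (claim.missingDocuments.length > 0) {
    return { eligible: false, reason: `Missing documents: ${claim.missingDocuments.join(", ")}` };
  }
  if (!claim.dateOfBirth) {
    return { eligible: false, reason: "Missing date of birth" };
  }
  if (claim.claimType === "disability" && !claim.documentsProvided.includes("medical_report")) {
    return { eligible: false, reason: "Disability claims require medical_report" };
  }
  return { eligible: true };
}

// A disability claim without a medical report should route to review.
const result = checkEligibility({
  claimType: "disability",
  dateOfBirth: "1961-03-02",
  documentsProvided: ["id_card"],
  missingDocuments: [],
});
console.log(result.eligible, result.reason);
```

Because the rules are plain code, fixtures like this can live in a normal unit-test suite that compliance owners review alongside the policy documents.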

4. Route to auto-approve or human review

This is the actual production pattern: extract, validate, apply rules, then route. The agent should never directly “decide” on a pension payout without policy checks around it.

import { HumanMessage } from "@langchain/core/messages";

const reviewPrompt = PromptTemplate.fromTemplate(`
Summarize this pension claim for a human reviewer in concise operational language.
Include risk flags and missing items only.

Claim JSON:
{claimJson}

Eligibility result:
{eligibilityJson}
`);

export async function processClaim(packet: string) {
  const claim = await extractClaim(packet);
  const eligibility = isEligibleForAutoProcessing(claim);

  if (eligibility.eligible) {
    return {
      status: "auto_process",
      claim,
      eligibility,
      auditEvent: {
        action: "auto_process",
        timestamp: new Date().toISOString(),
      },
    };
  }

  const summaryChain = reviewPrompt.pipe(llm);
  const summary = await summaryChain.invoke({
    claimJson: JSON.stringify(claim),
    eligibilityJson: JSON.stringify(eligibility),
  });

  return {
    status: "human_review",
    claim,
    eligibility,
    reviewerSummary: summary.content,
    auditEvent: {
      action: "human_review",
      timestamp: new Date().toISOString(),
    },
  };
}

Production Considerations

  • Keep member data in-region

    • Pension funds often have hard residency requirements.
    • Deploy model endpoints in approved jurisdictions and block cross-region logging by default.
  • Log every decision point

    • Store raw input hashes, extracted fields, rule outcomes, prompts used, model version, and reviewer actions.
    • If you cannot reconstruct why a claim was routed a certain way, your audit story is weak.
  • Add guardrails around PII

    • Redact national IDs, bank details, medical notes, and beneficiary information before sending text to the model when possible.
    • Use allowlisted fields in prompts instead of dumping entire documents into context.
  • Monitor drift in document formats

    • Claims packets change over time as administrators update forms.
    • Track extraction accuracy by document template version so you catch failures before they hit members.
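One way to enforce the PII guardrail above is to project claim records onto an allowlist before anything reaches a prompt. The field names and the digit-masking pattern below are illustrative assumptions, not a complete PII strategy:

```typescript
// Fields permitted to appear in model prompts. Anything else
// (bank details, national IDs, medical notes) is dropped.
const PROMPT_ALLOWLIST = [
  "claimId",
  "claimType",
  "documentsProvided",
  "missingDocuments",
] as const;

// Project an arbitrary claim record onto the allowlisted fields only.
function toPromptSafe(record: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const key of PROMPT_ALLOWLIST) {
    if (key in record) safe[key] = record[key];
  }
  return safe;
}

// Crude example redaction for free text: mask long digit runs that
// could be account or ID numbers. Real systems need dedicated PII
// detection tooling, not a single regex.
function redactDigits(text: string): string {
  return text.replace(/\d{6,}/g, "[REDACTED]");
}
```

The allowlist approach is deliberately fail-closed: a new sensitive field added to the claim record never reaches the model unless someone explicitly adds it to the list.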

Common Pitfalls

  • Using the LLM as the final decision engine

    Don’t let the model approve or deny claims directly. Use it for extraction and summarization; keep eligibility logic in deterministic code owned by compliance teams.

  • Skipping schema validation

    If you accept raw JSON without zod validation or equivalent checks, one malformed response can poison downstream processing. Always parse strictly before routing.

  • Ignoring exception handling for partial packets

    Pension claims are often incomplete on first submission. Design for missing documents by returning a clear checklist instead of failing the whole workflow.
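The article's extraction chain uses zod for this; as an equivalent check in plain TypeScript, a structural guard can reject malformed model output before it reaches routing. This is a minimal sketch covering a subset of the claim fields:

```typescript
// Subset of the claim shape, enough to demonstrate strict parsing.
type RawClaim = {
  claimId: string;
  memberId: string;
  missingDocuments: string[];
};

// Equivalent in spirit to ClaimSchema.parse: return null on any
// malformed input instead of letting it poison downstream routing.
function parseClaimStrict(raw: string): RawClaim | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // not even valid JSON
  }
  if (typeof data !== "object" || data === null) return null;
  const obj = data as Record<string, unknown>;
  if (typeof obj.claimId !== "string") return null;
  if (typeof obj.memberId !== "string") return null;
  if (
    !Array.isArray(obj.missingDocuments) ||
    !obj.missingDocuments.every((d) => typeof d === "string")
  ) {
    return null;
  }
  return obj as RawClaim;
}
```

A null result should route the packet to human review with the raw response attached, never to auto-processing.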

If you build it this way, you get an agent that helps operations move faster without turning pension administration into an opaque chatbot problem. The key is simple: LLMs handle language; your code handles policy.


By Cyprian Aarons, AI Consultant at Topiax.
