How to Build a Claims Processing Agent Using LlamaIndex in TypeScript for Lending

By Cyprian Aarons · Updated 2026-04-21

Tags: claims-processing, llamaindex, typescript, lending

A claims processing agent for lending handles borrower disputes, payment claims, fee reversals, hardship requests, and document-driven exceptions. It matters because these workflows are expensive when humans do every lookup manually, and risky when the system misses policy rules, compliance constraints, or audit trails.

Architecture

  • Ingress layer

    • Receives claim payloads from a case management API, web form, or back-office queue.
    • Normalizes fields like loanId, claimType, jurisdiction, customerId, and attached documents.
  • Document retrieval layer

    • Uses LlamaIndex to search policy docs, lending terms, servicing playbooks, and regulatory guidance.
    • Pulls only the minimum relevant context for the claim.
  • Decision orchestration layer

    • An agent routes the claim through classification, evidence lookup, and response drafting.
    • Keeps the workflow deterministic where it matters: eligibility checks, deadlines, and policy references.
  • Compliance guardrail layer

    • Enforces loan-product rules, data residency restrictions, and PII redaction.
    • Blocks unsupported decisions and escalates ambiguous cases to a human reviewer.
  • Audit logging layer

    • Stores retrieved sources, model output, timestamps, and final disposition.
    • This is non-negotiable in lending because every recommendation needs traceability.
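As a concrete sketch of what the ingress layer hands to the rest of the pipeline, a normalized claim might look like the following. The raw field names (`loan_id`, `claim_type`, and so on) are assumptions about your case-management API, not part of LlamaIndex:

```typescript
type NormalizedClaim = {
  loanId: string;
  claimType: string;
  jurisdiction: string;
  customerId: string;
  documents: string[]; // attachment references, never raw document content
};

// Hypothetical normalizer for a raw case-management payload. Defaults are
// illustrative; your ingress layer should reject, not default, required fields.
function normalizeClaim(raw: Record<string, unknown>): NormalizedClaim {
  const docsRaw = raw["documents"];
  return {
    loanId: String(raw["loan_id"] ?? ""),
    claimType: String(raw["claim_type"] ?? "unknown"),
    jurisdiction: String(raw["jurisdiction"] ?? "US"),
    customerId: String(raw["customer_id"] ?? ""),
    documents: Array.isArray(docsRaw) ? docsRaw.map(String) : [],
  };
}
```

Keeping this shape stable at the boundary means every downstream layer, including the audit log, sees the same field names.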

Implementation

1) Install dependencies and set up your index

Use LlamaIndex’s TypeScript package with a vector store-backed index. For lending use cases, keep your policy corpus separate from customer records so you can control access boundaries.

npm install llamaindex dotenv

import "dotenv/config";
import {
  Document,
  VectorStoreIndex,
  OpenAIEmbedding,
} from "llamaindex";

async function buildIndex() {
  const docs = [
    new Document({
      text: `
        Claim handling policy:
        - Payment reversal requests must be filed within 30 days.
        - Fee disputes require evidence of servicing error.
        - Hardship claims must be escalated if delinquency is over 60 days.
      `,
      metadata: { source: "policy_claims_001", jurisdiction: "US" },
    }),
    new Document({
      text: `
        Lending servicing guide:
        - Always verify loan status before approving any adjustment.
        - Never expose full SSN or bank account numbers in responses.
      `,
      metadata: { source: "servicing_guide_014", jurisdiction: "US" },
    }),
  ];

  const embedModel = new OpenAIEmbedding({
    model: "text-embedding-3-small",
  });

  return await VectorStoreIndex.fromDocuments(docs, {
    embedModel,
  });
}

2) Create a retriever that only returns relevant policy context

For claims processing, you want grounded answers. The agent should cite internal policy passages instead of inventing rules from the prompt.

import { QueryEngineTool } from "llamaindex";

async function createPolicyTool() {
  const index = await buildIndex();
  const queryEngine = index.asQueryEngine({
    similarityTopK: 3,
  });

  return new QueryEngineTool({
    queryEngine,
    metadata: {
      name: "policy_lookup",
      description: "Searches lending claims policies and servicing guidance",
    },
  });
}

3) Build an agent that classifies the claim and drafts a decision

This pattern uses OpenAIAgent with tools. The model can inspect the claim details, retrieve policy context, then produce a structured response. In production you would wrap this with schema validation before persisting anything.

import {
  OpenAI,
  OpenAIAgent,
} from "llamaindex";

type ClaimInput = {
  claimId: string;
  loanId: string;
  claimType: "payment_reversal" | "fee_dispute" | "hardship_request";
  jurisdiction: string;
  summary: string;
};

async function runClaimAgent(input: ClaimInput) {
  const tool = await createPolicyTool();

  const llm = new OpenAI({
    model: "gpt-4o-mini",
    temperature: 0,
  });

  const agent = new OpenAIAgent({
    tools: [tool],
    llm,
    systemPrompt: `
      You are a lending claims assistant.
      Use only retrieved policy context for decisions.
      If the claim is ambiguous or requires judgment beyond policy text,
      escalate to a human reviewer.
      Do not reveal PII. Return concise reasoning with cited policy references.
    `,
  });

  const response = await agent.chat({
    message: `
      Claim ID: ${input.claimId}
      Loan ID: ${input.loanId}
      Claim Type: ${input.claimType}
      Jurisdiction: ${input.jurisdiction}
      Summary: ${input.summary}

      Determine whether this should be approved, denied, or escalated.
    `,
  });

  return response.response;
}

4) Add deterministic post-processing for lending controls

Do not let raw model output hit your case system. Parse it into a fixed shape and enforce business rules after generation.

type Decision = {
  outcome: "approve" | "deny" | "escalate";
  reason: string;
};

function enforceControls(resultText: string): Decision {
  // Keyword matching is a deliberately blunt instrument here: anything that
  // is not an unambiguous approve or deny falls through to human escalation.
  const text = resultText.toLowerCase();

  if (text.includes("approve")) {
    return { outcome: "approve", reason: resultText };
  }

  if (text.includes("deny")) {
    return { outcome: "deny", reason: resultText };
  }

  return { outcome: "escalate", reason: resultText };
}
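Hard policy rules belong in code, not in model output. As a sketch, the 60-day hardship escalation rule from the sample policy document can be enforced deterministically after parsing. `ControlContext` and its `delinquencyDays` field are hypothetical; the value would come from your loan-servicing system, never from the model:

```typescript
// Same shape as the Decision type above, repeated so the snippet stands alone.
type Decision = {
  outcome: "approve" | "deny" | "escalate";
  reason: string;
};

type ControlContext = {
  claimType: string;
  delinquencyDays: number; // assumed to come from the servicing system
};

// Deterministic override: rules that must never depend on model output.
function applyHardRules(decision: Decision, ctx: ControlContext): Decision {
  if (ctx.claimType === "hardship_request" && ctx.delinquencyDays > 60) {
    return {
      outcome: "escalate",
      reason:
        "Policy: hardship claims over 60 days delinquent require human review. " +
        decision.reason,
    };
  }
  return decision;
}
```

Because this runs after generation, a confused model can never talk its way past the rule.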

Production Considerations

  • Data residency

    • Keep customer data in-region if your lending stack has jurisdictional requirements.
    • Separate policy indexes from borrower records so retrieval does not cross boundaries accidentally.
  • Auditability

    • Log the exact retrieved chunks used for each recommendation.
    • Store claimId, tool outputs, model version, prompt version, and final disposition.
  • Guardrails

    • Redact account numbers, SSNs, addresses, and bank details before sending text to the model.
    • Force escalation when confidence is low or when the claim touches regulated exceptions like adverse action or debt collection disputes.
  • Monitoring

    • Track approval/denial/escalation rates by claim type and jurisdiction.
    • Watch for retrieval failures, hallucinated citations, and spikes in human overrides.
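A minimal redaction pass for the guardrail layer might look like the sketch below. The two regexes are illustrative only; a real deployment should rely on a vetted PII-detection library or service rather than hand-written patterns:

```typescript
// Illustrative redaction sketch: mask SSN-shaped and long digit runs before
// any text reaches the model. Not a substitute for proper PII detection.
function redactPII(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN REDACTED]")
    .replace(/\b\d{9,17}\b/g, "[ACCOUNT REDACTED]");
}
```

Run this on the claim summary and any extracted document text at the ingress boundary, so nothing downstream ever holds the raw values.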

Common Pitfalls

  • Using the agent as a decision engine instead of a decision assistant

    • Fix it by making the agent draft recommendations while hard rules live in code or workflow engines.
  • Mixing customer data with policy documents in one index

    • Fix it by separating corpora and applying access controls at retrieval time.
  • Skipping structured output validation

    • Fix it by parsing into a strict schema before writing to downstream systems. Free-form text is fine for review notes; it is not fine for production state transitions.
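A schema library such as zod is the usual choice for that parsing step; the sketch below uses a hand-rolled type guard instead so it has no dependencies. Anything that fails validation becomes an escalation, never a silent approve or deny:

```typescript
type Decision = {
  outcome: "approve" | "deny" | "escalate";
  reason: string;
};

const OUTCOMES = ["approve", "deny", "escalate"];

// Strict parse of untrusted model output: malformed data cannot trigger a
// state transition, only a human-review escalation.
function parseDecision(raw: unknown): Decision {
  if (raw !== null && typeof raw === "object") {
    const obj = raw as Record<string, unknown>;
    if (
      typeof obj.outcome === "string" &&
      OUTCOMES.includes(obj.outcome) &&
      typeof obj.reason === "string"
    ) {
      return { outcome: obj.outcome as Decision["outcome"], reason: obj.reason };
    }
  }
  return { outcome: "escalate", reason: "Unparseable model output" };
}
```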

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
