How to Build a Claims Processing Agent Using LlamaIndex in TypeScript for Pension Funds
A claims processing agent for pension funds reads incoming claim packets, extracts the relevant facts, checks them against policy and eligibility rules, and drafts a decision package for human review. It matters because pension claims are high-stakes: errors hit retirees directly, compliance teams need a traceable audit trail, and data handling has to respect residency and confidentiality constraints.
Architecture
Build this agent as a small set of focused components:
- Ingestion layer
  - Accepts PDFs, scanned forms, emails, and structured claim metadata.
  - Normalizes everything into text plus metadata like claimant ID, jurisdiction, and submission date.
- Document index
  - Stores plan documents, eligibility rules, benefit schedules, and prior claim decisions.
  - Use LlamaIndex to retrieve the exact policy passages used in a recommendation.
- Claim extraction workflow
  - Pulls out fields like member name, retirement date, years of service, beneficiary details, and requested benefit type.
  - Keeps extracted values tied to source citations for audit.
- Rules and retrieval layer
  - Combines retrieval over plan documents with deterministic checks for pension-specific rules.
  - This is where vesting periods, early retirement penalties, survivor benefit rules, and required evidence are validated.
- Decision draft generator
  - Produces a structured recommendation: approve, reject, or request more information.
  - Must include cited evidence and confidence notes for human adjudicators.
- Audit and logging layer
  - Persists prompts, retrieved chunks, outputs, and final human decisions.
  - Required for compliance reviews and dispute handling.
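Before wiring anything up, it helps to pin down the shapes that flow between these components. A minimal sketch in TypeScript; every field name here is an illustrative assumption, not a fixed schema:

```typescript
// Illustrative shapes for what flows between the components above.
// Field names are assumptions, not a fixed schema.

interface ClaimPacket {
  claimantId: string;
  jurisdiction: string; // drives which rulebook and residency rules apply
  submittedAt: string;  // ISO date
  rawText: string;      // normalized text from PDFs, scans, and emails
}

interface DecisionPackage {
  decision: "approve" | "reject" | "needs_more_info";
  rationale: string;
  citedSources: string[];       // policy passages backing the recommendation
  missingInformation: string[]; // evidence still required from the claimant
}

// End to end, the agent is a function from packet to reviewable draft.
type ClaimPipeline = (packet: ClaimPacket) => Promise<DecisionPackage>;
```

Keeping the decision as a typed object rather than free text is what makes the audit and review gates later in this article enforceable.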
Implementation
- Install LlamaIndex for TypeScript and set up the document index

Use the llamaindex package plus a provider integration such as @llamaindex/openai. For a production setup in a pension environment, keep plan documents in a controlled corpus with explicit metadata.
```typescript
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { OpenAI, OpenAIEmbedding } from "@llamaindex/openai";

Settings.llm = new OpenAI({ model: "gpt-4o-mini" });
Settings.embedModel = new OpenAIEmbedding({ model: "text-embedding-3-small" });

const docs = [
  new Document({
    text: `
Pension Plan Rulebook v3:
A member is eligible for normal retirement benefits at age 65
or age 60 with at least 30 years of service.
Early retirement applies a reduction of 4% per year before age 65.
`,
    metadata: { source: "rulebook_v3", jurisdiction: "UK" },
  }),
  new Document({
    text: `
Survivor benefit requires certified death certificate
and beneficiary identity verification before payment release.
`,
    metadata: { source: "benefits_policy", jurisdiction: "UK" },
  }),
];

// Build an in-memory vector index over the policy corpus.
const index = await VectorStoreIndex.fromDocuments(docs);
const retriever = index.asRetriever({ similarityTopK: 3 });
```
- Extract claim facts into a structured object
Keep extraction separate from adjudication. That gives you better auditability and makes it easier to swap models without changing the decision logic.
```typescript
type ClaimFacts = {
  claimantName: string;
  memberAge: number;
  yearsOfService: number;
  claimType: "retirement" | "survivor" | "disability";
};

async function extractClaimFacts(rawClaimText: string): Promise<ClaimFacts> {
  const prompt = `
Extract these fields from the claim:
claimantName, memberAge, yearsOfService, claimType.
Return JSON only.

Claim:
${rawClaimText}
`;

  // LLM.complete takes a params object in LlamaIndex.TS.
  const response = await Settings.llm.complete({ prompt });
  return JSON.parse(response.text) as ClaimFacts;
}
```
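Calling JSON.parse on raw model output is fragile: models often wrap JSON in markdown fences or add surrounding prose. A hedged sketch of a stricter parser; the fence-stripping heuristic and field checks are illustrative assumptions, not an exhaustive validator:

```typescript
// Extract and validate the JSON object from raw model output.
// Mirrors the ClaimFacts shape; repeated here so the sketch stands alone.
function parseClaimFactsJson(raw: string): {
  claimantName: string;
  memberAge: number;
  yearsOfService: number;
  claimType: "retirement" | "survivor" | "disability";
} {
  // Drop markdown code fences, then take the outermost {...} span.
  const stripped = raw.replace(/```(?:json)?/g, "").trim();
  const start = stripped.indexOf("{");
  const end = stripped.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model output");
  }
  const parsed = JSON.parse(stripped.slice(start, end + 1));

  const claimTypes = ["retirement", "survivor", "disability"];
  if (
    typeof parsed.claimantName !== "string" ||
    typeof parsed.memberAge !== "number" ||
    typeof parsed.yearsOfService !== "number" ||
    !claimTypes.includes(parsed.claimType)
  ) {
    throw new Error("Extracted claim facts failed validation");
  }
  return parsed;
}
```

extractClaimFacts can call this instead of JSON.parse directly, so malformed extractions fail loudly before they ever reach adjudication.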
- Retrieve policy evidence and draft the decision
This is the core pattern. Retrieve only the policy passages needed for the claim type, then ask the model to produce a decision package with citations. In pension workflows, that decision package should be reviewable by operations staff without reading raw prompts.
```typescript
import { MetadataMode } from "llamaindex";

async function processClaim(rawClaimText: string) {
  const facts = await extractClaimFacts(rawClaimText);

  const query =
    facts.claimType === "retirement"
      ? `normal retirement eligibility early retirement reduction service requirements`
      : `beneficiary identity verification death certificate survivor benefit`;

  const nodes = await retriever.retrieve({ query });
  const context = nodes
    .map(
      (n) =>
        `SOURCE: ${n.node.metadata?.source}\nTEXT: ${n.node.getContent(MetadataMode.NONE)}`,
    )
    .join("\n\n");

  const decisionPrompt = `
You are a claims processing assistant for a pension fund.
Use only the provided policy context.

Claim facts:
${JSON.stringify(facts, null, 2)}

Policy context:
${context}

Return JSON with:
- decision: approve | reject | needs_more_info
- rationale
- cited_sources
- missing_information
`;

  const result = await Settings.llm.complete({ prompt: decisionPrompt });
  return JSON.parse(result.text);
}

const sampleClaim = `
John Mercer is applying for normal retirement benefits.
He is age 64 with 31 years of service.
`;

console.log(await processClaim(sampleClaim));
```
- Add human review gates before final action
Do not let the agent post directly to payment systems. For pension funds, every non-trivial outcome should go through an adjudicator when confidence is low or when legal thresholds are involved.
```typescript
type DraftDecision = {
  decision: "approve" | "reject" | "needs_more_info";
  cited_sources?: string[];
};

function requiresHumanReview(decision: DraftDecision): boolean {
  if (decision.decision === "needs_more_info") return true;
  // Check the missing case explicitly: `undefined < 1` is false in JS, so
  // `decision.cited_sources?.length < 1` would let uncited output through.
  if (!decision.cited_sources || decision.cited_sources.length < 1) return true;
  return false;
}
```
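The gate decides whether a human must look; it still leaves open where the draft goes next. One possible routing sketch; the queue names, and the rule that even cited approvals pass through a faster human queue, are assumptions about the fund's process rather than fixed requirements:

```typescript
type Route = "light_touch_queue" | "full_adjudication" | "request_documents";

// Route a draft decision given the outcome of the human-review gate.
// No route ever posts to payment systems directly.
function routeDecision(
  decision: "approve" | "reject" | "needs_more_info",
  needsReview: boolean,
): Route {
  if (decision === "needs_more_info") return "request_documents";
  // Rejections and anything flagged by the gate get a full adjudication pass.
  if (decision === "reject" || needsReview) return "full_adjudication";
  // Cited, confident approvals still pass a human, just a faster queue.
  return "light_touch_queue";
}
```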
Production Considerations
- Data residency
  - Keep indexes inside the required jurisdiction if plan rules or member data cannot leave region boundaries.
  - If you use hosted models or embeddings, verify where prompts and vectors are processed and stored.
- Auditability
  - Persist retrieved nodes, final prompts, model outputs, and operator overrides.
  - Pension disputes often hinge on “why was this claim approved or denied,” so citations are not optional.
- Guardrails
  - Enforce deterministic checks for age thresholds, service periods, document completeness, and identity verification before any model-generated recommendation is accepted.
  - Treat the LLM as an assistant to policy interpretation, not as the source of truth.
- Monitoring
  - Track rejection rates by claim type, missing-document frequency, average time-to-decision, and human override rates.
  - Sudden changes usually mean either prompt drift or upstream document quality issues.
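The auditability point implies a concrete record per model call. A sketch of one possible shape; the field names are assumptions, and the in-memory array stands in for what would be an append-only store in production:

```typescript
// One record per model call, written before the draft is surfaced to anyone.
interface AuditRecord {
  claimId: string;
  timestamp: string;           // ISO timestamp of the call
  retrievedSources: string[];  // metadata.source of each retrieved chunk
  prompt: string;              // final prompt as sent (redact before storage)
  modelOutput: string;         // raw model response
  humanDecision?: "upheld" | "overridden";
}

const auditLog: AuditRecord[] = [];

function appendAuditRecord(record: AuditRecord): void {
  // In production: append-only storage with retention rules, not an array.
  auditLog.push(record);
}
```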
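The deterministic guardrail checks belong in plain code, not prompts. A sketch using the thresholds from the illustrative rulebook earlier in this article (age 65, or 60 with 30 years of service, and a 4% per-year early-retirement reduction); real plans will differ:

```typescript
// Eligibility arithmetic mirroring the sample rulebook text.
function isEligibleForNormalRetirement(age: number, yearsOfService: number): boolean {
  return age >= 65 || (age >= 60 && yearsOfService >= 30);
}

// Benefit multiplier after the 4%-per-year early-retirement reduction.
function earlyRetirementFactor(age: number): number {
  const yearsEarly = Math.max(0, 65 - age);
  return Math.max(0, 1 - 0.04 * yearsEarly);
}
```

Run these before accepting any model-generated recommendation: if the arithmetic disagrees with the draft decision, the claim goes straight to a human.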
Common Pitfalls
- Using retrieval alone for eligibility: retrieval can surface relevant policy text, but it will not reliably compute service credits or apply date arithmetic. Put hard business rules in code.
- Skipping provenance on outputs: if your agent returns “approve” without citing which rulebook section it used, your operations team will end up re-reading documents manually. Always attach source metadata to every recommendation.
- Mixing personal data into broad prompts: pension claims contain sensitive identifiers and family information. Minimize prompt scope by sending only the fields needed for that step, then redact logs before storage.
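The last pitfall suggests two mechanical habits: send each step only the fields it needs, and redact identifiers before anything is logged. A sketch; the NI-number pattern and the long-digit-run heuristic are illustrative assumptions, not a complete PII policy:

```typescript
// Keep only the fields a given pipeline step actually needs.
function pickFields<T extends Record<string, unknown>>(
  record: T,
  allowed: (keyof T)[],
): Partial<T> {
  const out: Partial<T> = {};
  for (const key of allowed) {
    if (key in record) out[key] = record[key];
  }
  return out;
}

// Redact obvious identifiers before a prompt or output hits the audit log.
function redactForLogs(text: string): string {
  return text
    // UK National Insurance number shape, e.g. QQ123456C (illustrative)
    .replace(/\b[A-Z]{2}\d{6}[A-Z]\b/g, "[NI_REDACTED]")
    // Long digit runs that could be account or reference numbers
    .replace(/\b\d{8,}\b/g, "[NUM_REDACTED]");
}
```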
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.