How to Build an Underwriting Agent for Pension Funds Using LlamaIndex in TypeScript
An underwriting agent for pension funds takes member, employer, scheme, and market data, then produces a risk-informed recommendation: approve, reject, request more evidence, or route to a human underwriter. For pension funds, that matters because decisions are constrained by compliance, auditability, data residency, and the need to treat member outcomes consistently.
Architecture
- **Document ingestion layer**
  - Pulls PDFs, policy docs, trustee minutes, actuarial reports, and scheme rules into a searchable index.
  - Normalizes scanned documents with OCR before indexing.
- **Policy and rules retrieval**
  - Uses LlamaIndex retrieval to fetch the exact clauses that govern underwriting decisions.
  - Keeps scheme-specific rules separate from general underwriting guidance.
- **Decision engine**
  - Combines retrieved context with structured member inputs.
  - Produces a recommendation plus rationale and confidence signals.
- **Audit trail store**
  - Persists the input payload, retrieved chunks, model response, and final decision.
  - Required for internal review and regulator-facing evidence.
- **Human review queue**
  - Routes low-confidence or high-risk cases to an underwriter.
  - Prevents the agent from making unsupported exceptions.
- **Compliance guardrails**
  - Enforces redaction, jurisdiction checks, and data residency constraints before any model call.
  - Blocks disallowed fields from leaving approved regions.
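The components above can be sketched as plain TypeScript shapes before any LlamaIndex code is written. Everything here is illustrative: the interface names, the confidence threshold, and the routing rule are assumptions, not LlamaIndex APIs.

```ts
// Illustrative shapes for the pipeline stages; names and the 0.8
// threshold are assumptions chosen for this sketch.
type CaseOutcome = "approve" | "reject" | "request_evidence" | "manual_review";

interface AuditRecord {
  caseId: string;
  inputPayload: unknown;     // schema-validated underwriting input
  retrievedChunks: string[]; // policy text that was fed to the model
  modelResponse: string;     // raw model output, stored verbatim
  finalOutcome: CaseOutcome;
  decidedAt: string;         // ISO timestamp
}

// Human review queue rule: low confidence or high risk never auto-completes.
function routeCase(
  confidence: number,
  riskTier: "low" | "medium" | "high",
): "auto" | "human" {
  if (riskTier === "high" || confidence < 0.8) return "human";
  return "auto";
}
```

Keeping the routing rule as a pure function makes it trivial to unit-test and to cite in an audit: the model never decides whether it gets to decide.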
Implementation
1) Install the LlamaIndex TypeScript packages
Use the TypeScript SDK and a chat model provider. For production work in pension funds, keep your dependencies explicit and pin versions.
```bash
npm install llamaindex zod
```
Set your model key in the environment:
```bash
export OPENAI_API_KEY="your-key"
```
2) Build a schema for underwriting input
Keep the agent on structured inputs. Pension underwriting is not free-form chat; it is a controlled workflow with traceable fields.
```ts
import { z } from "zod";

export const UnderwritingInputSchema = z.object({
  schemeId: z.string(),
  employerName: z.string(),
  memberAge: z.number().int().min(18).max(120),
  contributionHistoryMonths: z.number().int().min(0),
  salaryBand: z.enum(["low", "medium", "high"]),
  medicalDisclosureProvided: z.boolean(),
  countryOfResidence: z.string(),
});

export type UnderwritingInput = z.infer<typeof UnderwritingInputSchema>;
```
3) Create a retrieval index over scheme rules
This example uses VectorStoreIndex, Document, Settings, and OpenAI. The pattern is simple: load policy text once, query it at runtime, then feed the retrieved context into the decision prompt.
```ts
import { Document, VectorStoreIndex, Settings, OpenAI } from "llamaindex";

Settings.llm = new OpenAI({ model: "gpt-4o-mini" });

const policyDocs = [
  new Document({
    text: `
Scheme A underwriting rules:
- Medical disclosures are mandatory for members over age 55.
- Employers with less than 12 months contribution history require manual review.
- High salary band cases require evidence of source-of-funds checks.
- Do not store personal health data outside approved EU regions.
`,
    metadata: { schemeId: "scheme-a", docType: "policy" },
  }),
];

// Exported so the decision step in the next section can import it.
export async function buildPolicyIndex() {
  return await VectorStoreIndex.fromDocuments(policyDocs);
}

export async function getPolicyContext(
  index: VectorStoreIndex,
  schemeId: string,
): Promise<string> {
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: `Return only underwriting rules relevant to ${schemeId}. Focus on mandatory checks and escalation criteria.`,
  });
  return response.toString();
}
```
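One practical note: the sketch above rebuilds the index from scratch, which is fine for a demo but wasteful per request. A small per-scheme cache fixes that. The helper below is plain TypeScript, not a LlamaIndex feature; `getIndexForScheme` and its string return value are stand-ins for a real loader that would call `VectorStoreIndex.fromDocuments` with that scheme's documents.

```ts
// Generic once-per-key async cache. Storing the promise (not the value)
// means concurrent calls for the same scheme share one in-flight build.
function memoizeAsync<T>(factory: (key: string) => Promise<T>) {
  const cache = new Map<string, Promise<T>>();
  return (key: string): Promise<T> => {
    let hit = cache.get(key);
    if (!hit) {
      hit = factory(key);
      cache.set(key, hit);
    }
    return hit;
  };
}

let builds = 0;
const getIndexForScheme = memoizeAsync(async (schemeId: string) => {
  builds += 1; // in real code: load that scheme's docs and build a VectorStoreIndex
  return `index-for-${schemeId}`;
});
```

Per-scheme keys also line up with the partitioning advice later in this article: one cached index per scheme keeps retrieval precise.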
4) Run the underwriting decision with an explicit prompt
The key pattern is not “ask the model what it thinks.” It is “give it policy context plus structured facts, then force a constrained output.” Use index.asQueryEngine() for retrieval and Settings.llm.complete() for the final decision step.
```ts
import { Settings } from "llamaindex";
import { UnderwritingInputSchema } from "./schema";
import { buildPolicyIndex, getPolicyContext } from "./policy-index";

type Decision = {
  outcome: "approve" | "reject" | "manual_review";
  rationale: string;
};

async function underwrite(inputRaw: unknown): Promise<Decision> {
  // Validate before anything else happens; throws on malformed input.
  const input = UnderwritingInputSchema.parse(inputRaw);

  // Hard residency guardrail: runs before any model call.
  if (input.countryOfResidence !== "IE" && input.countryOfResidence !== "DE") {
    return {
      outcome: "manual_review",
      rationale:
        "Cross-border data residency check required before automated processing.",
    };
  }

  const index = await buildPolicyIndex();
  const policyContext = await getPolicyContext(index, input.schemeId);

  const prompt = `
You are an underwriting assistant for a pension fund.
Use only the policy context below and the structured case data.
If information is missing or policy requires escalation, choose manual_review.

POLICY CONTEXT:
${policyContext}

CASE DATA:
${JSON.stringify(input, null, 2)}

Return JSON with keys:
- outcome: approve | reject | manual_review
- rationale: short explanation referencing policy rules
`;

  const response = await Settings.llm.complete({ prompt });
  return JSON.parse(response.text.trim()) as Decision;
}

async function main() {
  const result = await underwrite({
    schemeId: "scheme-a",
    employerName: "Northwind Services",
    memberAge: 58,
    contributionHistoryMonths: 8,
    salaryBand: "high",
    medicalDisclosureProvided: false,
    countryOfResidence: "IE",
  });
  console.log(result);
}

main().catch(console.error);
```
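A bare `JSON.parse` on model output is the fragile part of the code above: models frequently wrap JSON in markdown fences or return an outcome outside the allowed set. A defensive parser, sketched below as a standalone function under those assumptions, fails closed to `manual_review` so a malformed response can never auto-approve a case.

```ts
type Decision = {
  outcome: "approve" | "reject" | "manual_review";
  rationale: string;
};

// Strip optional ```json fences, parse, and validate the outcome enum.
// Anything malformed falls back to manual_review (fail closed).
function parseDecision(raw: string): Decision {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "");
  try {
    const parsed = JSON.parse(cleaned);
    const outcomes = ["approve", "reject", "manual_review"];
    if (outcomes.includes(parsed.outcome) && typeof parsed.rationale === "string") {
      return { outcome: parsed.outcome, rationale: parsed.rationale };
    }
  } catch {
    // fall through to the safe default below
  }
  return { outcome: "manual_review", rationale: "Model output failed validation." };
}
```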
Production Considerations
- **Data residency**
  - Keep indexing and inference in approved regions only.
  - If member health or financial data crosses borders, force manual review or use regional deployment boundaries.
- **Auditability**
  - Persist every request with schema-valid input, retrieved policy snippets, model output, timestamp, and operator identity.
  - Store immutable records so compliance can reconstruct why a decision was made.
- **Monitoring**
  - Track approval rates by scheme, escalation rates, latency, and prompt failure rate.
  - Alert on drift when one scheme starts producing materially different outcomes from historical baselines.
- **Guardrails**
  - Redact personal health details unless they are explicitly needed for the case.
  - Block unsupported recommendations like "approve because likely low risk" when policy requires documented evidence.
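One lightweight way to get tamper-evident audit records, sketched here as an assumption rather than a prescribed design, is a hash chain: each entry's hash covers the previous entry's hash, so any later edit to a stored record breaks the chain on replay.

```ts
import { createHash } from "node:crypto";

interface AuditEntry {
  payload: string; // JSON of input, retrieved chunks, model output, outcome
  prevHash: string;
  hash: string;
}

// Append a record whose hash commits to everything before it.
function appendAudit(log: AuditEntry[], payload: string): AuditEntry[] {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "genesis";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  return [...log, { payload, prevHash, hash }];
}

// Recompute every hash; a single edited payload invalidates the chain.
function verifyChain(log: AuditEntry[]): boolean {
  return log.every((entry, i) => {
    const prev = i === 0 ? "genesis" : log[i - 1].hash;
    return (
      entry.prevHash === prev &&
      entry.hash === createHash("sha256").update(prev + entry.payload).digest("hex")
    );
  });
}
```

In production the entries would live in append-only storage (WORM buckets, ledger tables); the chain check then becomes a periodic compliance job.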
Common Pitfalls
- **Using open-ended prompts instead of structured cases**
  - This causes inconsistent outputs and weak audit trails.
  - Fix it by validating inputs with `zod` and forcing JSON output.
- **Mixing all schemes into one undifferentiated index**
  - Pension funds usually have scheme-specific rules.
  - Fix it by partitioning indexes per scheme or per jurisdiction so retrieval stays precise.
- **Letting the model decide outside policy boundaries**
  - The agent should not invent exceptions or override mandatory checks.
  - Fix it by hard-coding escalation rules before any LLM call and requiring manual review when policy coverage is incomplete.
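The last fix deserves a concrete shape. The function below mirrors the Scheme A policy text from the example earlier (disclosure over 55, 12-month history, high salary band); the thresholds come from that sample text and would differ per real scheme. A non-empty return routes the case to a human before the model is ever called.

```ts
interface CaseFacts {
  memberAge: number;
  contributionHistoryMonths: number;
  salaryBand: "low" | "medium" | "high";
  medicalDisclosureProvided: boolean;
}

// Deterministic pre-checks that run before any LLM call, mirroring the
// sample Scheme A rules; real thresholds come from each scheme's policy.
function mandatoryEscalations(facts: CaseFacts): string[] {
  const reasons: string[] = [];
  if (facts.memberAge > 55 && !facts.medicalDisclosureProvided) {
    reasons.push("Medical disclosure mandatory for members over 55.");
  }
  if (facts.contributionHistoryMonths < 12) {
    reasons.push("Under 12 months contribution history requires manual review.");
  }
  if (facts.salaryBand === "high") {
    reasons.push("High salary band requires source-of-funds evidence.");
  }
  return reasons; // non-empty means route to a human, skip the model
}
```

Note that the worked example case earlier in the article (age 58, no disclosure, 8 months history, high band) trips all three rules, which is exactly why that run should end in manual review regardless of what the model says.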
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.