How to Build an Underwriting Agent for Pension Funds Using LangChain in TypeScript
An underwriting agent for pension funds takes incoming member, employer, and scheme data, checks it against policy and regulatory rules, and produces a recommendation: accept, request more information, or escalate to a human reviewer. It matters because pension underwriting is not just risk scoring; it is compliance-heavy decision support where auditability, consistency, and data handling rules are non-negotiable.
Architecture
- Input ingestion layer
  - Accepts structured application data: employer details, contribution history, scheme type, jurisdiction, and supporting documents.
  - Normalizes fields before they reach the model.
- Policy and rules engine
  - Encodes pension-specific underwriting rules outside the model.
  - Handles hard constraints like missing KYC fields, sanctions flags, or residency restrictions.
- LangChain reasoning layer
  - Uses ChatOpenAI with a tool-calling agent to reason over the case.
  - Produces a structured recommendation instead of free-form text.
- Document retrieval layer
  - Uses MemoryVectorStore or a production vector DB to fetch policy excerpts, scheme rules, and internal underwriting guidance.
  - Keeps the model grounded in approved content.
- Audit logging layer
  - Stores inputs, retrieved documents, tool calls, final decision, and model version.
  - Supports internal audit and regulator review.
- Human escalation path
  - Routes edge cases to an underwriter when confidence is low or policy exceptions are detected.
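Taken together, the layers read as a typed pipeline over a case file. The sketch below is illustrative only; every name in it is an assumption for this article, not part of LangChain:

```ts
// Illustrative stage signatures for the layered architecture above.
// All names here are assumptions for the sketch, not a fixed API.
interface CaseFile {
  raw: unknown;          // ingested application data
  normalized?: object;   // set by the ingestion layer
  policyHits?: string[]; // excerpts from the retrieval layer
}

// Each layer is a function over the case file; the pipeline is composition.
type Layer = (c: CaseFile) => Promise<CaseFile>;

async function runPipeline(layers: Layer[], c: CaseFile): Promise<CaseFile> {
  for (const layer of layers) {
    c = await layer(c);
  }
  return c;
}
```

Modeling layers as plain async functions keeps each one independently testable and makes it easy to insert an audit-logging layer between any two stages.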
Implementation
1) Define the underwriting input and output shapes
Use explicit types. Pension workflows fail when you let raw JSON drift across services.
```ts
import { z } from "zod";

export const UnderwritingInputSchema = z.object({
  schemeName: z.string(),
  jurisdiction: z.string(),
  employerName: z.string(),
  employerCountry: z.string(),
  annualContributions: z.number(),
  memberCount: z.number(),
  kycComplete: z.boolean(),
  sanctionsScreened: z.boolean(),
  residencyConfirmed: z.boolean(),
});

export const UnderwritingDecisionSchema = z.object({
  decision: z.enum(["approve", "reject", "escalate"]),
  riskScore: z.number().min(0).max(100),
  reasons: z.array(z.string()),
  requiredActions: z.array(z.string()),
});

export type UnderwritingInput = z.infer<typeof UnderwritingInputSchema>;
export type UnderwritingDecision = z.infer<typeof UnderwritingDecisionSchema>;
```
2) Build a retrieval-backed agent with LangChain tools
This pattern keeps policy text out of prompts and makes decisions traceable. The agent can call a retrieval tool for approved underwriting guidance.
```ts
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { createOpenAIToolsAgent, AgentExecutor } from "langchain/agents";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";
import { z } from "zod";

const policyDocs = [
  new Document({
    pageContent:
      "Reject if KYC is incomplete or sanctions screening is false. Escalate if residency is unconfirmed.",
    metadata: { source: "pension-underwriting-policy", section: "controls" },
  }),
];

const vectorStore = await MemoryVectorStore.fromDocuments(
  policyDocs,
  new OpenAIEmbeddings()
);

const retrievePolicyTool = new DynamicStructuredTool({
  name: "retrieve_policy",
  description: "Fetch approved pension underwriting policy excerpts.",
  schema: z.object({ query: z.string() }),
  func: async ({ query }) => {
    const results = await vectorStore.similaritySearch(query, 3);
    return results.map((d) => `${d.metadata.source}: ${d.pageContent}`).join("\n");
  },
});

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

// The tools agent requires {input} and {agent_scratchpad} placeholders.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are an underwriting assistant for pension funds. Follow policy exactly."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createOpenAIToolsAgent({
  llm,
  tools: [retrievePolicyTool],
  prompt,
});
```
3) Add deterministic pre-checks before the LLM runs
Do not ask the model to discover hard compliance failures. Enforce them in code first.
```ts
function hardFailChecks(input: UnderwritingInput): UnderwritingDecision | null {
  if (!input.kycComplete) {
    return {
      decision: "reject",
      riskScore: 95,
      reasons: ["KYC incomplete"],
      requiredActions: ["Collect missing KYC documents"],
    };
  }
  if (!input.sanctionsScreened) {
    return {
      decision: "reject",
      riskScore: 100,
      reasons: ["Sanctions screening not completed"],
      requiredActions: ["Run sanctions screening before resubmission"],
    };
  }
  if (!input.residencyConfirmed) {
    return {
      decision: "escalate",
      riskScore: 70,
      reasons: ["Residency not confirmed"],
      requiredActions: ["Manual review for data residency and eligibility checks"],
    };
  }
  return null;
}
```
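The order of these checks matters: a case that fails several controls gets the first matching outcome. A trimmed, self-contained version of the logic makes that precedence easy to verify:

```ts
// Trimmed copy of the hard-fail logic above, just enough to show that
// checks run in a fixed order: KYC, then sanctions, then residency.
interface Controls {
  kycComplete: boolean;
  sanctionsScreened: boolean;
  residencyConfirmed: boolean;
}

type Outcome = "reject-kyc" | "reject-sanctions" | "escalate-residency" | null;

function firstFailure(c: Controls): Outcome {
  if (!c.kycComplete) return "reject-kyc";
  if (!c.sanctionsScreened) return "reject-sanctions";
  if (!c.residencyConfirmed) return "escalate-residency";
  return null;
}
```

A case failing both KYC and sanctions reports the KYC failure first; if you want all failures listed at once, collect them into an array instead of returning early.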
4) Execute the agent and force structured output
Use AgentExecutor for orchestration and then validate the result with Zod before writing anything downstream.
```ts
export async function underwritePensionCase(inputRaw: unknown): Promise<UnderwritingDecision> {
  const input = UnderwritingInputSchema.parse(inputRaw);

  // Deterministic compliance controls run before the model sees the case.
  const blocked = hardFailChecks(input);
  if (blocked) return blocked;

  const executor = new AgentExecutor({
    agent,
    tools: [retrievePolicyTool],
  });

  const result = await executor.invoke({
    input: `Underwrite this pension fund case:
scheme=${input.schemeName}
jurisdiction=${input.jurisdiction}
employer=${input.employerName}
employerCountry=${input.employerCountry}
annualContributions=${input.annualContributions}
memberCount=${input.memberCount}
kycComplete=${input.kycComplete}
sanctionsScreened=${input.sanctionsScreened}
residencyConfirmed=${input.residencyConfirmed}
Return only a JSON object with decision, riskScore, reasons, and requiredActions.`,
  });

  // The executor returns free text, so parse and validate before writing anything downstream.
  return UnderwritingDecisionSchema.parse(JSON.parse(result.output));
}
```
Production Considerations
- Deploy in-region. The agent will process personal and employer data tied to pension administration. Keep inference, logs, embeddings, and backups inside approved jurisdictions to satisfy data residency requirements.
- Log every decision path. Persist input hashes, retrieved policy snippets, model name, prompt version, tool calls, and final decision. For pension funds, audit trails matter as much as the answer itself.
- Separate hard controls from soft reasoning. KYC failures, sanctions mismatches, and residency issues should be deterministic code paths. The model should handle judgment calls only after those controls pass.
- Add human override thresholds. Escalate cases with low confidence scores, conflicting documents, or policy exceptions. Pension underwriting should never be fully autonomous on edge cases.
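For the logging point, a minimal audit record might look like the sketch below. The field names are assumptions for illustration, not a regulatory schema; hashing the input lets you prove which data produced a decision without storing personal data in the log itself:

```ts
import { createHash } from "node:crypto";

// Illustrative audit record for one underwriting decision.
interface AuditRecord {
  inputHash: string;          // SHA-256 of the raw input, not the data itself
  modelName: string;
  promptVersion: string;
  retrievedSources: string[]; // provenance of policy chunks the agent used
  decision: string;
  timestamp: string;
}

function buildAuditRecord(
  inputJson: string,
  decision: string,
  retrievedSources: string[]
): AuditRecord {
  return {
    inputHash: createHash("sha256").update(inputJson).digest("hex"),
    modelName: "gpt-4o-mini",
    promptVersion: "v1", // version prompts like code
    retrievedSources,
    decision,
    timestamp: new Date().toISOString(),
  };
}
```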
Common Pitfalls
- Letting the model decide on mandatory controls. If KYC or sanctions screening are missing, reject or escalate in code. Do not rely on prompt instructions to enforce compliance.
- Using unbounded free-text outputs. Free-form answers are hard to validate and impossible to audit cleanly. Force structured output with Zod schemas and reject malformed responses.
- Ignoring document provenance. If you retrieve policy content without metadata like source version and section ID, audits get messy fast. Store provenance with every retrieved chunk and every final recommendation.
By Cyprian Aarons, AI Consultant at Topiax.