How to Build an Underwriting Agent for Pension Funds Using LangGraph in TypeScript
An underwriting agent for pension funds automates the first pass on member, employer, and contribution data to decide whether an application is complete, compliant, and within policy. It matters because pension underwriting is not just a scoring problem; it is a regulated decision workflow where auditability, data residency, and policy consistency matter as much as accuracy.
Architecture
A production underwriting agent for pension funds usually needs these components:
- Input normalizer
  - Cleans up application payloads from portals, brokers, or batch uploads.
  - Maps inconsistent field names into a canonical schema.
- Policy rules engine
  - Encodes pension fund-specific checks like eligibility, contribution limits, employer status, and jurisdiction rules.
  - Keeps deterministic decisions out of the LLM.
- Document verifier
  - Checks supporting documents such as identity proofs, payroll extracts, trust deeds, and signed forms.
  - Flags missing or stale documents before any recommendation is made.
- LLM reasoning node
  - Summarizes exceptions, explains why an application needs manual review, and drafts a decision memo.
  - Must only operate on already-filtered data.
- Human review handoff
  - Routes borderline or high-risk cases to an underwriter.
  - Preserves a full trace of what the agent saw and why it escalated.
- Audit logger
  - Stores inputs, rule outcomes, model outputs, and final decisions.
  - Required for compliance reviews and internal model governance.
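The input normalizer is mostly mechanical mapping work. Here is a minimal sketch, assuming hypothetical source field names such as `member_ref` and `annual_contrib`; a real alias map would be driven by your actual intake formats:

```typescript
// Alias map from source field names to the canonical schema used later in
// this article. The keys below are illustrative; portals and brokers differ.
const FIELD_ALIASES: Record<string, string> = {
  member_ref: "applicantId",
  applicant_id: "applicantId",
  fund: "fundId",
  annual_contrib: "annualContribution",
  employer_registered: "employerRegistered",
};

const normalize = (raw: Record<string, unknown>): Record<string, unknown> => {
  const canonical: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(raw)) {
    // Unknown keys pass through unchanged so nothing is silently dropped.
    canonical[FIELD_ALIASES[key] ?? key] = value;
  }
  return canonical;
};
```

Keeping the alias map as data rather than branching code makes it easy to add a new broker format without touching the pipeline.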
Implementation
1. Define the state and the graph shape
For this workflow, use a typed state that carries the application data, rule results, document status, and final recommendation. LangGraph works well here because each step is explicit and inspectable.
```typescript
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

type Application = {
  applicantId: string;
  fundId: string;
  jurisdiction: "ZA" | "UK" | "EU";
  annualContribution: number;
  employerRegistered: boolean;
  documents: string[];
};

const UnderwritingState = Annotation.Root({
  application: Annotation<Application>(),
  ruleFindings: Annotation<string[]>({
    default: () => [],
    reducer: (left, right) => [...left, ...right],
  }),
  documentFindings: Annotation<string[]>({
    default: () => [],
    reducer: (left, right) => [...left, ...right],
  }),
  recommendation: Annotation<string | null>({
    default: () => null,
    reducer: (_, right) => right,
  }),
});
```
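The reducers decide how node outputs merge into state. The two findings channels concatenate rather than overwrite, which is what lets the policy and document nodes each contribute findings independently. The behavior in isolation:

```typescript
// Same reducer shape as the ruleFindings/documentFindings channels above.
const appendReducer = (left: string[], right: string[]): string[] => [...left, ...right];

let findings: string[] = [];
findings = appendReducer(findings, ["Employer is not registered."]);
findings = appendReducer(findings, ["Missing documents: signed_consent"]);
// findings now holds both entries, in the order the nodes produced them.
```

The recommendation channel instead uses a last-write-wins reducer, since only one node ever sets it.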
2. Add deterministic policy checks before any LLM call
Do not ask the model to decide whether a pension contribution breaches policy. That belongs in code. Keep the checks explicit so auditors can replay them later.
```typescript
const checkPolicy = async (state: typeof UnderwritingState.State) => {
  const findings: string[] = [];
  const app = state.application;

  if (!app.employerRegistered) {
    findings.push("Employer is not registered.");
  }
  if (app.jurisdiction === "ZA" && app.annualContribution > 350000) {
    findings.push("Annual contribution exceeds South African threshold.");
  }
  if (app.jurisdiction === "UK" && app.annualContribution > 60000) {
    findings.push("Annual contribution exceeds UK allowance.");
  }

  return { ruleFindings: findings };
};

const verifyDocuments = async (state: typeof UnderwritingState.State) => {
  const required = ["id", "proof_of_employment", "signed_consent"];
  const missing = required.filter((doc) => !state.application.documents.includes(doc));

  return {
    documentFindings:
      missing.length > 0
        ? [`Missing documents: ${missing.join(", ")}`]
        : ["All required documents present."],
  };
};
```
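The hard-coded thresholds above are fine for a demo, but in production they should be configuration, not code branches. A sketch using the same illustrative ZA and UK figures (verify against current regulations before relying on them):

```typescript
// Per-jurisdiction annual contribution limits. Figures mirror the demo
// checks above and are illustrative only, not regulatory advice.
const CONTRIBUTION_LIMITS: Record<string, number> = {
  ZA: 350_000,
  UK: 60_000,
};

const exceedsLimit = (jurisdiction: string, annualContribution: number): boolean => {
  const limit = CONTRIBUTION_LIMITS[jurisdiction];
  // Jurisdictions without a configured limit are not flagged here;
  // route them to manual review instead of guessing.
  return limit !== undefined && annualContribution > limit;
};
```

Parameterizing the limits also lets you load per-fund mandates from storage and replay historical decisions against the exact rule set that was live at the time.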
3. Use the LLM only for explanation and escalation wording
The model should not override hard rules. It should summarize the findings into a decision memo that an underwriter can review quickly.
```typescript
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const draftRecommendation = async (state: typeof UnderwritingState.State) => {
  const hasIssues =
    state.ruleFindings.length > 0 ||
    state.documentFindings.some((d) => d.startsWith("Missing"));

  // Clean applications never reach the LLM.
  if (!hasIssues) {
    return { recommendation: "APPROVE_FOR_AUTO_PROCESSING" };
  }

  const prompt = `
You are assisting a pension fund underwriter.
Write a short memo recommending either MANUAL_REVIEW or REJECT based only on these findings:

Rules:
${state.ruleFindings.map((x) => `- ${x}`).join("\n")}

Documents:
${state.documentFindings.map((x) => `- ${x}`).join("\n")}

Keep it factual and compliance-oriented.
`;

  const response = await llm.invoke(prompt);
  return { recommendation: response.content.toString() };
};
```

The ordering inside this node is intentional: deterministic checks decide first, and the LLM only words the explanation for applications that already have findings. That pattern keeps regulated logic outside probabilistic systems.

4. Wire the graph

Now wire the nodes with StateGraph. The graph can route directly to END after producing the recommendation.

```typescript
const graph = new StateGraph(UnderwritingState)
  .addNode("checkPolicy", checkPolicy)
  .addNode("verifyDocuments", verifyDocuments)
  .addNode("draftRecommendation", draftRecommendation)
  .addEdge(START, "checkPolicy")
  .addEdge("checkPolicy", "verifyDocuments")
  .addEdge("verifyDocuments", "draftRecommendation")
  .addEdge("draftRecommendation", END);

export const underwritingAgent = graph.compile();
```
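The architecture list includes a human review handoff that this linear graph does not yet wire. One way to add it is a conditional edge after the recommendation node; keeping the routing decision a pure function makes it testable on its own. A sketch (the `humanReview` node name and the escalation condition are assumptions):

```typescript
// Pure routing decision: which node runs after draftRecommendation.
// "__end__" is the string value of LangGraph's END sentinel.
const routeAfterRecommendation = (state: {
  ruleFindings: string[];
  documentFindings: string[];
}): "humanReview" | "__end__" => {
  const needsHuman =
    state.ruleFindings.length > 0 ||
    state.documentFindings.some((d) => d.startsWith("Missing"));
  return needsHuman ? "humanReview" : "__end__";
};

// In the graph, the fixed edge to END would then become:
//   .addNode("humanReview", humanReviewNode)
//   .addConditionalEdges("draftRecommendation", routeAfterRecommendation)
//   .addEdge("humanReview", END)
```

With this split, every escalation decision is reproducible in a unit test without invoking the model.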
To run it:
```typescript
const result = await underwritingAgent.invoke({
  application: {
    applicantId: "A123",
    fundId: "PF001",
    jurisdiction: "ZA",
    annualContribution: 280000,
    employerRegistered: true,
    documents: ["id", "proof_of_employment", "signed_consent"],
  },
});

console.log(result.recommendation);
console.log(result.ruleFindings);
console.log(result.documentFindings);
```
Production Considerations

- Keep sensitive data in-region
  - Pension fund member data often has residency constraints.
  - Deploy your model endpoint and graph runtime in the same jurisdiction as the fund’s approved storage boundary.
- Log every decision input
  - Store the raw application snapshot, rule outputs, document checks, prompt text version, model version, and final recommendation.
- Put hard stops around policy breaches
  - If contribution thresholds or eligibility rules fail, do not let the LLM soften that outcome.
- Add human approval for exceptions
  - Anything with missing consent forms, ambiguous employment status, or cross-border contributions should route to manual review.
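The logging guidance above translates naturally into a typed record written once per run. A minimal sketch; the field names and version labels are assumptions, not a prescribed format:

```typescript
// One immutable record per agent run, intended for append-only storage.
type AuditRecord = {
  applicationSnapshot: unknown; // raw input exactly as received
  ruleFindings: string[];
  documentFindings: string[];
  promptVersion: string; // version label of the prompt template text
  modelVersion: string; // model name pinned at invocation time
  recommendation: string | null;
  recordedAt: string; // ISO-8601 timestamp
};

const buildAuditRecord = (
  snapshot: unknown,
  ruleFindings: string[],
  documentFindings: string[],
  recommendation: string | null,
): AuditRecord => ({
  applicationSnapshot: snapshot,
  ruleFindings,
  documentFindings,
  promptVersion: "underwriting-memo-v1", // hypothetical version label
  modelVersion: "gpt-4o-mini",
  recommendation,
  recordedAt: new Date().toISOString(),
});
```

Writing the record after every node, not just at the end, is what lets auditors replay the exact path the agent took.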
Common Pitfalls

- Letting the LLM make compliance decisions
  - Bad pattern: asking the model whether a contribution is allowed.
  - Fix it by enforcing rules in TypeScript before any model call.
- Not preserving audit traces
  - Bad pattern: storing only the final answer.
  - Fix it by persisting each node output with timestamps and versions.
- Ignoring jurisdiction-specific thresholds
  - Bad pattern: one global underwriting policy for all funds.
  - Fix it by parameterizing rules per jurisdiction and per fund mandate.
- Sending unnecessary PII to the model
  - Bad pattern: passing full identity numbers or bank details into prompts.
  - Fix it by redacting fields before `llm.invoke()` and keeping only what is needed for reasoning.
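The redaction fix from the last pitfall can be a small pure function applied to the state before any prompt is built. A sketch; the PII field names are assumptions about your schema:

```typescript
// Direct identifiers that must never reach a prompt (illustrative field names).
const PII_FIELDS = new Set(["idNumber", "bankAccount", "taxReference"]);

const redact = (record: Record<string, unknown>): Record<string, unknown> => {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(record)) {
    clean[key] = PII_FIELDS.has(key) ? "[REDACTED]" : value;
  }
  return clean;
};
```

Running every prompt input through a function like this also gives you a single choke point to audit when the PII list changes.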
Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.