How to Build an Underwriting Agent Using AutoGen in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, autogen, typescript, pension-funds

An underwriting agent for pension funds reviews member, employer, and plan data, then produces a risk decision with evidence. This matters because pension underwriting is not just about speed; it has to respect compliance, auditability, and data residency, and bad decisions can affect long-term retirement liabilities.

Architecture

Build this agent as a small workflow, not a single prompt.

  • Input normalizer

    • Converts raw pension application data into a strict TypeScript schema.
    • Validates fields like employer contribution history, plan type, jurisdiction, and funding status.
  • Policy retrieval layer

    • Pulls internal underwriting rules, trustee policy notes, and regulatory guidance.
    • Keeps the agent grounded in approved pension-fund policy instead of free-form reasoning.
  • Multi-agent decision loop

    • One agent analyzes risk.
    • Another checks compliance and missing evidence.
    • A final agent drafts the recommendation and cites the inputs used.
  • Audit logger

    • Stores prompts, tool calls, intermediate decisions, and final output.
    • Required for model governance and later review by compliance teams.
  • Human approval gate

    • Routes borderline cases to an underwriter or trustee operations team.
    • Prevents automatic approval on high-risk or incomplete cases.
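The five components above can be captured as a small typed contract before any model is involved. This is only a sketch; the stage and field names here are illustrative, not part of any AutoGen API.

```typescript
// Each run walks these stages in order; the audit logger records every one.
type Stage =
  | "normalize"
  | "retrieve_policy"
  | "decide"
  | "audit"
  | "approval_gate";

interface StageRecord {
  stage: Stage;
  startedAt: string; // ISO timestamp, persisted for the audit trail
  output: unknown;   // normalized payload, retrieved policy, model output, etc.
}

interface UnderwritingRun {
  caseId: string;
  records: StageRecord[]; // one record per stage, in execution order
  finalDecision?: "approve" | "approve_with_conditions" | "escalate";
}
```

Keeping the run as a single typed object makes it trivial to persist the whole trail at the end of every case.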

Implementation

1. Install AutoGen and define the underwriting payload

Use the TypeScript AutoGen package and keep your input contract tight. Pension workflows fail when you let unstructured text drift into the decision layer.

npm install @autogen/core zod

import { z } from "zod";

export const PensionUnderwritingInputSchema = z.object({
  fundId: z.string(),
  employerName: z.string(),
  jurisdiction: z.string(),
  planType: z.enum(["defined_benefit", "defined_contribution", "hybrid"]),
  annualContribution: z.number().nonnegative(),
  fundedStatus: z.number().min(0).max(200),
  arrearsMonths: z.number().int().nonnegative(),
  sanctionsHit: z.boolean(),
  residencyRegion: z.string(),
});

export type PensionUnderwritingInput = z.infer<typeof PensionUnderwritingInputSchema>;

2. Create agents with clear responsibilities

For this pattern, use AssistantAgent instances with narrow prompts. Keep one agent focused on risk analysis and one on compliance review.

import { AssistantAgent } from "@autogen/core";

export const riskAnalyst = new AssistantAgent({
  name: "risk_analyst",
  systemMessage:
    "You assess pension underwriting risk using only provided inputs and policy context. Return concise risk factors and a recommendation.",
});

export const complianceReviewer = new AssistantAgent({
  name: "compliance_reviewer",
  systemMessage:
    "You check pension underwriting outputs for policy breaches, missing evidence, sanctions concerns, residency issues, and audit gaps.",
});

export const reportWriter = new AssistantAgent({
  name: "report_writer",
  systemMessage:
    "You produce the final underwriting memo for a pension fund. Include decision, rationale, evidence used, and escalation flags.",
});

3. Run a controlled multi-agent conversation

The practical pattern is: validate input, send it to the risk agent first, pass that result to compliance review, then generate the final memo. Use initiateChat so each step is explicit in your logs.

import { AssistantAgent } from "@autogen/core";
import { PensionUnderwritingInputSchema } from "./schema";

async function runUnderwriting(rawInput: unknown) {
  const input = PensionUnderwritingInputSchema.parse(rawInput);

  const riskAnalyst = new AssistantAgent({
    name: "risk_analyst",
    systemMessage:
      "Assess pension underwriting risk using only provided inputs. Focus on arrears, funded status, sanctions, jurisdiction mismatch, and contribution stability.",
  });

  const complianceReviewer = new AssistantAgent({
    name: "compliance_reviewer",
    systemMessage:
      "Review underwriting findings for compliance issues. Flag missing audit evidence, data residency concerns, sanctions exposure, or any reason to escalate.",
  });

  const reportWriter = new AssistantAgent({
    name: "report_writer",
    systemMessage:
      "Write a final underwriting memo for a pension fund with decision options: approve, approve_with_conditions, escalate. Cite the evidence explicitly.",
  });

  const riskResult = await riskAnalyst.initiateChat(
    `Evaluate this pension case:\n${JSON.stringify(input)}`
  );

  const complianceResult = await complianceReviewer.initiateChat(
    `Review this risk assessment for compliance:\n${riskResult}`
  );

  const finalMemo = await reportWriter.initiateChat(
    `Draft the final memo using this case data:\n${JSON.stringify(input)}\n\nRisk assessment:\n${riskResult}\n\nCompliance review:\n${complianceResult}`
  );

  return {
    input,
    riskResult,
    complianceResult,
    finalMemo,
  };
}
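Since each step is an explicit call, wrapping them in an audit log is straightforward. A hypothetical in-memory sketch (production would write to append-only storage; the names here are illustrative):

```typescript
interface AuditEntry {
  step: string;     // e.g. "risk_analysis", "compliance_review", "final_memo"
  at: string;       // ISO timestamp
  payload: unknown; // prompt, model output, or tool result
}

export function makeAuditLog() {
  const entries: AuditEntry[] = [];
  return {
    record(step: string, payload: unknown): void {
      entries.push({ step, at: new Date().toISOString(), payload });
    },
    // Return a copy so callers cannot rewrite history.
    entries(): AuditEntry[] {
      return [...entries];
    },
  };
}
```

In runUnderwriting, call record after every initiateChat so the stored trail mirrors the conversation order exactly.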

4. Add an approval threshold before any automated action

Pension funds should not let an LLM silently approve borderline cases. If arrears are high or funded status is weak, route to human review.

function needsHumanReview(input: PensionUnderwritingInput) {
  return (
    input.sanctionsHit ||
    input.arrearsMonths >= 3 ||
    input.fundedStatus < 80 ||
    input.residencyRegion !== input.jurisdiction
  );
}

In production, wire this into your workflow engine so the agent can recommend a decision but never execute one on its own when thresholds are breached.
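The gate itself can be a pure function layered over the model's recommendation. A sketch, with the input type and threshold check mirrored inline so the snippet stands alone:

```typescript
// Subset of the input schema relevant to the thresholds.
interface GateInput {
  sanctionsHit: boolean;
  arrearsMonths: number;
  fundedStatus: number;
  residencyRegion: string;
  jurisdiction: string;
}

type Decision = "approve" | "approve_with_conditions" | "escalate";

// Mirror of the threshold function above.
function needsHumanReview(input: GateInput): boolean {
  return (
    input.sanctionsHit ||
    input.arrearsMonths >= 3 ||
    input.fundedStatus < 80 ||
    input.residencyRegion !== input.jurisdiction
  );
}

// The model only recommends; deterministic thresholds always win.
export function gateDecision(input: GateInput, recommended: Decision): Decision {
  return needsHumanReview(input) ? "escalate" : recommended;
}
```

Because the gate runs after the memo is drafted, a borderline case still gets a full written recommendation for the human reviewer to work from.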

Production Considerations

  • Audit everything

    • Persist raw inputs, normalized payloads, model prompts, tool outputs, and final decisions.
    • For pension funds this is non-negotiable because trustees and regulators will ask why a case was approved or escalated.
  • Keep data residency explicit

    • Pin inference to an approved region that matches your fund’s legal requirements.
    • Do not send member PII or employer financials to endpoints outside approved jurisdictions.
  • Use deterministic guardrails

    • Enforce schema validation before the model sees anything.
    • Reject unsupported outputs like free-form approvals without evidence fields.
  • Route uncertain cases to humans

    • Anything involving sanctions hits, weak funding ratios, or cross-border residency should go to manual review.
    • The agent should draft recommendations; it should not be the final authority on high-risk pension cases.

Common Pitfalls

  1. Letting the model infer missing facts

    • If funded status or jurisdiction is absent, do not ask the model to guess.
    • Fail closed with a human escalation instead of producing confident nonsense.
  2. Mixing policy text with raw member data in one blob

    • Separate retrieval context from application data.
    • This makes audits cleaner and reduces accidental leakage of sensitive pension records into prompts.
  3. Skipping decision traceability

    • A recommendation without cited inputs is useless in regulated environments.
    • Store which fields drove the outcome so compliance can reconstruct the path later.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
