How to Build an Underwriting Agent Using AutoGen in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting · autogen · typescript · wealth-management

An underwriting agent for wealth management takes client documents, portfolio data, risk questionnaires, and policy rules, then produces a structured underwriting recommendation with clear rationale. It matters because advisory firms need faster decisions without losing control over compliance, auditability, and suitability checks.

Architecture

  • Input ingestion layer

    • Pulls KYC/AML data, investment profile forms, account history, and supporting documents.
    • Normalizes everything into a single underwriting payload.
  • Policy/rules engine

    • Encodes firm-specific underwriting thresholds.
    • Handles hard stops like missing disclosures, sanctions hits, or residency constraints.
  • AutoGen agent layer

    • Uses a primary AssistantAgent to analyze the case.
    • Uses a secondary reviewer agent to challenge the recommendation before it is returned.
  • Tooling layer

    • Exposes functions for fetching client facts, checking policy rules, and writing audit logs.
    • Keeps the LLM from inventing facts.
  • Audit and evidence store

    • Persists prompts, tool calls, outputs, and final decisions.
    • Required for compliance review and post-trade or suitability audits.
  • Human approval workflow

    • Routes borderline or high-risk cases to a licensed reviewer.
    • Prevents fully automated approvals where policy requires sign-off.
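
The layered flow above can be sketched as a typed pipeline. All names below are illustrative, not AutoGen APIs; the point is that hard stops run deterministically before any model call, and anything that passes them still defaults to human routing:

```typescript
// Illustrative types for the ingestion, policy, and decision layers.
interface UnderwritingPayload {
  clientId: string;
  sanctionsHit: boolean;
  missingKycDocs: string[];
}

interface PolicyResult {
  allowed: boolean;
  reason: string;
}

interface Decision {
  decision: "approve" | "decline" | "escalate";
  rationale: string;
}

// Hard stops run before any LLM call; the agent only sees cases that pass.
function applyHardStops(payload: UnderwritingPayload): PolicyResult {
  if (payload.sanctionsHit) return { allowed: false, reason: "Sanctions hit" };
  if (payload.missingKycDocs.length > 0) {
    return { allowed: false, reason: "Missing KYC docs" };
  }
  return { allowed: true, reason: "Pass" };
}

function decide(payload: UnderwritingPayload): Decision {
  const policy = applyHardStops(payload);
  if (!policy.allowed) {
    return { decision: "decline", rationale: policy.reason };
  }
  // In the full system, this is where the AutoGen agent layer runs.
  return { decision: "escalate", rationale: "Route to analyst review" };
}
```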

Implementation

  1. Install AutoGen and define your underwriting data model

    For TypeScript, keep the domain object strict. Underwriting in wealth management is not free-form chat; it is structured decisioning with traceable inputs.

    npm install @autogenai/autogen zod
    
    import { z } from "zod";
    
    const UnderwritingCaseSchema = z.object({
      clientId: z.string(),
      jurisdiction: z.string(),
      riskTolerance: z.enum(["conservative", "moderate", "aggressive"]),
      liquidityNeedMonths: z.number().int().min(0),
      concentrationPct: z.number().min(0).max(100),
      sanctionsHit: z.boolean(),
      missingKycDocs: z.array(z.string()),
    });
    
    export type UnderwritingCase = z.infer<typeof UnderwritingCaseSchema>;
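
Reject malformed payloads before the agent ever sees them. With zod you would call `UnderwritingCaseSchema.safeParse(raw)`; the hand-rolled sketch below shows the equivalent checks for three of the fields, so the failure path is explicit (the `validateCase` helper is illustrative):

```typescript
// Hand-rolled equivalent of a safeParse on part of the schema, for illustration.
interface ParseResult {
  success: boolean;
  error?: string;
}

function validateCase(raw: any): ParseResult {
  if (typeof raw?.clientId !== "string") {
    return { success: false, error: "clientId" };
  }
  if (!["conservative", "moderate", "aggressive"].includes(raw?.riskTolerance)) {
    return { success: false, error: "riskTolerance" };
  }
  const pct = raw?.concentrationPct;
  if (typeof pct !== "number" || pct < 0 || pct > 100) {
    return { success: false, error: "concentrationPct" };
  }
  return { success: true };
}
```

A payload that fails validation should never reach the agent layer; return an error to the caller instead.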
    
  2. Create tools for facts, policy checks, and audit logging

    The agent should not infer client facts from memory. Give it tools that return deterministic results from your internal systems.

    // These tools are plain async functions; no AutoGen import is needed here.
    
    async function getClientFacts(clientId: string) {
      return {
        clientId,
        netWorthBand: "high",
        approvedProducts: ["managed_portfolio", "municipal_bonds"],
        residency: "US",
      };
    }
    
    async function checkUnderwritingPolicy(input: {
      concentrationPct: number;
      sanctionsHit: boolean;
      missingKycDocs: string[];
    }) {
      if (input.sanctionsHit) return { allowed: false, reason: "Sanctions hit" };
      if (input.missingKycDocs.length > 0) return { allowed: false, reason: "Missing KYC docs" };
      if (input.concentrationPct > 35) return { allowed: false, reason: "Concentration above limit" };
      return { allowed: true, reason: "Pass" };
    }
    
    async function writeAuditEvent(event: unknown) {
      console.log(JSON.stringify(event));
    }
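
Because the policy rules are deterministic, they are trivially table-testable without any agent in the loop. A synchronous sketch of the same thresholds (simplified from `checkUnderwritingPolicy`; the name `evaluatePolicy` is illustrative):

```typescript
// Pure, synchronous copy of the policy rules so thresholds can be unit-tested.
interface PolicyInput {
  concentrationPct: number;
  sanctionsHit: boolean;
  missingKycDocs: string[];
}

function evaluatePolicy(input: PolicyInput): { allowed: boolean; reason: string } {
  if (input.sanctionsHit) return { allowed: false, reason: "Sanctions hit" };
  if (input.missingKycDocs.length > 0) return { allowed: false, reason: "Missing KYC docs" };
  if (input.concentrationPct > 35) return { allowed: false, reason: "Concentration above limit" };
  return { allowed: true, reason: "Pass" };
}
```

Keeping the rules pure like this means a compliance change is a one-line diff plus an updated test, with no prompt engineering involved.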
    
  3. Build the AutoGen agents and wire in tool calls

    Use one agent to produce the recommendation and another to review it. This pattern works better than a single-agent answer because wealth management decisions need a second set of eyes.

    import { AssistantAgent } from "@autogenai/autogen";
    
    const underwriter = new AssistantAgent({
      name: "underwriter",
      modelClientOptions: {
        model: "gpt-4o-mini",
        apiKey: process.env.OPENAI_API_KEY!,
      },
      systemMessage:
        [
          "You are an underwriting analyst for a wealth management firm.",
          "Use only provided facts and tool outputs.",
          "Return JSON with fields: decision, rationale, risks, escalationRequired.",
          "Never approve if sanctionsHit is true or required KYC docs are missing.",
          "Flag any case with jurisdiction or suitability ambiguity for human review.",
        ].join(" "),
    });
    
    const reviewer = new AssistantAgent({
      name: "reviewer",
      modelClientOptions: {
        model: "gpt-4o-mini",
        apiKey: process.env.OPENAI_API_KEY!,
      },
      systemMessage:
        [
          "You are a compliance reviewer.",
          "Challenge unsupported assumptions.",
          "Reject any recommendation that conflicts with policy or audit requirements.",
          "Return JSON with fields: approveReview, issuesFound.",
        ].join(" "),
    });
    
    export async function runUnderwriting(caseInputRaw: unknown) {
      const caseInput = UnderwritingCaseSchema.parse(caseInputRaw);
      const facts = await getClientFacts(caseInput.clientId);
      const policy = await checkUnderwritingPolicy({
        concentrationPct: caseInput.concentrationPct,
        sanctionsHit: caseInput.sanctionsHit,
        missingKycDocs: caseInput.missingKycDocs,
      });
    
      const prompt = [
        `Client facts: ${JSON.stringify(facts)}`,
        `Case input: ${JSON.stringify(caseInput)}`,
        `Policy result: ${JSON.stringify(policy)}`,
        "Make an underwriting recommendation.",
      ].join("\n\n");
    
      const draft = await underwriter.run([{ role: "user", content: prompt }]);
      const review = await reviewer.run([
        { role: "user", content: `Review this draft for compliance issues:\n${draft.output}` },
      ]);
    
      await writeAuditEvent({
        clientId: caseInput.clientId,
        facts,
        policy,
        draftOutput: draft.output,
        reviewOutput: review.output,
        timestamp: new Date().toISOString(),
      });
    
      return {
        draftDecisionText: draft.output,
        reviewText: review.output,
        policyAllowed: policy.allowed,
      };
    }


  4. Return a structured decision and route exceptions to humans

    In production, do not expose raw model text as the final result. Convert the output into a controlled decision object and require manual approval when confidence is low or policy flags appear.
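
A minimal sketch of that conversion. The JSON field names (`decision`, `rationale`, `escalationRequired`) match the system prompt from step 3; the `toFinalDecision` helper and status values are illustrative. Note the conservative defaults: unparseable output or anything short of a clean approval routes to a human:

```typescript
// Convert raw model text into a controlled decision object.
type FinalDecision =
  | { status: "approved"; rationale: string }
  | { status: "declined"; rationale: string }
  | { status: "needs_human_review"; rationale: string };

function toFinalDecision(modelText: string, policyAllowed: boolean): FinalDecision {
  let parsed: any;
  try {
    parsed = JSON.parse(modelText);
  } catch {
    // Unparseable output must never auto-approve; route to a human instead.
    return { status: "needs_human_review", rationale: "Model output was not valid JSON" };
  }
  if (!policyAllowed) {
    // Deterministic policy always overrides the model.
    return { status: "declined", rationale: "Hard policy rule failed" };
  }
  if (parsed.escalationRequired === true || parsed.decision !== "approve") {
    return { status: "needs_human_review", rationale: String(parsed.rationale ?? "Escalated") };
  }
  return { status: "approved", rationale: String(parsed.rationale) };
}
```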

## Production Considerations

- **Deployment**
- Keep the agent behind an internal service boundary.
- Pin model versions and use region-specific endpoints where data residency matters.
- Store prompts and outputs in an immutable audit log.

- **Monitoring**
- Track approval rate, escalation rate, tool failure rate, and policy override frequency.
- Alert on cases where the agent recommends approval despite policy friction.
- Sample outputs weekly for compliance review.
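
A minimal in-process sketch of those counters (in production you would export them to your metrics backend; the class and event names are illustrative):

```typescript
// Tracks outcome counts so approval/escalation rates can be monitored over time.
class UnderwritingMetrics {
  private counts = new Map<string, number>();

  record(event: "approved" | "escalated" | "declined" | "tool_failure") {
    this.counts.set(event, (this.counts.get(event) ?? 0) + 1);
  }

  // Fraction of all recorded events matching `event`; 0 when nothing recorded.
  rate(event: string): number {
    const total = Array.from(this.counts.values()).reduce((a, b) => a + b, 0);
    return total === 0 ? 0 : (this.counts.get(event) ?? 0) / total;
  }
}
```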

- **Guardrails**
- Enforce hard rules outside the LLM for sanctions, missing KYC/AML artifacts, residency restrictions, and concentration limits.
- Require human sign-off for high-net-worth clients in restricted jurisdictions or complex product mixes.
- Redact PII before sending data to non-essential tools.
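
A sketch of the redaction step. The two patterns below (US SSN-like numbers and emails) are simplified examples, not an exhaustive PII ruleset; a real deployment would use a vetted redaction library or service:

```typescript
// Replace obvious PII patterns before data crosses the trust boundary.
function redactPII(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]")
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED_EMAIL]");
}
```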

- **Auditability**
- Persist every tool call with timestamps and correlation IDs.
- Keep the exact prompt context used for each decision.
- Make sure reviewers can reconstruct why a case was approved or escalated.
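
One way to make that reconstruction possible is to thread a single correlation ID through every event for a case. A sketch, assuming Node's `crypto.randomUUID` (the `makeAuditTrail` helper is illustrative):

```typescript
import { randomUUID } from "node:crypto";

// Every audit event carries the same correlation ID, so a reviewer can
// replay one case end to end from the evidence store.
interface AuditEvent {
  correlationId: string;
  step: string;
  timestamp: string;
  payload: unknown;
}

function makeAuditTrail(correlationId: string = randomUUID()) {
  const events: AuditEvent[] = [];
  return {
    log(step: string, payload: unknown) {
      events.push({ correlationId, step, timestamp: new Date().toISOString(), payload });
    },
    events: () => events,
  };
}
```

In `runUnderwriting`, you would create one trail per case and log the policy result, draft, and review against the same ID.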

## Common Pitfalls

- **Letting the model decide on raw text alone**
- Avoid this by validating input with `zod` and using deterministic policy checks before the LLM sees the case.

- **Skipping human review on borderline cases**
- Avoid this by routing ambiguous jurisdictions, missing documentation, or high-concentration portfolios to an advisor or compliance officer.

- **Ignoring data residency constraints**
- Avoid this by keeping client data in approved regions and using deployment targets that match your regulatory footprint.

---

## Keep learning

- [The complete AI Agents Roadmap](/blog/ai-agents-roadmap-2026) — my full 8-step breakdown
- [Free: The AI Agent Starter Kit](/starter-kit) — PDF checklist + starter code
- [Work with me](/contact) — I build AI for banks and insurance companies

*By Cyprian Aarons, AI Consultant at [Topiax](https://topiax.xyz).*
