How to Build a Compliance-Checking Agent Using AutoGen in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21

A compliance-checking agent for pension funds reviews member communications, investment actions, contribution flows, and policy documents against the rules that matter in regulated retirement operations. It catches breaches early, creates an audit trail, and reduces the manual load on compliance teams that still need deterministic evidence for every decision.

Architecture

Build this agent as a small set of explicit components, not one giant prompt:

  • Input normalizer

    • Takes pension fund documents, transaction metadata, and policy text.
    • Converts them into a stable schema before any LLM call.
  • Rules retrieval layer

    • Pulls the relevant internal policy clauses, trustee rules, and jurisdiction-specific pension regulations.
    • Keeps the agent grounded in source material instead of free-form reasoning.
  • AutoGen agent pair

    • An AssistantAgent performs the compliance analysis.
    • A UserProxyAgent orchestrates execution and captures results for downstream systems.
  • Deterministic validator

    • Checks the model output against required fields like risk level, violated clause, and evidence.
    • Rejects incomplete responses before they reach reviewers.
  • Audit logger

    • Stores prompts, retrieved clauses, model outputs, timestamps, and document hashes.
    • This is non-negotiable for pension funds.
  • Policy action router

    • Sends high-risk cases to human compliance officers.
    • Auto-closes low-risk cases only when confidence and rule coverage are sufficient.
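The deterministic validator in the list above can be sketched as a plain type guard. The field names are assumptions that mirror the JSON contract the assistant is asked to return later in this guide:

```typescript
// Assumed output contract for the compliance review (illustrative, not a fixed API).
export interface ComplianceReview {
  decision: "COMPLIANT" | "NON_COMPLIANT" | "NEEDS_REVIEW";
  riskLevel: "LOW" | "MEDIUM" | "HIGH";
  violatedClauses: string[];
  rationale: string;
  evidence: string[];
}

const DECISIONS = new Set(["COMPLIANT", "NON_COMPLIANT", "NEEDS_REVIEW"]);
const RISK_LEVELS = new Set(["LOW", "MEDIUM", "HIGH"]);

// Returns the typed review or throws, so incomplete model responses
// never reach reviewers or downstream automation.
export function validateReview(raw: unknown): ComplianceReview {
  const r = raw as Partial<ComplianceReview> | null;
  if (
    !r ||
    !DECISIONS.has(r.decision as string) ||
    !RISK_LEVELS.has(r.riskLevel as string) ||
    !Array.isArray(r.violatedClauses) ||
    typeof r.rationale !== "string" ||
    !Array.isArray(r.evidence)
  ) {
    throw new Error("Model output rejected: missing or malformed required fields");
  }
  return r as ComplianceReview;
}
```

Because the check is ordinary code rather than another LLM call, it behaves identically on every run, which is exactly what an auditor expects.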

Implementation

1) Install AutoGen for TypeScript and define your compliance input

Use a strict schema from the start. Pension compliance work breaks quickly when inputs are loosely shaped.

npm install @autogenai/autogen openai zod

import { z } from "zod";

export const ComplianceCaseSchema = z.object({
  caseId: z.string(),
  jurisdiction: z.enum(["UK", "EU", "US", "AU"]),
  documentType: z.enum(["member_letter", "investment_instruction", "benefit_statement", "policy"]),
  content: z.string(),
  policyClauses: z.array(z.string()),
});

export type ComplianceCase = z.infer<typeof ComplianceCaseSchema>;

This gives you a stable contract for member communications, trustee policies, and investment instructions.

2) Create the AutoGen agents with a compliance-focused system message

The key pattern is simple: one agent reasons over the case, another executes the chat and collects output. Keep the assistant constrained to cite clauses and avoid guessing.

import { OpenAIChatCompletionClient } from "@autogenai/autogen";
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const modelClient = new OpenAIChatCompletionClient({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

const complianceAgent = new AssistantAgent({
  name: "pension_compliance_agent",
  modelClient,
  systemMessage: `
You are a pension fund compliance analyst.
Only use the provided content and policy clauses.
Return JSON with:
- decision: COMPLIANT | NON_COMPLIANT | NEEDS_REVIEW
- riskLevel: LOW | MEDIUM | HIGH
- violatedClauses: string[]
- rationale: string
- evidence: string[]
Do not invent regulations. Cite only supplied clauses.
`,
});

const orchestrator = new UserProxyAgent({
  name: "compliance_orchestrator",
});

This is where pension-fund-specific guardrails start. If you let the model improvise regulatory language, you will create audit problems immediately.

3) Run a structured compliance review and validate the response

Use initiateChat to run the analysis. Then validate the returned JSON before storing or routing it.

import { ComplianceCaseSchema } from "./schema";

async function reviewCase(input: unknown) {
  const caseData = ComplianceCaseSchema.parse(input);

  const prompt = `
Review this pension fund case for compliance.

Case ID: ${caseData.caseId}
Jurisdiction: ${caseData.jurisdiction}
Document Type: ${caseData.documentType}

Content:
${caseData.content}

Policy Clauses:
${caseData.policyClauses.map((c, i) => `${i + 1}. ${c}`).join("\n")}

Return only valid JSON matching the required schema.
`;

  const result = await orchestrator.initiateChat(complianceAgent, prompt);

  const text =
    typeof result === "string"
      ? result
      : JSON.stringify(result);

  const parsed = JSON.parse(text);

  return {
    caseId: caseData.caseId,
    review: parsed,
    reviewedAt: new Date().toISOString(),
  };
}

In production, I would wrap JSON.parse in retry logic because LLMs occasionally emit extra text. But keep the contract strict; don’t accept partial objects for regulated workflows.
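That retry logic can be sketched as two small helpers: one that extracts the first JSON object from a reply that may contain surrounding prose, and one that re-runs the chat a bounded number of times. The `runChat` callback is a hypothetical wrapper around `orchestrator.initiateChat`:

```typescript
// Pull the first JSON object out of a model reply that may contain
// extra text before or after it, instead of calling JSON.parse directly.
export function extractJson(text: string): unknown {
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(text.slice(start, end + 1));
}

// Bounded retry: re-invoke the chat when the output cannot be parsed.
// `runChat` is an assumed callback that re-runs the agent conversation.
export async function parseWithRetry(
  runChat: () => Promise<string>,
  maxAttempts = 3
): Promise<unknown> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return extractJson(await runChat());
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Keeping the attempt count low matters: if the model cannot produce valid JSON in a few tries, that case should fail loudly and route to a human rather than loop.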

4) Route outcomes to human review or downstream automation

Pension funds need deterministic escalation. A “maybe” from the model should almost always become a human task if member benefits or contribution handling are involved.

function routeOutcome(review: any) {
  if (review.decision === "NON_COMPLIANT" || review.riskLevel === "HIGH") {
    return {
      action: "ESCALATE_TO_HUMAN",
      queue: "pension-compliance",
      reason: review.rationale,
    };
  }

  if (review.decision === "NEEDS_REVIEW") {
    return {
      action: "REVIEW_REQUIRED",
      queue: "pension-compliance-triage",
      reason: review.rationale,
    };
  }

  return {
    action: "AUTO_CLOSE",
    queue: null,
    reason: review.rationale,
  };
}

That routing layer is where you protect members and trustees. Never auto-close cases that touch benefit calculations, transfer requests, or contribution exceptions without explicit policy coverage.
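That "never auto-close" rule can be enforced as a hard gate in front of the routing function. The topic labels below are hypothetical; derive them from your own document classification:

```typescript
// Topics that must always go to a human, regardless of model confidence.
// These labels are illustrative, not a fixed taxonomy.
const NEVER_AUTO_CLOSE = new Set([
  "benefit_calculation",
  "transfer_request",
  "contribution_exception",
]);

// Any protected topic on the case forces human review.
export function canAutoClose(documentTopics: string[]): boolean {
  return !documentTopics.some((topic) => NEVER_AUTO_CLOSE.has(topic));
}
```

Checking topics in deterministic code, rather than trusting the model's own risk rating, means a misclassified HIGH-risk transfer case still cannot slip through to auto-closure.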

Production Considerations

  • Deploy in-region

    • Keep data residency aligned with your pension fund’s legal obligations.
    • If member data must stay in-country or in-region, your model endpoint, vector store, logs, and backups must follow that constraint too.
  • Store full audit traces

    • Persist input hashes, retrieved clauses, prompt text, response JSON, and routing decisions.
    • Auditors will ask why a case was marked compliant six months later.
  • Add hard guardrails

    • Block outputs missing decision, riskLevel, or violatedClauses.
    • Reject any response that cites regulations not present in retrieved policy text unless your pipeline explicitly injects them.
  • Monitor false negatives by case type

    • Track missed issues separately for contribution processing, transfers out, retirements-in-payment phase rules, and marketing/member comms.
    • Pension compliance failures are not uniform; each workflow has different blast radius.
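The audit-trace point above can be sketched with Node's built-in crypto module. The record shape is an illustration, not a fixed schema; the key idea is hashing the raw inputs so an auditor can later prove exactly what the model saw:

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record; adapt the fields to your own storage schema.
export interface AuditRecord {
  caseId: string;
  inputHash: string;     // SHA-256 of the source document content
  promptHash: string;    // SHA-256 of the exact prompt sent to the model
  responseJson: string;  // verbatim model output
  routingAction: string;
  recordedAt: string;
}

export function buildAuditRecord(
  caseId: string,
  input: string,
  prompt: string,
  responseJson: string,
  routingAction: string
): AuditRecord {
  const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");
  return {
    caseId,
    inputHash: sha256(input),
    promptHash: sha256(prompt),
    responseJson,
    routingAction,
    recordedAt: new Date().toISOString(),
  };
}
```

Storing hashes alongside the verbatim prompt and response lets you detect after the fact whether a stored document was altered since the review ran.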

Common Pitfalls

  1. Using free-text outputs

    • Problem: The model returns a nice explanation but no machine-readable decision.
    • Fix: Force JSON output with required fields and reject anything else.
  2. Mixing jurisdictions in one prompt

    • Problem: UK auto-enrolment rules get blended with EU or AU concepts.
    • Fix: Pass exactly one jurisdiction per run and retrieve only matching clauses.
  3. Skipping human escalation thresholds

    • Problem: The agent auto-closes cases that should be reviewed by compliance staff.
    • Fix: Escalate anything high-risk or ambiguous by default; optimize for recall over automation rate in regulated pension workflows.
  4. Ignoring data residency in logs

    • Problem: You keep prompts and member data in an external logging service outside approved regions.
    • Fix: Treat logs as regulated data. Apply the same residency controls as your source systems.
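The fix for pitfall 2 amounts to a filter on retrieved clauses before they enter the prompt. The clause shape here is an assumption for illustration:

```typescript
// Assumed clause shape; match it to however your rules retrieval layer
// tags policy text with a jurisdiction.
export interface PolicyClause {
  jurisdiction: "UK" | "EU" | "US" | "AU";
  text: string;
}

// Exactly one jurisdiction per run: drop everything else before prompting.
export function clausesForRun(
  clauses: PolicyClause[],
  jurisdiction: PolicyClause["jurisdiction"]
): string[] {
  return clauses
    .filter((clause) => clause.jurisdiction === jurisdiction)
    .map((clause) => clause.text);
}
```

Filtering before the prompt, rather than asking the model to ignore foreign clauses, keeps UK auto-enrolment rules from bleeding into an EU or AU analysis in the first place.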

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
