How to Build a Policy Q&A Agent Using AutoGen in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
Tags: policy-q-a, autogen, typescript, pension-funds, policy-qanda

A policy Q&A agent for pension funds answers staff and member questions against approved policy documents, scheme rules, and operational procedures. It matters because pension operations are full of edge cases: eligibility, contribution limits, transfer rules, retirement options, and disclosure requirements all need consistent answers with an audit trail.

Architecture

  • TypeScript API layer

    • Exposes a /ask-policy endpoint for internal tools or member portals.
    • Handles authentication, request validation, and rate limiting.
  • Document retrieval layer

    • Pulls from approved policy sources only: scheme rules, trustee minutes, admin manuals, benefit guides.
    • Uses chunking and metadata like documentType, effectiveDate, and jurisdiction.
  • AutoGen agent runtime

    • A single AssistantAgent handles the Q&A flow.
    • A UserProxyAgent drives execution and can be configured to avoid code execution for safety.
  • Policy guardrail layer

    • Rejects questions outside the allowed scope.
    • Forces citations from retrieved documents before returning an answer.
  • Audit and logging layer

    • Stores question, retrieved chunks, model output, timestamps, and versioned policy source IDs.
    • Supports compliance review and dispute resolution.
  • Data residency controls

    • Keeps embeddings, prompts, and logs in-region where required.
    • Prevents routing member data to non-approved model endpoints.
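The retrieval metadata named above can be sketched as a TypeScript type. The field names follow the architecture notes; the union values and the example record are illustrative assumptions, not a fixed schema:

```typescript
// Illustrative metadata shape for a retrieved policy chunk.
// Field names come from the architecture notes; the union values are assumptions.
type DocumentType =
  | "scheme-rules"
  | "trustee-minutes"
  | "admin-manual"
  | "benefit-guide";

interface PolicyChunkMetadata {
  documentType: DocumentType;
  effectiveDate: string; // ISO date, e.g. "2025-01-01"
  jurisdiction: string;  // e.g. "UK"
  sourceId: string;      // versioned policy source ID for the audit trail
}

// Example: a chunk tagged so retrieval can filter by type and jurisdiction.
const exampleMeta: PolicyChunkMetadata = {
  documentType: "scheme-rules",
  effectiveDate: "2025-01-01",
  jurisdiction: "UK",
  sourceId: "dc-rules-v3",
};
```

Carrying a versioned sourceId on every chunk is what lets the audit layer tie an answer back to the exact policy text it was based on.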

Implementation

1) Install AutoGen and wire a TypeScript project

For TypeScript, use the AutoGen JS/TS package plus your preferred HTTP server. The pattern below assumes you are building a service that calls OpenAI through AutoGen’s OpenAIChatCompletionClient.

npm install @autogenai/autogen openai zod express
npm install -D typescript ts-node @types/express @types/node

Create a small config module for your model client:

// llm.ts
import { OpenAIChatCompletionClient } from "@autogenai/autogen";

export const llm = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY!,
});

2) Build the agents with strict system instructions

For pension funds, the assistant must not invent policy. It should answer only from supplied context and say when it cannot find support.

// agent.ts
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";
import { llm } from "./llm";

export const policyAgent = new AssistantAgent({
  name: "pension_policy_agent",
  modelClient: llm,
  systemMessage: `
You answer pension fund policy questions using only the provided context.
Rules:
- If the context does not contain the answer, say you cannot confirm it from current policy.
- Always cite document titles and effective dates when possible.
- Do not provide legal advice.
- Do not infer benefits eligibility beyond the source text.
- Keep responses concise and operationally useful.
`,
});

export const userProxy = new UserProxyAgent({
  name: "policy_requester",
  humanInputMode: "NEVER",
});

This is the core control point. In pension operations, “helpful guessing” is a compliance failure.

3) Add retrieval and run a real AutoGen conversation

The agent should receive only approved excerpts. In production you would replace the stubbed retrieval with your vector store or search index.

// askPolicy.ts
import { policyAgent } from "./agent";

type RetrievedChunk = {
  text: string;
  title: string;
  effectiveDate: string;
};

function buildContext(chunks: RetrievedChunk[]): string {
  return chunks
    .map(
      (c, i) =>
        `[${i + 1}] Title: ${c.title}\nEffective Date: ${c.effectiveDate}\nExcerpt: ${c.text}`
    )
    .join("\n\n");
}

export async function askPolicy(question: string) {
  const retrievedChunks: RetrievedChunk[] = [
    {
      title: "Defined Contribution Scheme Rules",
      effectiveDate: "2025-01-01",
      text: "Members may transfer out subject to trustee approval and completion of identity checks.",
    },
    {
      title: "Member Communications Guide",
      effectiveDate: "2024-10-15",
      text: "All retirement illustrations must include assumptions disclaimer and cannot guarantee future returns.",
    },
  ];

  const context = buildContext(retrievedChunks);

  const result = await policyAgent.run([
    {
      role: "user",
      content: `Question: ${question}\n\nApproved context:\n${context}`,
    },
  ]);

  return result.messages.at(-1)?.content ?? "";
}

That run() call is the actual AutoGen interaction. The important part is that the prompt contains only approved excerpts; do not let the model roam across raw file shares or uncurated PDFs.
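The guardrail layer's "force citations" rule can be enforced with a post-check on the model output. A minimal sketch, assuming answers cite excerpts with the same `[n]` markers that buildContext produces, and that refusals use the "cannot confirm" phrasing from the system message:

```typescript
// Accept an answer only if it cites at least one retrieved excerpt.
// The [n] citation format matches the numbering produced by buildContext.
function hasCitation(answer: string, chunkCount: number): boolean {
  for (let i = 1; i <= chunkCount; i++) {
    if (answer.includes(`[${i}]`)) return true;
  }
  return false;
}

// A refusal phrased per the system message may pass without a citation;
// anything else without a source should be rejected or escalated.
function acceptAnswer(answer: string, chunkCount: number): boolean {
  if (answer.toLowerCase().includes("cannot confirm")) return true;
  return hasCitation(answer, chunkCount);
}
```

Rejected answers can be retried once with a stricter reminder, or routed straight to the human-review queue.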

4) Expose it through an API endpoint with validation

Keep the endpoint narrow. Pension fund users usually need deterministic behavior more than conversational flexibility.

// server.ts
import express from "express";
import { z } from "zod";
import { askPolicy } from "./askPolicy";

const app = express();
app.use(express.json());

const bodySchema = z.object({
  question: z.string().min(10).max(1000),
});

app.post("/ask-policy", async (req, res) => {
  const parsed = bodySchema.safeParse(req.body);
  if (!parsed.success) {
    return res.status(400).json({ error: "Invalid question" });
  }

  try {
    const answer = await askPolicy(parsed.data.question);
    res.json({
      answer,
      sourceSystem: "approved-policy-index",
      timestamp: new Date().toISOString(),
    });
  } catch {
    // Model or retrieval failures should not leak internals to callers.
    res.status(502).json({ error: "Policy lookup failed" });
  }
});

app.listen(3000);

If you want stronger guardrails, add a pre-check before calling AutoGen:

  • block questions about personal financial advice,
  • block requests for protected member data,
  • route ambiguous queries to human review.
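A minimal pre-check covering those three rules might look like the sketch below. The keyword lists are illustrative assumptions; in production you would replace them with a tuned classifier, as suggested under Guardrails:

```typescript
type PreCheckResult = "allow" | "block" | "human-review";

// Keyword lists are illustrative assumptions, not a production classifier.
const ADVICE_TERMS = ["should i", "best option for me", "recommend"];
const PII_TERMS = ["national insurance", "date of birth", "account number"];

function preCheck(question: string): PreCheckResult {
  const q = question.toLowerCase();
  // Personal financial advice and protected-data requests are blocked outright.
  if (ADVICE_TERMS.some((t) => q.includes(t))) return "block";
  if (PII_TERMS.some((t) => q.includes(t))) return "block";
  // Very short or vague questions go to human review instead of the model.
  if (q.trim().split(/\s+/).length < 4) return "human-review";
  return "allow";
}
```

Run preCheck before askPolicy in the route handler, so blocked questions never reach the model at all.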

Production Considerations

  • Compliance logging

    • Store question text, retrieved document IDs, response text, model version, and timestamp.
    • Make logs immutable enough for audit review but redact member identifiers where possible.
  • Data residency

    • Keep embeddings, vector indexes, prompt logs, and backups in approved regions.
    • Verify that your LLM endpoint does not move data outside jurisdictional boundaries required by trustees or regulators.
  • Guardrails

    • Add a classifier for advice vs policy lookup so members do not get personal recommendations.
    • Return “cannot confirm from current policy” instead of generating an answer when retrieval confidence is low.
  • Monitoring

    • Track citation coverage rate, refusal rate, retrieval hit rate, and escalation-to-human rate.
    • Alert on answers without supporting sources or on spikes in questions about sensitive topics like transfers or death benefits.
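The compliance-logging fields listed above can be captured as one record per request. This is a shape sketch; the redaction helper is a deliberately simplified assumption (a single UK NI-number pattern) standing in for vetted PII-detection rules:

```typescript
interface AuditRecord {
  question: string;
  retrievedDocIds: string[];
  responseText: string;
  modelVersion: string;
  timestamp: string; // ISO 8601
}

// Simplified redaction of a common member identifier before logging.
// Assumption: a real implementation would use vetted PII-detection rules.
function redact(text: string): string {
  return text.replace(/\b[A-Z]{2}\d{6}[A-D]\b/g, "[NI-NUMBER]");
}

function buildAuditRecord(
  question: string,
  retrievedDocIds: string[],
  responseText: string,
  modelVersion: string
): AuditRecord {
  return {
    question: redact(question),
    retrievedDocIds,
    responseText: redact(responseText),
    modelVersion,
    timestamp: new Date().toISOString(),
  };
}
```

Writing these records append-only (for example, to a WORM-style store) gives you the "immutable enough for audit review" property without keeping raw member identifiers in the log.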

Common Pitfalls

  1. Letting the model answer without sources

    • This is how hallucinations enter pension workflows.
    • Fix it by requiring retrieved context in every prompt and rejecting responses that lack citations.
  2. Mixing policy content with personal member data

    • Pension systems often contain regulated personal information.
    • Keep this agent on document-only inputs unless you have explicit controls for identity verification and data minimization.
  3. Using stale scheme documents

    • Pension rules change often through trustee decisions and legislative updates.
    • Version documents by effective date and make retrieval prefer the latest approved source for each jurisdiction or scheme section.
  4. Skipping human escalation paths

    • Some questions need trustee interpretation or legal review.
    • Build a clear fallback to an operations queue when the agent cannot confirm an answer from current policy.
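Pitfall 3 can be mitigated in the retrieval layer by preferring the latest approved version of each document. A sketch, assuming every chunk carries a title and an ISO effectiveDate (which sorts correctly as a plain string):

```typescript
interface VersionedChunk {
  title: string;
  effectiveDate: string; // ISO date, so lexicographic comparison works
  text: string;
}

// Keep only the newest effectiveDate per document title.
function latestPerDocument(chunks: VersionedChunk[]): VersionedChunk[] {
  const newest = new Map<string, VersionedChunk>();
  for (const c of chunks) {
    const current = newest.get(c.title);
    if (!current || c.effectiveDate > current.effectiveDate) {
      newest.set(c.title, c);
    }
  }
  return [...newest.values()];
}
```

In a multi-jurisdiction fund, the map key would be title plus jurisdiction rather than title alone.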

By Cyprian Aarons, AI Consultant at Topiax.
