How to Build a Policy Q&A Agent Using AutoGen in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
Tags: policy-q-a, autogen, typescript, wealth-management, policy-qanda

A policy Q&A agent for wealth management answers questions about internal policies, product rules, suitability constraints, and client handling procedures. It matters because advisors and operations teams need fast answers without guessing, while the firm still has to preserve compliance, auditability, and jurisdiction-specific controls.

Architecture

  • User interface layer
    • A chat UI or internal tool where advisors ask questions like “Can this client be offered discretionary management?” or “What’s the policy for margin on retirement accounts?”
  • Policy retrieval layer
    • A retrieval component that pulls from approved sources only: compliance manuals, product sheets, escalation runbooks, and jurisdictional policy docs.
  • AutoGen agent layer
    • An AssistantAgent that reasons over the retrieved context and produces a grounded answer.
  • Tooling layer
    • Tools for document lookup, policy citation extraction, and escalation routing when the answer is ambiguous or restricted.
  • Audit and logging layer
    • Structured logs capturing the question, retrieved sources, model output, citations, and final disposition for review.
  • Guardrail layer
    • Hard checks for PII leakage, unsupported advice, residency restrictions, and mandatory escalation triggers.
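The guardrail layer above can be sketched as a small contract of pre-generation checks. This is a minimal illustration with hypothetical names and a toy residency rule, not an AutoGen API:

```typescript
// Result of a single pre-generation guardrail check (illustrative shape).
interface GuardrailResult {
  allowed: boolean;
  reason?: string;
  mustEscalate?: boolean;
}

// A guardrail check inspects the question and caller before any model call.
type GuardrailCheck = (question: string, userId: string) => GuardrailResult;

// Toy example: route cross-border questions straight to escalation.
const residencyCheck: GuardrailCheck = (question) => {
  const crossBorder = /cross-border|offshore/i.test(question);
  return crossBorder
    ? { allowed: false, reason: "cross-border question", mustEscalate: true }
    : { allowed: true };
};
```

In practice you would run an ordered array of such checks and short-circuit on the first failure.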

Implementation

1) Install AutoGen for TypeScript and define your policy tools

Use AutoGen’s TypeScript package and expose only approved retrieval functions. For wealth management, your tool boundary matters more than model choice because you need deterministic access to controlled policy content.

npm install @autogenai/autogen openai zod

import { AssistantAgent } from "@autogenai/autogen";
import OpenAI from "openai";

const openaiClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

type PolicyHit = {
  id: string;
  title: string;
  source: string;
  excerpt: string;
};

const POLICY_DOCS: PolicyHit[] = [
  {
    id: "wm-001",
    title: "Suitability Policy",
    source: "compliance/suitability.md",
    excerpt:
      "Advisors must not recommend products unless they are suitable for the client's risk profile, liquidity needs, tax status, and investment objectives.",
  },
  {
    id: "wm-002",
    title: "Margin Restrictions",
    source: "trading/margin.md",
    excerpt:
      "Margin is prohibited in retirement accounts unless explicitly approved by Compliance and permitted by local regulation.",
  },
];

async function searchPolicy(query: string): Promise<PolicyHit[]> {
  // Match individual query terms; matching the full question as a single
  // substring would return no hits for natural-language questions.
  const terms = query
    .toLowerCase()
    .split(/\W+/)
    .filter((term) => term.length > 3);
  return POLICY_DOCS.filter((doc) => {
    const haystack = `${doc.title} ${doc.excerpt}`.toLowerCase();
    return terms.some((term) => haystack.includes(term));
  });
}

2) Create an AssistantAgent with strict system instructions

The agent should answer only from retrieved policy context. In wealth management, that means no speculation, no product recommendation outside policy text, and explicit escalation when the policy is unclear.

const policyAgent = new AssistantAgent({
  name: "policy_qna_agent",
  modelClient: openaiClient,
  systemMessage: `
You are a policy Q&A assistant for a wealth management firm.

Rules:
- Answer only using provided policy context.
- If the context is insufficient, say you cannot determine the answer and recommend escalation.
- Do not provide investment advice or product recommendations.
- Include source references in every answer.
- Flag compliance-sensitive cases involving suitability, retirement accounts, margin, KYC/AML, tax status, or cross-border restrictions.
`.trim(),
});

3) Wire retrieval into the prompt before calling run

A practical pattern is retrieve-first, then ask the agent to summarize with citations. This keeps the model inside a narrow factual envelope.

async function answerPolicyQuestion(question: string) {
  const hits = await searchPolicy(question);

  const context =
    hits.length > 0
      ? hits
          .map(
            (hit) =>
              `SOURCE: ${hit.title} (${hit.source})\nID: ${hit.id}\nEXCERPT: ${hit.excerpt}`
          )
          .join("\n\n")
      : "No relevant policy documents found.";

  const prompt = `
Question:
${question}

Policy context:
${context}

Instructions:
- Give a concise answer.
- Cite any source IDs used.
- If there is no clear match in context, say escalation is required.
`.trim();

  const result = await policyAgent.run(prompt);
  return result;
}

4) Add an audit trail around every response

Wealth management teams need replayable decisions. Store input question, matched sources, output text, timestamp, user identity, and any escalation flag in an immutable log or SIEM target.

async function handleQuestion(question: string, userId: string) {
  const startedAt = new Date().toISOString();
  const result = await answerPolicyQuestion(question);

  const auditRecord = {
    userId,
    question,
    startedAt,
    finishedAt: new Date().toISOString(),
    response: String(result),
    dataResidencyRegion: process.env.DATA_REGION ?? "unknown",
    channel: "advisor-assist",
  };

  console.log(JSON.stringify(auditRecord));
  return result;
}

handleQuestion(
  "Can a retirement account use margin if approved by local compliance?",
  "advisor_1042"
).then(console.log);
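The audit layer earlier calls for an escalation flag, but the record above never sets one. One hedged approach is to key off the refusal phrasing the system prompt mandates; the marker strings below are assumptions tied to that prompt wording:

```typescript
// Phrases the system prompt instructs the agent to use when it cannot
// answer from context. These markers are assumptions tied to that prompt.
const ESCALATION_MARKERS = ["escalation is required", "cannot determine"];

// True when the model's answer signals it could not resolve the policy.
function needsEscalation(responseText: string): boolean {
  const lower = responseText.toLowerCase();
  return ESCALATION_MARKERS.some((marker) => lower.includes(marker));
}
```

You could then add `escalated: needsEscalation(String(result))` to the audit record so reviewers can filter for unresolved questions.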

Production Considerations

  • Deploy close to your data boundary
    • Keep retrieval indexes and logs in-region if your firm has data residency requirements. For cross-border teams, separate policies by jurisdiction instead of using one global corpus.
  • Monitor for compliance drift
    • Track unanswered questions, escalation rates, citations used per response, and prompts that trigger “cannot determine.” A spike usually means stale policies or weak retrieval coverage.
  • Add hard guardrails before generation
    • Block PII-heavy prompts unless the caller is authorized. Redact account numbers, tax IDs, and full client identifiers before sending text to the model.
  • Keep humans in the loop for edge cases
    • Anything touching suitability exceptions, retirement-account trading rules, AML concerns, or regulatory interpretation should route to compliance review rather than auto-answering.
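The redaction guardrail described above can be sketched with a few patterns. These regexes are illustrative only; a real deployment should use a vetted PII detector rather than hand-rolled rules:

```typescript
// Illustrative redaction patterns; order matters so tax-ID formats are
// consumed before the bare account-number pattern can partially match them.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[TAX_ID]"], // US SSN-style tax IDs
  [/\b\d{8,17}\b/g, "[ACCOUNT_NUMBER]"], // bare account numbers
  [/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "[EMAIL]"],
];

// Replace each sensitive pattern with a placeholder token before inference.
function redactPII(text: string): string {
  return REDACTIONS.reduce(
    (acc, [pattern, token]) => acc.replace(pattern, token),
    text
  );
}
```

Run this on the question before retrieval and on anything written to logs, so neither the model nor the audit trail sees raw identifiers.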

Common Pitfalls

  1. Letting the model answer without retrieved policy context

    • This turns your agent into a general chatbot. Fix it by making retrieval mandatory and refusing to answer when no relevant source is found.
  2. Mixing jurisdictions in one knowledge base

    • Wealth management policies vary by country and entity. Split corpora by region and include jurisdiction metadata in every document hit.
  3. Logging raw client data into prompts or traces

    • That creates unnecessary privacy risk. Redact sensitive fields before inference and store only what audit actually needs.
  4. Treating citations as decoration

    • If answers don’t cite source IDs or document paths consistently, reviewers cannot validate them. Make citations part of your response contract and reject uncited outputs in downstream workflows.
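Pitfall 4 can be enforced mechanically. Here is a sketch of a citation contract that rejects uncited answers, assuming the wm-### ID format used in the sample corpus:

```typescript
// Source IDs in the sample corpus follow a wm-### pattern; adjust for yours.
const SOURCE_ID_PATTERN = /\bwm-\d{3}\b/g;

// Pull the unique source IDs mentioned anywhere in the answer text.
function extractCitations(answer: string): string[] {
  return Array.from(new Set(answer.match(SOURCE_ID_PATTERN) ?? []));
}

// Reject any answer that cites no known source; return the valid citations.
function enforceCitationContract(answer: string, knownIds: string[]): string[] {
  const cited = extractCitations(answer).filter((id) => knownIds.includes(id));
  if (cited.length === 0) {
    throw new Error("Uncited answer rejected: no known source IDs found.");
  }
  return cited;
}
```

Calling this before returning a response to the advisor turns the citation rule from a prompt suggestion into a hard contract.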

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
