How to Build a Policy Q&A Agent Using CrewAI in TypeScript for Wealth Management
A policy Q&A agent for wealth management answers internal policy questions like “Can this client receive discretionary advice?” or “What’s the escalation path for a politically exposed person?” It matters because advisors and operations teams need fast, consistent answers without guessing, and every response has to stay inside compliance boundaries, be auditable, and respect data residency rules.
Architecture
- **Policy knowledge source**
  - Internal policy docs, suitability rules, KYC/AML procedures, product constraints, and regional regulations.
  - Store these in a retrievable format with metadata like jurisdiction, effective date, and document owner.
- **Retriever tool**
  - Pulls the most relevant policy snippets before the agent answers.
  - Must filter by region and business line so a UK private banking policy doesn’t bleed into an APAC response.
- **Policy analyst agent**
  - Reads the retrieved context and produces a concise answer with citations.
  - Should never invent policy; if the evidence is weak, it should escalate.
- **Compliance checker**
  - Validates the answer for prohibited advice, missing disclaimers, or unsupported claims.
  - In wealth management, this is where you catch language that looks like personalized investment advice.
- **Audit logger**
  - Persists the question, retrieved sources, final answer, timestamps, user identity, and model version.
  - This is non-negotiable for supervision and post-trade review workflows.
- **Access control layer**
  - Enforces role-based access to policies by desk, region, and advisor seniority.
  - Some policies are internal-only and should not be exposed across teams.
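The retriever’s region and business-line filtering can be sketched as a pure function. The `PolicyDoc` shape and `filterPolicies` helper below are illustrative assumptions, not a CrewAI API:

```typescript
// Illustrative policy metadata shape; field names are assumptions.
type PolicyDoc = {
  id: string;
  title: string;
  jurisdiction: string; // e.g. "UK", "SG"
  businessLine: string; // e.g. "private-banking"
  effectiveDate: string; // ISO date
  owner: string;
};

// Keep only documents matching the caller's region and business line,
// so a UK private banking policy never leaks into an APAC response.
function filterPolicies(
  docs: PolicyDoc[],
  jurisdiction: string,
  businessLine: string
): PolicyDoc[] {
  return docs.filter(
    (d) => d.jurisdiction === jurisdiction && d.businessLine === businessLine
  );
}
```

In production this filter belongs inside the retrieval service itself, so the agent can never see out-of-scope documents even if the prompt is manipulated.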
Implementation
1) Install CrewAI for TypeScript and define your policy tools
You want the agent to retrieve policy text from your own system rather than rely on model memory. In TypeScript with CrewAI, define tools for search and audit logging first.
```typescript
import { Agent, Crew, Task, Tool } from "@crewai/core";

type PolicyHit = {
  id: string;
  title: string;
  jurisdiction: string;
  content: string;
};

const searchPolicyTool = new Tool({
  name: "search_policy",
  description:
    "Search approved wealth management policy documents by query and jurisdiction.",
  func: async (input: string) => {
    const payload = JSON.parse(input) as { query: string; jurisdiction: string };
    const results: PolicyHit[] = await fetch(
      `https://policy-api.internal/search?q=${encodeURIComponent(
        payload.query
      )}&jurisdiction=${encodeURIComponent(payload.jurisdiction)}`
    ).then((r) => r.json());
    // Cap the context handed to the agent at the top five hits.
    return JSON.stringify(results.slice(0, 5));
  },
});

const auditTool = new Tool({
  name: "audit_log",
  description: "Write an immutable audit event for policy Q&A.",
  func: async (input: string) => {
    await fetch("https://audit-api.internal/events", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: input,
    });
    return "ok";
  },
});
```
2) Create a policy agent with strict instructions
The key is to constrain the agent to summarize policy only. For wealth management, tell it to cite sources, avoid personalized advice, and escalate ambiguity.
```typescript
const policyAgent = new Agent({
  role: "Wealth Management Policy Analyst",
  goal: "Answer internal policy questions using only approved policy sources.",
  backstory:
    "You support advisors and operations teams in a regulated wealth management environment.",
  verbose: true,
  allowDelegation: false,
  tools: [searchPolicyTool, auditTool],
});
```
3) Build the task with compliance-safe output requirements
Make the task explicit about what good output looks like. The model should return a direct answer plus citations and an escalation flag when needed.
```typescript
const qnaTask = new Task({
  description: `
Answer the user's question using only approved policies.
Question: {question}
Jurisdiction: {jurisdiction}
Rules:
- Use search_policy first.
- Cite policy IDs in the answer.
- If no clear rule exists, say "Escalate to Compliance".
- Do not provide investment recommendations.
- Do not infer beyond source text.
`,
  expectedOutput:
    "A short answer with citations, confidence level, and escalation guidance.",
});
```
4) Run the crew and persist the result
This pattern gives you one controlled agent flow. After execution, write both the prompt context and final response into your audit store.
```typescript
async function answerPolicyQuestion(question: string, jurisdiction: string) {
  const crew = new Crew({
    agents: [policyAgent],
    tasks: [qnaTask],
    verbose: true,
  });

  const result = await crew.kickoff({
    inputs: { question, jurisdiction },
  });

  // Persist the full evidence chain alongside the final answer.
  await auditTool.func(
    JSON.stringify({
      eventType: "policy_qna",
      question,
      jurisdiction,
      answer: String(result),
      timestamp: new Date().toISOString(),
      modelVersion: "crewai-ts-policy-agent-v1",
    })
  );

  return result;
}

answerPolicyQuestion(
  "Can an advisor recommend leveraged ETFs to a discretionary client?",
  "UK"
)
  .then(console.log)
  .catch(console.error);
```
Production Considerations
- **Deploy in-region**
  - Keep retrieval, inference gateways, logs, and vector stores inside the required data residency boundary.
  - For wealth management clients in regulated jurisdictions, cross-border prompt routing can become a legal issue fast.
- **Add deterministic guardrails before response delivery**
  - Run a rules engine after CrewAI returns text.
  - Block phrases that look like personalized investment advice unless they are explicitly allowed by policy.
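A deterministic guardrail pass can be a pure function run on the agent’s text before delivery. The blocked-phrase patterns below are illustrative assumptions; a real deployment would load compliance-approved patterns from config:

```typescript
// Illustrative blocked-phrase patterns; not an exhaustive compliance list.
const BLOCKED_PATTERNS: RegExp[] = [
  /\byou should (buy|sell|invest in)\b/i,
  /\bI recommend (buying|selling)\b/i,
  /\bguaranteed returns?\b/i,
];

type GuardrailResult = { allowed: boolean; violations: string[] };

// Deterministic post-processing: no model involved, so the same
// answer always produces the same verdict.
function checkGuardrails(answer: string): GuardrailResult {
  const violations = BLOCKED_PATTERNS.filter((p) => p.test(answer)).map(
    (p) => p.source
  );
  return { allowed: violations.length === 0, violations };
}
```

Because the check is deterministic, it can be unit-tested and signed off by compliance independently of any model change.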
- **Log everything needed for supervision**
| What to log | Why it matters |
|---|---|
| User identity / role | Prove access was appropriate |
| Question text | Reconstruct intent |
| Retrieved policies | Show evidence basis |
| Final answer | Evidence for supervisory review |
| Model version | Reproduce behavior |
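The rows in this table map onto a single audit event type. The field names below are assumptions for a generic audit store, not a fixed schema:

```typescript
// One event shape covering every row of the logging table.
type PolicyQnaAuditEvent = {
  userId: string;
  userRole: string;
  question: string;
  retrievedPolicyIds: string[];
  answer: string;
  modelVersion: string;
  timestamp: string; // ISO 8601, set at write time
};

// Stamp the event at write time so callers cannot backdate it.
function buildAuditEvent(
  input: Omit<PolicyQnaAuditEvent, "timestamp">
): PolicyQnaAuditEvent {
  return { ...input, timestamp: new Date().toISOString() };
}
```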
- **Monitor drift by jurisdiction**
| Signal | What it tells you |
|---|---|
| High escalation rate | Policy gaps or weak retrieval |
| Low citation coverage | Agent is answering without evidence |
| Region mismatch hits | Retrieval filtering is broken |
| Repeated compliance rejections | Prompt or tool design needs tightening |
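The first two signals in the table can be computed directly from logged answers. The `AnswerRecord` shape and `driftMetrics` helper are hypothetical; alerting thresholds are left to the operator:

```typescript
// Minimal record per answered question, derived from the audit log.
type AnswerRecord = {
  jurisdiction: string;
  escalated: boolean;
  citationCount: number;
};

type DriftMetrics = { escalationRate: number; citationCoverage: number };

// Aggregate per-jurisdiction signals: a rising escalation rate suggests
// policy gaps or weak retrieval; low citation coverage means the agent
// is answering without evidence.
function driftMetrics(
  records: AnswerRecord[],
  jurisdiction: string
): DriftMetrics {
  const scoped = records.filter((r) => r.jurisdiction === jurisdiction);
  if (scoped.length === 0) return { escalationRate: 0, citationCoverage: 0 };
  const escalated = scoped.filter((r) => r.escalated).length;
  const cited = scoped.filter((r) => r.citationCount > 0).length;
  return {
    escalationRate: escalated / scoped.length,
    citationCoverage: cited / scoped.length,
  };
}
```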
Common Pitfalls
- **Letting the agent answer from general knowledge**
  If you don’t force retrieval-first behavior, it will fill gaps with plausible but wrong statements. Fix this by requiring `search_policy` before any final response and rejecting uncited answers.
- **Ignoring jurisdiction filters**
  Wealth management policies vary by country, booking center, and client segment. Always pass jurisdiction into retrieval and validate that returned documents match it before generating an answer.
- **Skipping auditability**
  If you only store the final answer, you lose the evidence chain. Log the original question, retrieved sources, tool calls, timestamps, user role, and model version so compliance can replay decisions later.
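The “reject uncited answers” fix from the first pitfall can be enforced with a simple citation check before the answer leaves the system. The `POL-123`-style ID format is an assumption about your document store:

```typescript
// Pull every policy ID cited in the answer text.
// Assumes IDs look like "POL-123"; adjust the pattern to your store.
function extractCitations(answer: string): string[] {
  return answer.match(/\bPOL-\d+\b/g) ?? [];
}

// An answer with zero citations is rejected rather than delivered,
// forcing the agent back to retrieval or escalation.
function validateAnswer(answer: string): { ok: boolean; citations: string[] } {
  const citations = extractCitations(answer);
  return { ok: citations.length > 0, citations };
}
```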
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit