How to Build a Compliance-Checking Agent Using AutoGen in Python for Wealth Management
A compliance checking agent for wealth management reviews client-facing text, trade instructions, suitability notes, and advisor communications against firm policy and regulatory rules before anything is sent or executed. It matters because a bad recommendation, missing disclosure, or unsuitable product suggestion can create regulatory exposure, client harm, and audit findings fast.
Architecture
- Input normalizer
  - Takes raw advisor text, CRM notes, or proposed client messages.
  - Strips formatting and structures the content into a consistent payload.
- Policy retrieval layer
  - Pulls the right compliance rules for the jurisdiction, client segment, product type, and channel.
  - This is where you enforce region-specific rules like SEC/FINRA constraints or local data residency requirements.
- Compliance reasoning agent
  - Uses an LLM to classify risk, detect missing disclosures, and flag suitability issues.
  - Produces structured findings with severity and rationale.
- Audit logger
  - Stores inputs, outputs, timestamps, policy version, and reviewer decisions.
  - Needed for supervision, model governance, and post-incident review.
- Human escalation path
  - Routes high-risk cases to a compliance officer instead of auto-approving.
  - Required for wealth management workflows where false negatives are expensive.
- Approval gate
  - Returns one of: approve, reject, or escalate.
  - Keeps the agent useful without letting it become the final authority on regulated decisions.
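The normalizer's output can be sketched as a small payload object that the rest of the pipeline consumes. This is a minimal sketch; the field names and `normalize` helper are illustrative, not part of AutoGen:

```python
from dataclasses import dataclass, field

# Hypothetical payload shape for the input normalizer.
# Field names are illustrative, not an AutoGen construct.
@dataclass
class ReviewPayload:
    text: str              # cleaned advisor text
    channel: str           # e.g. "email", "chat"
    jurisdiction: str      # drives which rule bundle is retrieved
    client_segment: str    # e.g. "retail", "accredited"
    metadata: dict = field(default_factory=dict)

def normalize(raw_text: str, channel: str,
              jurisdiction: str, client_segment: str) -> ReviewPayload:
    # Collapse whitespace; a real normalizer would also strip HTML,
    # signatures, and quoted reply chains.
    cleaned = " ".join(raw_text.split())
    return ReviewPayload(cleaned, channel, jurisdiction, client_segment)

payload = normalize("  We recommend   rebalancing. ", "email", "US", "retail")
print(payload.text)  # We recommend rebalancing.
```

Carrying jurisdiction and segment on the payload is what lets the policy retrieval layer pick the right rule bundle later.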
Implementation
1) Install AutoGen and define the compliance schema
For this pattern, use AutoGen’s AssistantAgent plus a small wrapper that forces structured output. In production you want deterministic fields like decision, issues, and required_actions, not free-form prose.
```python
from typing import List, Literal

from pydantic import BaseModel


class ComplianceFinding(BaseModel):
    rule_id: str
    severity: Literal["low", "medium", "high"]
    issue: str
    remediation: str


class ComplianceResult(BaseModel):
    decision: Literal["approve", "reject", "escalate"]
    summary: str
    findings: List[ComplianceFinding]
```
2) Create the compliance agent with AutoGen
This example uses autogen.AssistantAgent. The key point is to constrain behavior in the system message and keep the agent focused on review only.
```python
import os

import autogen

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    ],
    "temperature": 0,
}

compliance_agent = autogen.AssistantAgent(
    name="compliance_agent",
    llm_config=llm_config,
    system_message=(
        "You are a wealth management compliance reviewer. "
        "Check client-facing text for suitability issues, missing disclosures, "
        "prohibited promises, misleading performance claims, and policy violations. "
        "Return concise findings with a final decision: approve, reject, or escalate."
    ),
)
```
3) Send a review request and parse the result
AutoGen’s generate_reply() returns the model response. In practice you wrap that response in validation logic so your downstream workflow only accepts structured output.
```python
import json

review_request = """
Review this draft message:
'We recommend moving 40% of your portfolio into private credit.
This strategy will outperform public markets and is suitable for all clients.
Past performance guarantees future returns.'
"""

reply = compliance_agent.generate_reply(
    messages=[{"role": "user", "content": review_request}]
)
print(reply)
```
For production use, pair that with a strict parser:
```python
def parse_compliance_result(raw_text: str) -> ComplianceResult:
    data = json.loads(raw_text)
    return ComplianceResult.model_validate(data)
```
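Model replies often arrive wrapped in a markdown fence or with stray prose, so in practice the strict parser needs a tolerant front end that fails closed. A minimal sketch; the fence-stripping heuristic and the escalate fallback are assumptions about typical model output, not AutoGen behavior:

```python
import json

def extract_json(raw_text: str) -> dict:
    # Tolerate replies wrapped in a markdown fence such as ```json ... ```
    text = raw_text.strip()
    if text.startswith("```"):
        text = text.split("```")[1]       # take the fenced body
        if text.startswith("json"):
            text = text[len("json"):]
    return json.loads(text)

def safe_decision(raw_text: str) -> str:
    # Fail closed: anything unparseable goes to a human, never auto-approve.
    try:
        return extract_json(raw_text).get("decision", "escalate")
    except (json.JSONDecodeError, IndexError):
        return "escalate"

print(safe_decision('```json\n{"decision": "approve"}\n```'))  # approve
print(safe_decision("not json at all"))                        # escalate
```

The key design choice is the default: a parse failure maps to escalate, not approve, so malformed output can never slip through the gate.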
If you want stronger control over the interaction loop, use UserProxyAgent to orchestrate handoff between your app and the reviewer.
```python
user_proxy = autogen.UserProxyAgent(
    name="workflow_orchestrator",
    human_input_mode="NEVER",
    code_execution_config=False,  # review-only flow; no local code execution
)

result = user_proxy.initiate_chat(
    compliance_agent,
    message=review_request,
)
```
4) Add routing logic for approve/reject/escalate
The real pattern is not “LLM says yes/no.” It is “LLM classifies risk; workflow enforces policy.”
```python
def route_decision(result: ComplianceResult):
    if result.decision == "approve":
        return {"status": "approved"}
    if result.decision == "reject":
        return {"status": "blocked", "reason": result.summary}
    return {
        "status": "needs_human_review",
        "findings": [f.model_dump() for f in result.findings],
    }
```
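Putting the parser and router together, a hand-written model reply flows through like this. The rule ID and wording below are illustrative, not output from a real model call:

```python
import json
from typing import List, Literal

from pydantic import BaseModel

class ComplianceFinding(BaseModel):
    rule_id: str
    severity: Literal["low", "medium", "high"]
    issue: str
    remediation: str

class ComplianceResult(BaseModel):
    decision: Literal["approve", "reject", "escalate"]
    summary: str
    findings: List[ComplianceFinding]

def route_decision(result: ComplianceResult):
    if result.decision == "approve":
        return {"status": "approved"}
    if result.decision == "reject":
        return {"status": "blocked", "reason": result.summary}
    return {"status": "needs_human_review",
            "findings": [f.model_dump() for f in result.findings]}

# Stand-in for a model reply; in the live flow this comes from the agent.
raw = json.dumps({
    "decision": "escalate",
    "summary": "Allocation may exceed the client's risk tolerance.",
    "findings": [{"rule_id": "SUIT-002", "severity": "medium",
                  "issue": "40% single-asset-class shift proposed.",
                  "remediation": "Confirm against the client's risk profile."}],
})
outcome = route_decision(ComplianceResult.model_validate(json.loads(raw)))
print(outcome["status"])  # needs_human_review
```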
A practical wealth management rule set might look like this:
| Rule type | Example trigger | Action |
|---|---|---|
| Suitability | Product not aligned to client risk profile | Escalate |
| Performance claims | “Guaranteed outperformance” | Reject |
| Disclosure | Missing fee/risk language | Reject |
| Jurisdiction | Data sent outside approved region | Escalate |
| Recordkeeping | Missing audit metadata | Block until fixed |
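Rows like the performance-claims reject can also be enforced deterministically, independent of the LLM. A sketch with illustrative regex patterns and rule IDs (not a real rule library), where a hard rule can tighten the LLM's decision but never loosen it:

```python
import re

# Deterministic overrides that run regardless of what the LLM says.
# Patterns and rule IDs are illustrative, not a production rule set.
HARD_REJECT_PATTERNS = [
    (re.compile(r"guarantee[sd]?\b.*(return|outperform)", re.IGNORECASE), "PERF-001"),
    (re.compile(r"suitable for all clients", re.IGNORECASE), "SUIT-001"),
]

def apply_hard_rules(text: str, llm_decision: str) -> str:
    # A hard rule can only tighten a decision, never loosen one.
    for pattern, _rule_id in HARD_REJECT_PATTERNS:
        if pattern.search(text):
            return "reject"
    return llm_decision

print(apply_hard_rules("Past performance guarantees future returns.", "approve"))  # reject
print(apply_hard_rules("Fees are described in the attached schedule.", "approve")) # approve
```

Running these checks alongside the LLM gives you consistent handling of the worst phrases even when the model's judgment varies.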
Production Considerations
- Keep policy data versioned
  - Store every rule bundle with a version ID.
  - When compliance asks why something passed last quarter but fails now, you need an exact policy snapshot.
- Log everything required for audit
  - Persist prompt input, model output, policy version, user ID, timestamp, decision path, and human override.
  - Wealth management audits care about traceability more than clever prompts.
- Enforce data residency before inference
  - Route EU client records to EU-hosted infrastructure if required.
  - Do not send sensitive client data to a model endpoint that violates regional storage or transfer rules.
- Use hard guardrails around final actions
  - The agent can recommend; it should not publish client communications or approve trades by itself.
  - High-risk categories like private placements, margin products, tax advice wording, and cross-border solicitation should always require human sign-off.
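As a sketch, an audit record covering those fields might look like this. The field names are illustrative; hashing the reviewed text is one way to keep log stores tamper-evident without copying sensitive client content into every system:

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def audit_record(payload_text: str, decision: str, policy_version: str,
                 user_id: str, override: Optional[str] = None) -> dict:
    # Hash the reviewed text so the record is verifiable later without
    # duplicating client content across log stores.
    return {
        "input_sha256": hashlib.sha256(payload_text.encode()).hexdigest(),
        "decision": decision,
        "policy_version": policy_version,
        "user_id": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "human_override": override,
    }

record = audit_record("Draft client email...", "escalate",
                      "rules-2024-Q3", "advisor-123")
print(json.dumps(record, indent=2))
```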
Common Pitfalls
- Treating the LLM as the policy engine
  - Don’t ask it to invent rules from scratch.
  - Feed it explicit policy text or retrieved rule snippets; otherwise you get inconsistent decisions across similar cases.
- Letting free-form output drive automation
  - If your workflow depends on “looks okay,” you will ship brittle code.
  - Force structured fields with validation before any approval or escalation step runs.
- Ignoring jurisdiction and client segment context
  - A message acceptable for an accredited investor in one region may be non-compliant elsewhere.
  - Pass in jurisdiction, account type, product class, and channel every time; otherwise your agent is blind to the actual rule set.
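One way to make that context unavoidable is to build every review request through a template that requires it. A sketch; the wording and parameter names are illustrative:

```python
# Every review request carries jurisdiction and segment context.
# The template wording and parameter names are illustrative.
def build_review_request(text: str, jurisdiction: str, account_type: str,
                         product_class: str, channel: str) -> str:
    return (
        f"Jurisdiction: {jurisdiction}\n"
        f"Account type: {account_type}\n"
        f"Product class: {product_class}\n"
        f"Channel: {channel}\n\n"
        f"Review this draft message:\n{text}"
    )

req = build_review_request(
    "We recommend a 10% allocation to private credit.",
    jurisdiction="US",
    account_type="retail",
    product_class="private_credit",
    channel="email",
)
print(req.splitlines()[0])  # Jurisdiction: US
```

Because the parameters are required, a caller cannot submit text for review without stating whose rules apply.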
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.