How to Build a Compliance-Checking Agent Using CrewAI in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
Tags: compliance-checking, crewai, python, wealth-management

A compliance checking agent in wealth management reviews client communications, portfolio actions, and advisory notes against policy rules before anything is sent or executed. It matters because a single unsuitable recommendation, missing disclosure, or jurisdiction breach can create regulatory exposure, client harm, and audit headaches.

Architecture

  • Policy ingestion layer

    • Loads suitability rules, restricted lists, disclosure requirements, and jurisdiction-specific constraints.
    • In practice, this should come from versioned documents or a compliance rules service, not hardcoded text.
  • Compliance analyst agent

    • Evaluates a proposed action or message against policy.
    • Produces a structured finding (pass, review, or block) plus reasons and policy citations.
  • Tooling layer

    • Gives the agent access to approved sources such as product restrictions, KYC profile summaries, and house policy snippets.
    • Keep tool outputs deterministic and small.
  • Task orchestration

    • Uses CrewAI Agent, Task, and Crew objects to run checks consistently.
    • Each task should map to one compliance decision point.
  • Audit output store

    • Persists the input, decision, rationale, policy version, model version, and timestamp.
    • This is what your compliance team will need during reviews.
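The audit output store is easiest to reason about as one flat record per decision. A minimal stdlib sketch; the field names and placeholder values here are illustrative assumptions, not a mandated schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    # Field names are illustrative, not a fixed schema.
    input_payload: str
    decision: str          # "pass" | "review" | "block"
    rationale: str
    policy_version: str
    model_version: str
    timestamp: str

record = AuditRecord(
    input_payload="Recommend leveraged crypto ETF to conservative client",
    decision="block",
    rationale="High-risk product unsuitable for conservative profile",
    policy_version="2026-04",
    model_version="model-v1",  # hypothetical placeholder
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

Writing one record per decision, keyed by policy and model version, is what lets compliance replay a call months later.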

Implementation

1. Install and define the compliance scope

For wealth management, start with a narrow scope: suitability checks, restricted products, disclosure presence, and jurisdiction flags. Do not try to solve all compliance in one pass.

pip install crewai crewai-tools pydantic
from pydantic import BaseModel
from typing import Literal

class ComplianceResult(BaseModel):
    decision: Literal["pass", "review", "block"]
    rationale: str
    citations: list[str]
    risk_flags: list[str]

2. Build a deterministic tool for policy lookup

CrewAI agents work better when they can query approved policy snippets instead of hallucinating from memory. In wealth management, this should point to versioned policy content with residency-aware storage if required by your regulator.

from crewai_tools import tool

POLICY_DB = {
    "suitability": "Advisers must not recommend high-risk products to conservative clients.",
    "disclosure": "All fee-bearing recommendations must include a fee disclosure statement.",
    "restricted_products": "Do not recommend products on the restricted list.",
}

@tool("get_policy_snippet")
def get_policy_snippet(topic: str) -> str:
    """Return an approved policy snippet by topic."""
    return POLICY_DB.get(topic.lower(), "Policy topic not found.")

3. Create the agent and task with CrewAI

Use a focused agent with one job: check the proposed action against policy and return structured output. Keep the instructions explicit so it behaves like a reviewer, not a generic assistant.

from crewai import Agent, Task, Crew, Process

compliance_agent = Agent(
    role="Wealth Management Compliance Reviewer",
    goal="Review client-facing recommendations for suitability and disclosure issues.",
    backstory=(
        "You review advisory actions for wealth management firms. "
        "You must identify policy breaches, missing disclosures, and jurisdiction risks."
    ),
    tools=[get_policy_snippet],
    verbose=True,
)

review_task = Task(
    description=(
        "Review the following recommendation for compliance:\n"
        "- Client profile: conservative risk tolerance\n"
        "- Jurisdiction: UK\n"
        "- Proposed action: recommend leveraged crypto ETF\n"
        "- Disclosure text: 'This product may fluctuate.'\n\n"
        "Check suitability, restricted products, and disclosure completeness. "
        "Return a concise assessment."
    ),
    expected_output="A compliance assessment with decision, rationale, citations, and risk flags.",
    agent=compliance_agent,
)

4. Run the crew and enforce structured parsing

CrewAI returns text by default. In production you should parse that output into a schema before passing it downstream or storing it in an audit log.

import json

crew = Crew(
    agents=[compliance_agent],
    tasks=[review_task],
    process=Process.sequential,
)

result = crew.kickoff()

print(result)

# Example post-processing pattern:
from datetime import datetime, timezone

raw_text = str(result)
audit_record = {
    "policy_version": "2026-04",
    "model_output": raw_text,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(audit_record, indent=2))

If you want stronger structure control, make the task ask for JSON only and validate it with Pydantic before any execution step. That is the pattern I use when building review gates for order routing or outbound advice.
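A minimal fail-closed parser for that JSON-only pattern, using stdlib json for illustration (in a real pipeline you would validate against the ComplianceResult model from step 1). The function name and fallback behavior are assumptions:

```python
import json

ALLOWED_DECISIONS = {"pass", "review", "block"}

def parse_compliance_output(raw_text: str) -> dict:
    """Parse agent output; fail closed to 'review' on anything malformed."""
    try:
        data = json.loads(raw_text)
    except json.JSONDecodeError:
        return {
            "decision": "review",
            "rationale": "Model output was not valid JSON",
            "citations": [],
            "risk_flags": ["unparseable_output"],
        }
    if data.get("decision") not in ALLOWED_DECISIONS:
        data["decision"] = "review"  # never let an unknown verdict pass
    return data

print(parse_compliance_output("not json")["decision"])          # → review
print(parse_compliance_output('{"decision": "block"}')["decision"])  # → block
```

The key design choice is failing closed: anything the gate cannot parse or recognize defaults to human review, never to pass.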

Production Considerations

  • Deploy as a pre-execution gate

    • Put this agent between advisory generation and client delivery.
    • For trade recommendations, block execution until the agent returns pass or human review clears it.
  • Log everything needed for audit

    • Store input payloads, policy version IDs, timestamps, reviewer identity if human override happens, and final decision.
    • Wealth management audits often require reconstructing why something was allowed months later.
  • Respect data residency

    • Client profiles may contain regulated personal data.
    • Keep prompts and tool data inside approved regions; do not send raw PII to external systems unless your legal posture allows it.
  • Add hard guardrails outside the LLM

    • Use deterministic checks for restricted lists, blacklist securities, country restrictions, and mandatory disclosure templates.
    • Let the agent explain decisions; do not let it be the only enforcement layer.
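Those hard guardrails can sit in front of the agent as plain code. A sketch with illustrative product names and region rules; in production these sets would come from your restricted-list service:

```python
# Deterministic pre-checks that run before any LLM call.
RESTRICTED_PRODUCTS = {"leveraged crypto ETF"}   # illustrative restricted list
REGION_BLOCKS = {"US": {"offshore bond"}}        # illustrative per-jurisdiction bans

def hard_guardrail(product: str, jurisdiction: str):
    """Return a block reason, or None to fall through to agent review."""
    if product in RESTRICTED_PRODUCTS:
        return "block: product is on the restricted list"
    if product in REGION_BLOCKS.get(jurisdiction, set()):
        return f"block: product not permitted in {jurisdiction}"
    return None

print(hard_guardrail("leveraged crypto ETF", "UK"))  # → block: product is on the restricted list
print(hard_guardrail("index fund", "UK"))            # → None
```

Only actions that clear these checks should ever reach the CrewAI reviewer; the agent explains nuanced cases, the code enforces the absolutes.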

Common Pitfalls

  • Using the agent as the source of truth

    • Mistake: asking the model to decide everything from scratch.
    • Fix: combine CrewAI with rule-based checks for non-negotiable controls like restricted securities or mandatory disclosures.
  • Letting outputs stay unstructured

    • Mistake: accepting free-form text in downstream systems.
    • Fix: require JSON-like outputs and validate them before storage or action.
  • Ignoring jurisdiction-specific policy

    • Mistake: applying one global compliance prompt across all clients.
    • Fix: inject region-specific rules into tools and tasks so UK MiFID-style checks do not get mixed with US FINRA-style logic.
  • Skipping audit metadata

    • Mistake: logging only the final answer.
    • Fix: persist prompt input summary, policy version, agent version, tool results, and override history so compliance can replay decisions later.
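One way to keep jurisdictions separate is to key policy lookups by region as well as topic. A sketch extending the POLICY_DB idea from step 2; the snippet text is illustrative, not real policy:

```python
# Policy snippets keyed by (jurisdiction, topic); content is illustrative.
REGIONAL_POLICY_DB = {
    ("UK", "disclosure"): "Include a MiFID-style costs and charges disclosure.",
    ("US", "disclosure"): "Include a FINRA-style fee and conflict disclosure.",
}

def get_regional_policy(jurisdiction: str, topic: str) -> str:
    """Look up a policy snippet for a specific jurisdiction and topic."""
    return REGIONAL_POLICY_DB.get(
        (jurisdiction.upper(), topic.lower()), "Policy topic not found."
    )

print(get_regional_policy("uk", "Disclosure"))
```

Wiring jurisdiction into the lookup key keeps UK and US logic from bleeding into each other even when the same agent handles both.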

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

