How to Build a Fraud Detection Agent Using CrewAI in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21

Tags: fraud-detection, crewai, python, wealth-management

A fraud detection agent for wealth management watches client activity, flags suspicious patterns, and produces an auditable explanation for compliance teams. It matters because false negatives can mean account takeover, unauthorized transfers, or advisor impersonation, while false positives can freeze legitimate high-value transactions and damage client trust.

Architecture

  • Input collector

    • Pulls transactions, profile changes, login events, wire instructions, and advisor notes from your internal systems.
    • Normalizes records into a single case payload.
  • Risk analysis agent

    • Scores the case using rules plus LLM-assisted pattern inspection.
    • Looks for anomalies like new beneficiary setup followed by urgent transfer requests.
  • Compliance reviewer agent

    • Checks findings against wealth-management controls.
    • Verifies KYC/AML triggers, escalation thresholds, and required disclosures.
  • Evidence summarizer

    • Produces a concise incident brief with supporting signals.
    • Writes output in a format analysts can review quickly and store for audit.
  • Escalation router

    • Decides whether to hold the transaction, request manual review, or notify fraud ops.
    • Enforces policy based on severity and confidence.
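The input collector's normalized case payload can be sketched as a plain dataclass. The names below are illustrative, not a CrewAI construct:

```python
from dataclasses import dataclass, field

@dataclass
class CasePayload:
    # Normalized view of one review case, assembled by the input collector.
    account_id: str
    transaction_id: str
    recent_events: list[str] = field(default_factory=list)
    client_profile: str = "unknown"   # e.g. "high_net_worth"
    jurisdiction: str = "unknown"     # e.g. "US"

case = CasePayload(
    account_id="WM-1048821",
    transaction_id="WIRE-77821",
    recent_events=["new beneficiary added", "login from new device"],
)
```

Every downstream agent then works from the same shape, which makes tool outputs and audit records easier to compare.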

Implementation

1) Install CrewAI and define your tools

For production work, keep the agent grounded in real data. Use tools to fetch account events and policy snippets instead of asking the model to infer everything from memory.

from typing import Type

from crewai import Agent, Task, Crew, Process
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class CaseInput(BaseModel):
    account_id: str = Field(..., description="Wealth management account ID")
    transaction_id: str = Field(..., description="Transaction being reviewed")

class FetchCaseTool(BaseTool):
    name: str = "fetch_case"
    description: str = "Fetch account activity and transaction context for fraud review."
    args_schema: Type[BaseModel] = CaseInput

    def _run(self, account_id: str, transaction_id: str) -> str:
        # Replace with DB/API calls
        return f"""
        account_id={account_id}
        transaction_id={transaction_id}
        recent_events=[
          "new beneficiary added",
          "login from new device",
          "wire transfer initiated within 12 minutes"
        ]
        client_profile=high_net_worth
        jurisdiction=US
        """

class FetchPolicyTool(BaseTool):
    name: str = "fetch_policy"
    description: str = "Fetch fraud escalation policy and compliance rules."

    def _run(self) -> str:
        return """
        If new beneficiary + high-value wire + unusual login => escalate to manual review.
        If client jurisdiction is restricted => require compliance approval before release.
        Always produce an audit summary with evidence references.
        """

2) Create specialized agents

Use one agent for fraud analysis and another for compliance validation. In wealth management, separating these roles keeps the reasoning cleaner and makes audit trails easier to defend.

fraud_agent = Agent(
    role="Fraud Detection Analyst",
    goal="Identify suspicious wealth management activity using available evidence.",
    backstory="You analyze account behavior, transfer patterns, and authentication signals.",
    tools=[FetchCaseTool(), FetchPolicyTool()],
    verbose=True,
)

compliance_agent = Agent(
    role="Compliance Reviewer",
    goal="Validate whether the case meets escalation requirements under policy.",
    backstory="You enforce firm policy, KYC/AML controls, and audit requirements.",
    tools=[FetchPolicyTool()],
    verbose=True,
)

3) Define tasks that force structured outputs

Don’t ask for a vague narrative. Make the first task produce a risk assessment with explicit fields; make the second task validate it against policy.

fraud_task = Task(
    description=(
        "Review account {account_id}, transaction {transaction_id} for fraud indicators. "
        "Use fetch_case(account_id, transaction_id) to inspect the event trail. "
        "Return JSON with keys: risk_level, reasons, evidence, recommended_action."
    ),
    expected_output="A JSON object describing fraud risk and recommended action.",
    agent=fraud_agent,
)

compliance_task = Task(
    description=(
        "Review the fraud assessment against policy. "
        "Confirm whether the recommended action satisfies escalation rules "
        "and audit requirements for wealth management."
    ),
    expected_output="A JSON object with compliance_status and audit_notes.",
    agent=compliance_agent,
)
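Before trusting the fraud task's output downstream, validate the JSON shape in plain Python. A minimal sketch, with key names taken from the task description above:

```python
import json

REQUIRED_KEYS = {"risk_level", "reasons", "evidence", "recommended_action"}

def validate_assessment(raw: str) -> dict:
    """Parse the fraud agent's output and confirm required keys exist."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Assessment missing keys: {sorted(missing)}")
    return data

sample = (
    '{"risk_level": "high", "reasons": ["urgent wire"], '
    '"evidence": [], "recommended_action": "manual_review"}'
)
assessment = validate_assessment(sample)
```

Rejecting malformed output here, before routing, is cheaper than debugging a silently dropped alert later.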

4) Run the crew and persist an audit record

Use Process.sequential so the compliance step sees the fraud findings first. In regulated workflows you want deterministic ordering unless you have a strong reason not to.

crew = Crew(
    agents=[fraud_agent, compliance_agent],
    tasks=[fraud_task, compliance_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff(inputs={
    "account_id": "WM-1048821",
    "transaction_id": "WIRE-77821"
})

print(result)
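Model output often arrives wrapped in prose or code fences, so a defensive extraction step helps. A sketch, assuming the final task output is available as a string:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of output that may be wrapped in prose or fences."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError("No JSON object found in output")
    return json.loads(match.group(0))

raw = (
    "Here is the review:\n```json\n"
    '{"compliance_status": "escalate", "audit_notes": "new beneficiary + wire"}\n```'
)
decision = extract_json(raw)
```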

For production deployment, wrap kickoff() in a service that stores:

  • input payload hash
  • model version
  • tool responses
  • final decision
  • analyst override if any

That gives you a defensible audit trail when compliance asks why a wire was blocked.

Production Considerations

  • Data residency

    • Keep client data in-region if your wealth platform has jurisdictional constraints.
    • If you use hosted LLMs or external APIs, verify where prompts and tool outputs are processed.
  • Monitoring

    • Track false-positive rate by client segment and transaction type.
    • Monitor tool failures separately from model failures so you can distinguish bad data from bad reasoning.
  • Guardrails

    • Enforce hard rules outside the LLM for high-risk actions like wire holds or account freezes.
    • Use deterministic thresholds for mandatory escalation; let CrewAI explain decisions, not invent them.
  • Auditability

    • Store every intermediate artifact used by Task execution.
    • Make sure reviewers can reconstruct why an alert was raised without re-running the model.

Common Pitfalls

  • Letting the agent decide policy

    • Don’t ask the model to invent escalation rules.
    • Keep policy in tools or config files so legal/compliance can update it without retraining anything.
  • Using unstructured outputs

    • Free-form prose is hard to parse in downstream systems.
    • Force JSON-like outputs in expected_output so your case manager can route alerts reliably.
  • Skipping human review on edge cases

    • Wealth management cases often involve legitimate large transfers with unusual timing.
    • Route ambiguous cases to an analyst instead of auto-blocking them; client impact matters as much as detection quality.
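One way to keep humans in the loop is an explicit router that only auto-acts at the extremes. The thresholds here are illustrative, not policy:

```python
def route_case(risk_level: str, confidence: float) -> str:
    """Route a scored case: auto-clear, analyst review, or fraud-ops escalation."""
    if risk_level == "high" and confidence >= 0.9:
        return "notify_fraud_ops"
    if risk_level == "low" and confidence >= 0.9:
        return "auto_clear"
    # Everything ambiguous goes to a human analyst instead of auto-blocking.
    return "analyst_review"
```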

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
