How to Build a Customer Support Agent Using CrewAI in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
customer-support · crewai · python · wealth-management

A customer support agent for wealth management handles client questions about portfolios, fees, account access, statements, transfers, and basic policy explanations. It matters because the support layer sits right next to regulated client data, so every response needs to be accurate, auditable, and constrained by compliance rules.

Architecture

  • Client intake layer
    • Takes the incoming message, identifies intent, and extracts entities like account type, product, and urgency.
  • Policy retrieval layer
    • Pulls approved answers from internal documents: fee schedules, onboarding guides, KYC/AML policies, service SLAs, and escalation rules.
  • Support reasoning agent
    • Uses CrewAI to classify the request, draft a response, and decide whether the case can be answered or must be escalated.
  • Compliance guardrail layer
    • Blocks advice outside scope, flags restricted topics like tax advice or portfolio recommendations, and enforces disclaimers.
  • Escalation workflow
    • Routes sensitive cases to a human advisor or operations queue with full context and audit metadata.
  • Audit and observability layer
    • Stores prompts, retrieved sources, tool calls, and final responses for review and regulatory traceability.
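The layers above can be wired as a simple pipeline. This is a minimal sketch under stated assumptions: every function here is an illustrative stub, not a CrewAI API; the point is the order of the hops and where escalation short-circuits the flow.

```python
# Illustrative wiring of the six layers. All names are hypothetical stubs.

def classify_intent(message: str) -> str:
    # Client intake layer: crude keyword triage stands in for real NLU.
    return "restricted" if "tax" in message.lower() else "routine"

def retrieve_policy(intent: str) -> str:
    # Policy retrieval layer: would query approved documents in production.
    return "Approved policy text for " + intent + " requests."

def handle_client_message(message: str) -> dict:
    intent = classify_intent(message)
    if intent == "restricted":
        # Escalation workflow: hand off with full context and audit metadata.
        return {"escalated": True, "message": message, "intent": intent}
    policy = retrieve_policy(intent)
    # Support reasoning agent + compliance guardrails would run here.
    return {"escalated": False, "response": policy, "intent": intent}

print(handle_client_message("Can I deduct my fees on my taxes?"))
print(handle_client_message("Where do I find my statement?"))
```

Restricted intents never reach the drafting step, which keeps the compliance surface small.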

Implementation

1) Install CrewAI and define the support tools

For wealth management support, the agent should not invent policy answers. Give it tools that only return approved internal content.

from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from typing import Type

class PolicyQueryInput(BaseModel):
    query: str = Field(..., description="Client question or policy lookup term")

class ApprovedPolicyLookupTool(BaseTool):
    name: str = "approved_policy_lookup"
    description: str = "Fetch approved wealth management support policies and FAQs."
    args_schema: Type[BaseModel] = PolicyQueryInput

    def _run(self, query: str) -> str:
        # Static stand-in for a real retrieval backend: only approved,
        # pre-written answers are ever returned.
        knowledge_base = {
            "statement": "Statements are generated monthly and available in the client portal by the 3rd business day.",
            "fee": "Advisory fees are disclosed in the client agreement. Support may explain where to find them but not interpret them.",
            "transfer": "External transfers require identity verification and may take 2-5 business days.",
            "tax": "Tax questions must be escalated to a licensed advisor. Support cannot provide tax advice.",
        }
        q = query.lower()
        for key, value in knowledge_base.items():
            if key in q:
                return value
        return "No approved policy found. Escalate to a human advisor."
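Because the matching logic is deterministic, it is easy to verify in isolation. This standalone sketch mirrors the tool's `_run` body (knowledge base abbreviated) so you can unit-test the lookup without instantiating the CrewAI tool at all.

```python
# Standalone mirror of ApprovedPolicyLookupTool._run for unit testing.
# The knowledge base here is abbreviated for illustration.
KNOWLEDGE_BASE = {
    "statement": "Statements are available in the client portal by the 3rd business day.",
    "tax": "Tax questions must be escalated to a licensed advisor.",
}

def approved_policy_lookup(query: str) -> str:
    q = query.lower()  # case-insensitive keyword match, as in the tool
    for key, value in KNOWLEDGE_BASE.items():
        if key in q:
            return value
    return "No approved policy found. Escalate to a human advisor."

print(approved_policy_lookup("When do I get my STATEMENT?"))
print(approved_policy_lookup("What is the weather today?"))
```

Note the safe default: anything the knowledge base does not cover falls through to escalation rather than a generated guess.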

2) Create a support agent with strict boundaries

Use Agent with a narrow role. In wealth management, the system prompt matters as much as the model choice.

from crewai import Agent

support_agent = Agent(
    role="Wealth Management Client Support Specialist",
    goal="Resolve routine client support requests using approved policies only.",
    backstory=(
        "You support wealth management clients. "
        "You never provide investment advice, tax advice, or legal opinions. "
        "You always escalate restricted topics."
    ),
    tools=[ApprovedPolicyLookupTool()],
    verbose=True,
    allow_delegation=False,
)

3) Define tasks for classification and response drafting

A good pattern is to separate classification from response generation. That makes audit trails cleaner and reduces accidental overreach.

from crewai import Task

classify_task = Task(
    description=(
        "Classify this client request: {client_message}. "
        "Determine whether it is routine support or requires escalation."
    ),
    expected_output="A short classification with risk level and escalation decision.",
    agent=support_agent,
)

response_task = Task(
    description=(
        "Using only approved policy information, draft a client response for: {client_message}. "
        "If the request involves advice, performance commentary, taxes, or suitability,"
        " return an escalation note instead of answering directly."
    ),
    expected_output="A compliant client-ready response or escalation note.",
    agent=support_agent,
)

4) Run the crew and wrap it with a guardrail check

CrewAI’s Crew executes the workflow. Add a simple post-check before sending anything to production systems.

from crewai import Crew
import re

def compliance_check(text: str) -> bool:
    blocked_patterns = [
        r"\bguaranteed\b",
        r"\bwill outperform\b",
        r"\bbuy\b|\bsell\b|\bhold\b",
        r"\btax advice\b",
        r"\blegal advice\b",
    ]
    return not any(re.search(pattern, text.lower()) for pattern in blocked_patterns)

crew = Crew(
    agents=[support_agent],
    tasks=[classify_task, response_task],
    verbose=True,
)

result = crew.kickoff(inputs={
    "client_message": "Can you explain why my portfolio underperformed this quarter?"
})

final_text = str(result)

if compliance_check(final_text):
    print(final_text)
else:
    print("Escalate to licensed advisor: response failed compliance check.")

The important part here is not just getting an answer back. It is making sure the agent stays inside a narrow operational lane where it can explain process questions but never cross into regulated advisory territory.
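Because the guardrail is a plain function, it can be exercised like any other unit. The snippet repeats the same pattern list so it runs standalone, with a few sample outputs on each side of the boundary.

```python
import re

# Same blocked-pattern list as compliance_check above, repeated so this
# snippet runs standalone as a quick guardrail unit test.
BLOCKED_PATTERNS = [
    r"\bguaranteed\b",
    r"\bwill outperform\b",
    r"\bbuy\b|\bsell\b|\bhold\b",
    r"\btax advice\b",
    r"\blegal advice\b",
]

def compliance_check(text: str) -> bool:
    return not any(re.search(p, text.lower()) for p in BLOCKED_PATTERNS)

assert compliance_check("Statements are posted by the 3rd business day.")
assert not compliance_check("This fund is GUARANTEED to grow.")
assert not compliance_check("You should sell your bond position.")
print("guardrail checks passed")
```

Keep a test file like this next to the pattern list so every addition to the blocklist ships with an example it catches.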

Production Considerations

  • Data residency
    • Keep prompts, retrieved documents, and logs inside approved regions. Wealth firms often have jurisdiction-specific storage rules tied to client domicile and entity structure.
  • Audit logging
    • Persist input message, retrieved policy snippets, final output, model version, timestamp, and escalation decision. Regulators care about what was said and why it was said.
  • Guardrails on scope
    • Hard-block investment recommendations, performance predictions, tax guidance, AML interpretation beyond scripted language, and suitability statements.
  • Human handoff
    • Route any ambiguous or sensitive case to a licensed advisor or operations queue with full context. The agent should reduce workload, not replace judgment.
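The audit-logging checklist above can be captured as a single structured record per interaction. This is a minimal sketch: field names and the model-version string are illustrative, and a real system would write to durable, access-controlled storage rather than stdout.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    # Field names are illustrative; align them with your retention schema.
    client_message: str
    retrieved_policies: list
    final_output: str
    model_version: str
    escalated: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord(
    client_message="When is my statement available?",
    retrieved_policies=["statement"],
    final_output="Statements are available by the 3rd business day.",
    model_version="example-model-2026-01",
    escalated=False,
)
print(json.dumps(asdict(record)))  # one JSON line per interaction
```

One JSON line per interaction is easy to ship to whatever log store your compliance team already audits.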

Common Pitfalls

  • Using open-ended prompts without policy grounding
    • If you let the model answer from memory alone it will hallucinate fees, timelines, or account rules. Always anchor responses in approved internal content.
  • Treating compliance as a postscript
    • A disclaimer at the end is not enough. Build scope checks before generation and again before delivery.
  • Logging too little for audits
    • Storing only the final answer is weak evidence. Keep source documents used by retrieval plus intermediate classification decisions.
  • Letting one agent do everything
    • Support triage is not portfolio analysis. Split classification, retrieval, drafting, and escalation so each step has a clear control point.

If you want this to hold up in production at a wealth manager or private bank, depth of scrutiny is non-negotiable. The winning pattern is simple: narrow scope + approved knowledge + explicit escalation + complete audit trail.


By Cyprian Aarons, AI Consultant at Topiax.