How to Build a Customer Support Agent Using AutoGen in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
customer-support · autogen · python · wealth-management

A customer support agent for wealth management handles routine client questions, routes sensitive requests, and keeps answers inside policy. The bar is higher than for generic support: you need compliance-aware responses, auditability, and tight control over what client data the agent can see or say.

Architecture

  • Client-facing assistant

    • Receives questions about balances, statements, fees, onboarding, transfers, and account access.
    • Keeps the interaction narrow: answer, clarify, or escalate.
  • Policy and compliance layer

    • Checks every response against firm rules.
    • Blocks advice that crosses into regulated financial recommendations.
  • Tool layer

    • Connects to approved systems like CRM, ticketing, knowledge base, and account-status APIs.
    • Never gives the model direct database access.
  • Escalation agent

    • Hands off cases involving complaints, suitability questions, fraud, or legal requests.
    • Produces a structured summary for a human advisor or service rep.
  • Audit and logging

    • Stores prompts, tool calls, outputs, and escalation reasons.
    • Supports internal review and regulatory traceability.
  • Data residency and secrets boundary

    • Keeps PII inside approved regions and systems.
    • Uses environment-based secrets for API keys and service credentials.
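
The flow between these layers can be sketched as one function. Everything below is a simplified stand-in for the real components; the keyword checks and canned answer are illustrative only:

def classify_intent(message: str) -> str:
    # Stand-in for a real intent classifier.
    advisory_terms = ("should i invest", "recommend", "rebalance", "best fund")
    return "advisory" if any(t in message.lower() for t in advisory_terms) else "service"

def policy_gate(draft: str) -> str:
    # Stand-in for the policy and compliance layer.
    banned_phrases = ("guaranteed return", "you should buy")
    return "blocked" if any(p in draft.lower() for p in banned_phrases) else "approved"

def handle_client_request(message: str) -> str:
    if classify_intent(message) != "service":
        return "Escalating to a licensed advisor."              # escalation agent
    draft = "Your March statement is available in the portal."  # assistant + tool layer (stubbed)
    return draft if policy_gate(draft) == "approved" else "Escalating for compliance review."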

Implementation

1) Install AutoGen and define your assistant/client setup

This pattern uses autogen.AssistantAgent for the support bot and autogen.UserProxyAgent to represent the client-side session. In production you would usually swap llm_config values for your approved model endpoint.
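
Install the package first. The snippets below assume the classic AutoGen 0.2-style API (autogen.AssistantAgent, autogen.UserProxyAgent), which is typically installed with:

pip install pyautogen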

import os
import autogen

config_list = [
    {
        "model": "gpt-4o-mini",
        "api_key": os.environ["OPENAI_API_KEY"],
    }
]

llm_config = {
    "config_list": config_list,
    "temperature": 0,
    "timeout": 60,
}

support_agent = autogen.AssistantAgent(
    name="wealth_support_agent",
    system_message=(
        "You are a wealth management customer support agent. "
        "Answer only service questions. Do not provide investment advice. "
        "If the user asks for recommendations, suitability opinions, or legal/tax guidance, escalate."
    ),
    llm_config=llm_config,
)

client = autogen.UserProxyAgent(
    name="client",
    human_input_mode="NEVER",
    code_execution_config=False,
)
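
With just these two agents you can run a quick smoke test; max_turns=1 keeps it to a single exchange:

client.initiate_chat(
    support_agent,
    message="Where can I find my latest account statement?",
    max_turns=1,
)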

2) Add tools for approved account lookups and ticket creation

AutoGen works best when tools are small and explicit. For wealth management support, keep tools limited to read-only status checks and controlled case creation.

from typing import Dict

def get_account_status(account_id: str) -> Dict[str, str]:
    # Replace with a real internal API call behind your gateway.
    return {
        "account_id": account_id,
        "status": "active",
        "kyc_status": "verified",
        "last_statement_date": "2026-03-31",
    }

def create_support_ticket(subject: str, details: str) -> Dict[str, str]:
    # Replace with ServiceNow/Zendesk/Jira integration.
    return {
        "ticket_id": "WMS-10482",
        "status": "open",
        "subject": subject,
    }

support_agent.register_for_llm(name="get_account_status", description="Fetch account status")(get_account_status)
support_agent.register_for_llm(name="create_support_ticket", description="Create a support ticket")(create_support_ticket)

3) Run a controlled conversation with escalation rules

The key pattern is to let the assistant answer simple support questions while forcing escalation on anything advisory or sensitive. In practice you should inspect the output before sending it back to the client.

def is_escalation_needed(message: str) -> bool:
    triggers = [
        "should i invest", "should i move", "should i buy", "recommend", "best fund",
        "portfolio allocation", "rebalance", "tax advice", "legal advice",
        "guaranteed return"
    ]
    text = message.lower()
    return any(trigger in text for trigger in triggers)

user_message = (
    "My statement is missing from March. Can you check my account status? "
    "Also tell me if I should move more money into equities."
)

if is_escalation_needed(user_message):
    result = {
        "reply": (
            "I can help with statement access and account servicing. "
            "I can't provide investment recommendations. "
            "I'll create a case for a licensed advisor to follow up."
        )
    }
else:
    result = client.initiate_chat(
        support_agent,
        message=user_message,
        max_turns=2,
    )

print(result)
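
In the non-escalation branch, initiate_chat returns a ChatResult rather than a plain string, so pull out the final message before sending anything back to the client. A minimal way to do that with the 0.2 API:

if isinstance(result, dict):
    reply = result["reply"]  # hard-coded escalation reply
else:
    # ChatResult keeps the full exchange; grab the most recent message content.
    reply = result.chat_history[-1]["content"]

print(reply)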

4) Use a group chat when you need a compliance reviewer

For real workflows, put a compliance agent in the loop. AutoGen’s GroupChat plus GroupChatManager gives you an explicit multi-agent handoff pattern.

compliance_agent = autogen.AssistantAgent(
    name="compliance_reviewer",
    system_message=(
        "Review support responses for policy violations. "
        "Reject investment advice, unapproved promises, or disclosure issues."
    ),
    llm_config=llm_config,
)

groupchat = autogen.GroupChat(
    agents=[client, support_agent, compliance_agent],
    messages=[],
    max_round=4,
)

manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

client.initiate_chat(
    manager,
    message="Explain why my transfer is pending and whether I should rebalance my portfolio.",
)
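
By default the manager uses the LLM to pick the next speaker, which can make the hand-off order unpredictable. If you want a deterministic client -> support -> compliance sequence, GroupChat accepts a speaker_selection_method; "round_robin" is a reasonable choice here:

groupchat = autogen.GroupChat(
    agents=[client, support_agent, compliance_agent],
    messages=[],
    max_round=4,
    speaker_selection_method="round_robin",  # cycle through agents in the order listed
)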

Production Considerations

  • Compliance controls first

    • Put a policy gate in front of every generated response.
    • Block suitability language unless a licensed workflow explicitly permits it.
    • Log which rule caused an escalation or refusal.
  • Audit everything

    • Store prompts, tool inputs/outputs, model version, timestamps, and final responses (see the sketch after this list).
    • Keep immutable records for complaints handling and regulatory review.
    • Make sure audit logs exclude raw secrets and unnecessary PII.
  • Data residency and privacy

    • Route EU/UK client data only through approved regional infrastructure.
    • Redact account numbers, tax IDs, and beneficiary details before model calls where possible.
    • Use private networking or vendor controls that match your firm’s residency requirements.
  • Operational guardrails

    • Rate-limit high-risk intents like fraud claims or wire instructions.
    • Add human approval for any action that changes client state.
    • Monitor refusal rates; spikes often mean prompt drift or broken routing logic.
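
A minimal shape for the audit record mentioned above; the field names are illustrative, not a regulatory standard:

import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class AuditRecord:
    session_id: str
    client_message: str
    model: str
    tool_calls: List[dict] = field(default_factory=list)  # [{"name": ..., "args": ..., "result": ...}]
    final_response: str = ""
    escalation_reason: Optional[str] = None
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def write_audit_record(record: AuditRecord) -> None:
    # Append-only JSON lines; in production, write to immutable, access-controlled storage.
    with open("audit.log", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")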

Common Pitfalls

  • Letting the model answer advisory questions

    • Mistake: treating “Should I buy more tech stocks?” like normal support.
    • Fix: classify intent early and hard-stop into escalation if it crosses into recommendations.
  • Giving tools too much power

    • Mistake: exposing direct write access to CRM or portfolio systems from the agent.
    • Fix: keep tools read-only by default; use separate human-approved workflows for mutations.
  • Ignoring auditability

    • Mistake: only logging final answers.
    • Fix: persist full conversation context plus tool calls so compliance can reconstruct decisions later.
  • Skipping residency constraints

    • Mistake: sending all traffic to one global model endpoint.
    • Fix: segment tenants by region and enforce data boundaries at routing time before any LLM call.
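
One lightweight way to enforce that boundary is to resolve the model endpoint from the client's region before building llm_config. The region names and URLs below are placeholders for your approved regional deployments:

import os

REGIONAL_ENDPOINTS = {
    "eu": {"base_url": "https://llm.eu.internal/v1", "api_key_env": "EU_LLM_API_KEY"},
    "uk": {"base_url": "https://llm.uk.internal/v1", "api_key_env": "UK_LLM_API_KEY"},
    "us": {"base_url": "https://llm.us.internal/v1", "api_key_env": "US_LLM_API_KEY"},
}

def config_list_for_region(region: str) -> list:
    endpoint = REGIONAL_ENDPOINTS[region]  # raise on unknown regions rather than falling back
    return [{
        "model": "gpt-4o-mini",
        "base_url": endpoint["base_url"],
        "api_key": os.environ[endpoint["api_key_env"]],
    }]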

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

