How to Build a Policy Q&A Agent Using CrewAI in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21

Tags: policy-q-a · crewai · python · wealth-management · policy-qanda

A policy Q&A agent for wealth management answers questions about internal procedures, product rules, suitability constraints, fee policies, and client servicing guidelines. It matters because advisors and support teams need fast answers that are consistent, auditable, and compliant with firm policy instead of relying on memory or scattered documents.

Architecture

  • Policy document ingestion

    • Pulls PDFs, Word docs, SharePoint exports, or approved knowledge base pages.
    • Normalizes them into text chunks with metadata like policy name, version, effective date, jurisdiction, and business line.
  • Retriever / knowledge layer

    • Finds the most relevant policy passages for each question.
    • Must support strict filtering by region, product type, client segment, and document version.
  • CrewAI agents

    • One agent answers the user’s question.
    • A second agent verifies the answer against policy language and flags ambiguity.
    • Optional third agent handles compliance review for restricted topics.
  • Tools

    • A document search tool for retrieval.
    • A citation tool that returns source snippets and document IDs.
    • Optional CRM or case-management lookup tool for client context.
  • Guardrail layer

    • Blocks advice outside approved scope.
    • Forces citations.
    • Escalates when the question touches suitability, tax advice, discretionary trading, AML/KYC, or jurisdiction-specific rules.
  • Audit logging

    • Stores the question, retrieved sources, final answer, model version, timestamp, and reviewer outcome.
    • This is non-negotiable in wealth management.
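
As a concrete sketch, the chunk metadata from the ingestion layer could be carried as a simple record type. The field names below are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass

@dataclass
class PolicyChunk:
    text: str              # normalized text chunk from the source document
    policy_name: str       # e.g. "Fee Waiver Policy"
    version: str           # e.g. "v3.2"
    effective_date: str    # ISO date, e.g. "2025-01-01"
    jurisdiction: str      # e.g. "US", "UK"
    business_line: str     # e.g. "private-banking"

chunk = PolicyChunk(
    text="Fee waivers require branch manager approval...",
    policy_name="Fee Waiver Policy",
    version="v3.2",
    effective_date="2025-01-01",
    jurisdiction="US",
    business_line="private-banking",
)
```

Carrying these fields on every chunk is what makes the strict retrieval filtering and stale-policy checks later in this guide possible.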

Implementation

  1. Install CrewAI and define your policy retrieval tool

    Start with a simple tool that searches your approved policy corpus. In production this would hit a vector store or enterprise search index; the pattern below keeps it concrete.

    from crewai.tools import BaseTool
    from pydantic import BaseModel, Field
    from typing import Type
    
    class PolicySearchInput(BaseModel):
        query: str = Field(..., description="User question or policy lookup term")
    
    class PolicySearchTool(BaseTool):
        name: str = "policy_search"
        description: str = "Search approved wealth management policy documents"
        args_schema: Type[BaseModel] = PolicySearchInput
    
        def _run(self, query: str) -> str:
            # Replace with vector DB / enterprise search
            policies = {
                "fee waiver": "Policy FW-12 v3.2: Fee waivers require branch manager approval...",
                "suitability": "Policy SU-08 v4.1: Product recommendations must match risk profile...",
                "kyc": "Policy KYC-03 v2.0: Enhanced due diligence is required for PEPs..."
            }
            matches = [v for k, v in policies.items() if k in query.lower()]
            return "\n".join(matches) if matches else "No matching policy found."
    
  2. Create an answer agent and a compliance verifier

    CrewAI’s Agent class lets you define focused roles. For wealth management I always separate answer generation from verification so the model does not self-certify its own output.

    from crewai import Agent
    
    answer_agent = Agent(
        role="Wealth Management Policy Answerer",
        goal="Answer internal policy questions using only approved policy sources",
        backstory=(
            "You support advisors and operations teams in a regulated wealth "
            "management environment. You must be precise, cite sources when possible,"
            "and avoid giving advice outside firm policy."
        ),
        tools=[PolicySearchTool()],
        verbose=True,
        allow_delegation=False,
    )
    
    compliance_agent = Agent(
        role="Compliance Reviewer",
        goal="Check whether the proposed answer is consistent with wealth management policy",
        backstory=(
            "You review responses for suitability risk, regulatory exposure,"
            "missing citations, and ambiguous language."
        ),
        tools=[PolicySearchTool()],
        verbose=True,
        allow_delegation=False,
    )
    
  3. Define tasks and run them as a crew

    Use Task, Crew, and Process.sequential so the answer is drafted first and then reviewed. The key pattern is to force the first task to produce a structured response that the second task can inspect.

    from crewai import Task, Crew, Process
    
    question = "Can we waive account maintenance fees for a high-net-worth client?"
    
    answer_task = Task(
        description=(
            f"Answer this internal policy question: {question}\n"
            "Use only approved policy information returned by the tool.\n"
            "Include: direct answer, rationale, and source references."
        ),
        expected_output="A concise policy-compliant answer with citations.",
        agent=answer_agent,
    )
    
    review_task = Task(
        description=(
            "Review the drafted answer for compliance issues.\n"
            "Flag any unsupported claims, missing escalation steps,"
            "or language that could be interpreted as financial advice."
        ),
        expected_output="A compliance verdict plus required edits if any.",
        agent=compliance_agent,
    )
    
    crew = Crew(
        agents=[answer_agent, compliance_agent],
        tasks=[answer_task, review_task],
        process=Process.sequential,
        verbose=True,
    )
    
    result = crew.kickoff()
    print(result)
    
  4. Add an escalation rule for restricted topics

    Wealth management needs hard stops. If the user asks about portfolio recommendations, tax treatment, legal interpretation, or jurisdiction-specific exemptions without enough context, return an escalation instead of an answer.

    restricted_terms = ["recommend", "tax", "legal", "suitability", "best fund", "guarantee"]
    
    def should_escalate(question: str) -> bool:
        q = question.lower()
        return any(term in q for term in restricted_terms)
    
    if should_escalate(question):
        print("Escalate to compliance or licensed advisor.")
    else:
        print(crew.kickoff())
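
The audit logging called out in the architecture section can be sketched as an append-only JSON Lines writer. The record fields and file location here are assumptions for illustration; in production you would write to an immutable store with user identity attached.

```python
import json
import tempfile
import time
from pathlib import Path

def log_interaction(path: Path, question: str, sources: list[str],
                    answer: str, model_version: str) -> None:
    """Append one audit record as a single JSON line (never rewritten)."""
    record = {
        "ts": time.time(),
        "question": question,
        "retrieved_sources": sources,
        "answer": answer,
        "model_version": model_version,
    }
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_path = Path(tempfile.gettempdir()) / "policy_audit.jsonl"
log_interaction(
    log_path,
    question="Can we waive account maintenance fees for a high-net-worth client?",
    sources=["FW-12 v3.2"],
    answer="Yes, with branch manager approval per Policy FW-12.",
    model_version="model-2026-04",
)
```

Appending one line per interaction keeps records tamper-evident and easy to replay during supervisory review.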
    

Production Considerations

  • Compliance controls

    • Enforce source-only answering from approved documents.
    • Require citations in every response.
    • Add a human-in-the-loop path for suitability-related questions and anything that could be interpreted as advice.
  • Auditability

    • Log the raw question, retrieved policy chunks, final response, timestamps, user identity, and model version.
    • Keep immutable audit records for supervisory review and dispute resolution.
  • Data residency

    • Keep client data and document embeddings inside the required region.
    • If your firm operates across jurisdictions, partition indexes by country or legal entity so one region’s policies do not leak into another.
  • Monitoring

    • Track hallucination rate on sampled answers.
    • Monitor escalation rate by topic.
    • Alert when answers lack citations or when retrieval returns stale policies past their effective date.
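
The stale-policy alert above can be sketched as a simple check over retrieved chunk metadata. The `superseded_on` field is an assumed convention: `None` means the version is current.

```python
from datetime import date

def find_stale(chunks: list[dict], today: date) -> list[dict]:
    """Return retrieved chunks whose policy version was superseded on or before today."""
    stale = []
    for chunk in chunks:
        superseded = chunk.get("superseded_on")  # ISO date string, or None if current
        if superseded and date.fromisoformat(superseded) <= today:
            stale.append(chunk)
    return stale

retrieved = [
    {"policy_id": "FW-12", "version": "v3.1", "superseded_on": "2025-06-01"},
    {"policy_id": "FW-12", "version": "v3.2", "superseded_on": None},
]
stale = find_stale(retrieved, date(2026, 1, 1))  # flags only the superseded v3.1 chunk
```

Running this check on every retrieval result, and alerting when it fires, catches index drift before a stale policy reaches an advisor.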

Common Pitfalls

  1. Letting one agent both answer and approve its own output

    • This looks efficient until you need evidence of independent review.
    • Fix it by splitting drafting and compliance verification into separate Agent instances with distinct Tasks.
  2. Using one global knowledge base for all regions

    • Wealth management policies vary by jurisdiction, entity structure, client type, and product shelf.
    • Fix it by filtering retrieval on region, business line, effective date, and document status before passing context to CrewAI.
  3. Returning uncited answers

    • In regulated environments uncited responses are operational risk.
    • Fix it by making citations part of the expected output schema and rejecting any response that does not reference specific policy IDs or snippets.
  4. Treating every question as safe to answer

    • Questions about fees may sound harmless but can hide exceptions tied to client classification or account type.
    • Fix it with explicit escalation rules for restricted topics and ambiguous cases before the agent generates a final response.
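
The citation-enforcement fix from pitfall 3 can be sketched as a rejection gate on the final answer. The policy ID format below (e.g. FW-12, KYC-03) is an assumption based on the sample corpus in step 1; adjust the pattern to your firm's naming scheme.

```python
import re

# Matches policy IDs like FW-12, SU-08, KYC-03 (format is an assumption).
POLICY_ID = re.compile(r"\b[A-Z]{2,4}-\d{2}\b")

def has_citation(answer: str) -> bool:
    """Reject any answer that does not reference a specific policy ID."""
    return bool(POLICY_ID.search(answer))
```

Wiring this into the crew means an uncited draft is sent back for revision or escalated, rather than returned to the user.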

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

