How to Build an Underwriting Agent Using CrewAI in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, crewai, python, wealth-management

An underwriting agent for wealth management takes a client profile, portfolio data, risk tolerance, liquidity needs, and policy constraints, then produces a decision-ready recommendation with evidence. It matters because wealth firms need faster onboarding and suitability checks without losing control over compliance, auditability, and jurisdiction-specific rules.

Architecture

  • Client Intake Layer

    • Normalizes inputs from CRM, KYC/AML systems, portfolio APIs, and advisor notes.
    • Converts messy real-world data into a consistent schema the agent can reason over.
  • Policy and Suitability Engine

    • Encodes firm rules: minimum net worth thresholds, concentration limits, restricted asset classes, and product eligibility.
    • Applies wealth-management-specific constraints like fiduciary duty and suitability obligations.
  • CrewAI Task Orchestration

    • Uses Agent, Task, and Crew to separate research, analysis, and decision synthesis.
    • Keeps the workflow auditable by making each step explicit.
  • Evidence Retrieval Layer

    • Pulls supporting facts from approved sources: investment policy statements, product sheets, client documents, and compliance manuals.
    • Prevents the model from guessing when it should cite.
  • Decision Output Service

    • Produces a structured recommendation: approve, reject, or escalate.
    • Includes rationale, risk flags, and required human-review notes.
  • Audit and Logging Layer

    • Stores prompts, outputs, source references, model version, and timestamps.
    • Required for regulatory review and internal control testing.
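The Decision Output Service and the Audit and Logging Layer above can be sketched as a small schema. The field names here are illustrative assumptions, not a CrewAI contract:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Literal

@dataclass
class UnderwritingDecision:
    # Structured recommendation produced by the Decision Output Service.
    decision: Literal["approve", "reject", "escalate"]
    rationale: str
    risk_flags: List[str] = field(default_factory=list)
    human_review_notes: str = ""

@dataclass
class AuditRecord:
    # One row in the Audit and Logging Layer: enough for a reviewer
    # to reconstruct why a decision was made.
    client_id: str
    model_version: str
    prompt: str
    output: str
    source_refs: List[str]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Keeping the decision and the audit record as separate types makes it easy to log every run even when no recommendation is produced.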

Implementation

1) Install CrewAI and define your data contract

For underwriting in wealth management, start with a strict input schema. If the agent accepts arbitrary JSON blobs, you will end up with inconsistent decisions and weak audit trails.

from pydantic import BaseModel
from typing import List

class UnderwritingInput(BaseModel):
    client_id: str
    jurisdiction: str
    net_worth_usd: float
    liquid_assets_usd: float
    annual_income_usd: float
    risk_tolerance: str
    investment_horizon_years: int
    requested_product: str
    restricted_assets: List[str] = []
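As written, the schema accepts any string for risk_tolerance and any float for the monetary fields. A stricter variant (a sketch, assuming Pydantic v2) rejects bad input at the boundary instead of letting it reach the agents:

```python
from pydantic import BaseModel, Field, ValidationError
from typing import List, Literal

class StrictUnderwritingInput(BaseModel):
    client_id: str
    jurisdiction: str
    net_worth_usd: float = Field(ge=0)
    liquid_assets_usd: float = Field(ge=0)
    annual_income_usd: float = Field(ge=0)
    risk_tolerance: Literal["conservative", "moderate", "aggressive"]
    investment_horizon_years: int = Field(ge=0)
    requested_product: str
    restricted_assets: List[str] = []

errors = None
try:
    StrictUnderwritingInput(
        client_id="C-1", jurisdiction="US-NY",
        net_worth_usd=-5,            # negative net worth: rejected by ge=0
        liquid_assets_usd=0, annual_income_usd=0,
        risk_tolerance="yolo",        # not in the allowed Literal values
        investment_horizon_years=1, requested_product="ETF",
    )
except ValidationError as e:
    errors = e.error_count()
print(f"rejected with {errors} validation errors")
```

A validation failure at intake is cheap; the same bad value surfacing inside an agent's reasoning is not.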

2) Create focused agents with explicit roles

Use multiple agents instead of one giant prompt. One agent gathers facts, another checks policy fit, and a final one writes the underwriting memo.

from crewai import Agent

research_agent = Agent(
    role="Wealth Profile Analyst",
    goal="Extract relevant underwriting facts from client data and approved documents.",
    backstory="You analyze client suitability data for wealth management onboarding.",
    verbose=True,
)

compliance_agent = Agent(
    role="Compliance Reviewer",
    goal="Check the request against firm policy, suitability rules, and jurisdiction constraints.",
    backstory="You enforce wealth management compliance requirements with zero tolerance for unsupported claims.",
    verbose=True,
)

memo_agent = Agent(
    role="Underwriting Memo Writer",
    goal="Produce a concise underwriting recommendation with evidence and escalation notes.",
    backstory="You write decision memos for advisors and compliance teams.",
    verbose=True,
)

3) Define tasks that force evidence-based output

The key pattern is to make each task produce something usable by the next task. In production this keeps the workflow inspectable and makes failures easier to isolate.

from crewai import Task

research_task = Task(
    description=(
        "Review the client's profile and extract underwriting-relevant facts. "
        # {client_data} is interpolated from crew.kickoff(inputs=...) at run time.
        "Client data: {client_data}. "
        "Summarize net worth adequacy, liquidity position, income stability signals, "
        "risk tolerance alignment, and any restricted assets."
    ),
    expected_output="A structured fact summary with risks and missing information.",
    agent=research_agent,
)

compliance_task = Task(
    description=(
        "Evaluate the fact summary against wealth management policy. "
        "Flag suitability issues, concentration concerns, jurisdiction conflicts, "
        "and any reason this case must be escalated to a human reviewer."
    ),
    expected_output="A compliance assessment with approve/reject/escalate recommendation.",
    agent=compliance_agent,
)

memo_task = Task(
    description=(
        "Write the final underwriting memo using only prior task outputs. "
        "Include recommendation, rationale, key risks, and audit-friendly notes."
    ),
    expected_output="A final underwriting memo ready for advisor review.",
    agent=memo_agent,
)

4) Assemble the crew and run it with real inputs

This is the actual orchestration pattern you want in a service layer. Keep execution separate from your API handler so you can test it offline.

from crewai import Crew, Process

crew = Crew(
    agents=[research_agent, compliance_agent, memo_agent],
    tasks=[research_task, compliance_task, memo_task],
    process=Process.sequential,
    verbose=True,
)

client_data = UnderwritingInput(
    client_id="C-10291",
    jurisdiction="US-NY",
    net_worth_usd=2500000,
    liquid_assets_usd=900000,
    annual_income_usd=420000,
    risk_tolerance="moderate",
    investment_horizon_years=7,
    requested_product="Structured Note",
    restricted_assets=["private_credit_fund"]
)

result = crew.kickoff(inputs={"client_data": client_data.model_dump()})
print(result)
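To keep execution testable offline, one option is to inject the crew's kickoff callable into a thin service function. run_underwriting and its return shape are hypothetical, not part of CrewAI:

```python
from typing import Any, Callable, Dict

def run_underwriting(
    client_payload: Dict[str, Any],
    kickoff: Callable[[Dict[str, Any]], str],
) -> Dict[str, Any]:
    """Service-layer wrapper around the crew.

    `kickoff` is injected (e.g. lambda inputs: crew.kickoff(inputs=inputs))
    so unit tests can pass a stub instead of making live LLM calls.
    """
    required = {"client_id", "jurisdiction", "requested_product"}
    missing = required - client_payload.keys()
    if missing:
        # Never let the model guess missing financial facts.
        return {"status": "escalate", "reason": f"missing fields: {sorted(missing)}"}
    memo = kickoff({"client_data": client_payload})
    return {"status": "completed", "memo": str(memo)}
```

Your API handler then only builds the payload and calls this function, which keeps the orchestration logic out of the HTTP layer.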

When you add tool use later, restrict the agents to approved retrieval tools only. For example:

  • internal policy document search
  • CRM lookup for verified client fields
  • product eligibility service
  • jurisdiction rules service

Keep those tools read-only unless there is a hard business case for writes.
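A read-only retrieval tool can be as simple as a function over an approved index. This sketch uses a hypothetical in-memory POLICY_DOCS stand-in; in production it would query your policy document search service, and could be registered with CrewAI's tool mechanism:

```python
from typing import Dict, List

# Hypothetical stand-in for an internal policy document index.
POLICY_DOCS: Dict[str, str] = {
    "structured-notes": "Structured notes require net worth >= $1M and horizon >= 5 years.",
    "private-credit": "Private credit funds are restricted for retail-tier clients.",
}

def search_policy_docs(query: str) -> List[str]:
    """Read-only lookup: returns matching policy excerpts, never mutates state."""
    q = query.lower()
    return [
        text for key, text in POLICY_DOCS.items()
        if q in key or q in text.lower()
    ]
```

Because the tool only reads, a misbehaving agent can at worst retrieve the wrong excerpt, never change client or policy data.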

Production Considerations

  • Compliance first

    • Every recommendation must include an evidence trail tied to approved sources.
    • Store task outputs for audit review so compliance can reconstruct why a decision was made.
  • Data residency

    • Wealth data often falls under regional storage requirements.
    • Pin execution to approved regions and avoid sending PII to non-compliant providers or cross-border endpoints.
  • Human-in-the-loop escalation

    • Auto-approve only low-risk cases with clear policy fit.
    • Escalate borderline cases like high concentration exposure, illiquid holdings mismatch, or uncertain source data.
  • Monitoring

    • Track approval rates by product type, false escalations, latency per task stage, and missing-field frequency.
    • Alert on drift when recommendations change after prompt or model updates.
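The auto-approve/escalate split above can be sketched as a deterministic gate that runs before any model output is trusted. The thresholds here are placeholders, not real policy values:

```python
from typing import Dict, List, Tuple

# Placeholder thresholds; real values come from firm policy.
MAX_SINGLE_POSITION_PCT = 0.20
MIN_LIQUID_RATIO_FOR_ILLIQUID_PRODUCTS = 0.25

def escalation_gate(case: Dict) -> Tuple[str, List[str]]:
    """Deterministic pre-check: collect risk flags, escalate if any fire."""
    flags: List[str] = []
    if case.get("max_position_pct", 0) > MAX_SINGLE_POSITION_PCT:
        flags.append("concentration_exposure")
    if case.get("product_is_illiquid") and \
            case.get("liquid_ratio", 0) < MIN_LIQUID_RATIO_FOR_ILLIQUID_PRODUCTS:
        flags.append("illiquid_holdings_mismatch")
    if case.get("data_stale"):
        flags.append("uncertain_source_data")
    return ("escalate" if flags else "auto_approve_candidate", flags)
```

Running this gate outside the LLM loop means borderline cases reach a human even if a prompt or model update changes the agents' behavior.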

Common Pitfalls

  1. Using one agent for everything

    • This creates vague reasoning and hard-to-debug failures.
    • Split research, policy checking, and memo writing into separate agents/tasks.
  2. Letting the model infer missing financial facts

    • In wealth management that is dangerous.
    • If net worth or liquidity is missing or stale, return “escalate” instead of guessing.
  3. Skipping jurisdiction-specific rules

    • A rule set that works in one region may fail in another.
    • Encode residency constraints, local suitability rules, and product restrictions before deployment.
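Pitfall 3 can be mitigated with a small fail-closed lookup. JURISDICTION_RULES and its entries are hypothetical; real rules belong in a governed jurisdiction rules service:

```python
from typing import Dict, List

# Hypothetical per-jurisdiction product restrictions.
JURISDICTION_RULES: Dict[str, List[str]] = {
    "US-NY": ["private_credit_fund"],
    "EU-DE": ["structured_note", "private_credit_fund"],
}

def jurisdiction_blocked(jurisdiction: str, product: str) -> bool:
    """True when the product is restricted in the client's jurisdiction.

    Unknown jurisdictions are treated as blocked (fail closed), which
    forces an escalation instead of a silent approval.
    """
    rules = JURISDICTION_RULES.get(jurisdiction)
    if rules is None:
        return True
    return product in rules
```

The fail-closed default matters: a region your rule set has never seen should never produce an auto-approval.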

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
