How to Build a Loan Approval Agent Using CrewAI in Python for Wealth Management
A loan approval agent in wealth management triages applications, pulls the right client context, checks policy and eligibility, and drafts a decision package for a human reviewer. It matters because high-net-worth clients expect fast turnaround, but the bank still needs strict compliance, auditability, and consistent underwriting decisions.
Architecture
- Intake layer
  - Normalizes application data from CRM, PDF forms, or API payloads.
  - Validates required fields before any agent work starts.
- Client context retriever
  - Pulls portfolio value, liquidity profile, existing exposure, relationship tier, and KYC status.
  - Keeps the agent grounded in wealth-management-specific facts.
- Policy reasoning agent
  - Evaluates the request against lending rules, concentration limits, debt-service constraints, and exceptions policy.
  - Produces a structured recommendation with rationale.
- Compliance checker
  - Verifies AML/KYC flags, sanctions screening status, suitability constraints, and jurisdiction-specific rules.
  - Forces escalation when policy is ambiguous or incomplete.
- Decision composer
  - Assembles the final output into an auditable approval/decline/manual-review packet.
  - Includes citations to source systems and rule outcomes.
- Audit logger
  - Persists prompts, tool outputs, intermediate reasoning artifacts, and final decisions.
  - Supports model risk management and regulator review.
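The intake layer's validation step can be sketched as a plain function that runs before any agent work; the required-field set and the `validate_application` helper below are illustrative, not part of CrewAI:

```python
REQUIRED_FIELDS = {"client_id", "requested_amount", "term_months", "purpose", "jurisdiction"}

def validate_application(application: dict) -> list[str]:
    """Return a list of validation errors; an empty list means intake passes."""
    errors = [f"missing field: {field}" for field in REQUIRED_FIELDS - application.keys()]
    amount = application.get("requested_amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount <= 0):
        errors.append("requested_amount must be a positive number")
    return errors
```

Applications that fail this check get rejected or routed to a human before a single token is spent.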
Implementation
1) Install CrewAI and define the agent roles
Use one agent for underwriting analysis and one for compliance review. Keep them separate so you can audit decisions independently and swap policies without touching the rest of the workflow.
```python
from crewai import Agent

underwriting_agent = Agent(
    role="Loan Underwriter",
    goal="Evaluate loan applications for wealth management clients using policy and client context",
    backstory=(
        "You are a senior private banking underwriter. "
        "You assess repayment capacity, collateral quality, concentration risk, "
        "and relationship value while staying within lending policy."
    ),
    verbose=True,
)

compliance_agent = Agent(
    role="Compliance Reviewer",
    goal="Check loan applications for regulatory and internal policy issues",
    backstory=(
        "You are a banking compliance specialist. "
        "You verify KYC/AML status, suitability constraints, jurisdictional issues, "
        "and escalation triggers before any lending decision is finalized."
    ),
    verbose=True,
)
```
2) Add tasks with structured outputs
In wealth management, free-form answers are not enough. Force the model to return a decision object so downstream systems can store it in case management or core banking.
```python
from crewai import Task

underwriting_task = Task(
    description=(
        # {application} is interpolated from crew.kickoff(inputs=...)
        "Review this loan application using client context and internal lending policy: "
        "{application}. "
        "Return JSON with fields: decision, reasons, risks, required_conditions."
    ),
    expected_output="A JSON object with an underwriting recommendation.",
    agent=underwriting_agent,
)

compliance_task = Task(
    description=(
        "Review the same application for compliance issues: {application}. "
        "Return JSON with fields: cleared_for_review, issues_found, escalation_required."
    ),
    expected_output="A JSON object with compliance findings.",
    agent=compliance_agent,
)
```
3) Wire in tools for real bank data access
CrewAI agents become useful when they can query systems of record. In production you would wrap approved internal APIs as tools; below is a simple pattern using BaseTool.
```python
from crewai.tools import BaseTool

class ClientProfileTool(BaseTool):
    name: str = "client_profile_lookup"
    description: str = "Fetch wealth client profile data from internal systems"

    def _run(self, client_id: str) -> str:
        # Stub response; in production this would call an approved internal API.
        return (
            f"client_id={client_id}, portfolio_value=12500000,"
            f" liquidity=3200000, kyc_status=cleared,"
            f" aml_status=cleared, relationship_tier=private_banking"
        )

profile_tool = ClientProfileTool()
underwriting_agent.tools = [profile_tool]
compliance_agent.tools = [profile_tool]
```
4) Execute the crew and pass application context
Use Crew with sequential execution so compliance can gate the final recommendation. That pattern fits regulated workflows better than letting multiple agents free-run.
```python
from crewai import Crew, Process

application = {
    "client_id": "C-10482",
    "requested_amount": 750000,
    "term_months": 36,
    "purpose": "Portfolio-backed working capital",
    "jurisdiction": "US",
}

crew = Crew(
    agents=[underwriting_agent, compliance_agent],
    tasks=[underwriting_task, compliance_task],
    process=Process.sequential,  # compliance gates the recommendation
    verbose=True,
)

result = crew.kickoff(inputs={"application": application})
print(result)
```
A practical production version should also add a deterministic post-processor that parses the returned JSON and enforces hard rules:
- Decline if AML/KYC is not cleared
- Escalate if requested amount exceeds exposure limits
- Route to human review if any field is missing or malformed
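A minimal sketch of such a post-processor, assuming the JSON field names requested in the task prompts above; the `route_decision` helper and the exposure limit are illustrative, and a real bank would load these thresholds from policy configuration:

```python
import json

EXPOSURE_LIMIT = 5_000_000  # illustrative hard limit, not a real policy value

def route_decision(underwriting_json: str, compliance_json: str, requested_amount: float) -> str:
    """Apply hard rules outside the LLM before any recommendation is accepted."""
    try:
        uw = json.loads(underwriting_json)
        comp = json.loads(compliance_json)
    except json.JSONDecodeError:
        return "manual_review"  # malformed output never reaches core banking

    if not comp.get("cleared_for_review"):
        return "decline"  # AML/KYC not cleared
    if comp.get("escalation_required") or requested_amount > EXPOSURE_LIMIT:
        return "escalate"
    if uw.get("decision") not in {"approve", "decline", "manual_review"}:
        return "manual_review"  # unexpected decision value
    return uw["decision"]
```

The LLM only ever produces a recommendation; this deterministic layer decides what the system of record actually stores.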
Production Considerations
- Auditability
  - Store every input payload, tool response, task output, and final recommendation.
  - Keep immutable logs with timestamps and model/version identifiers for model risk review.
- Data residency
  - Keep client PII inside approved regions and approved vendors only.
  - If your bank operates across jurisdictions, route EU client data to EU-hosted infrastructure and avoid cross-border prompt leakage.
- Guardrails
  - Enforce hard policy checks outside the LLM before any decision is accepted.
  - Use schema validation on outputs so a malformed JSON response never reaches core banking.
- Monitoring
  - Track approval rate by segment, manual-review rate, exception frequency, and false-positive compliance escalations.
  - Watch for drift between agent recommendations and human underwriters by product type or region.
Common Pitfalls
- Letting the agent make final credit decisions without deterministic controls
  - Fix it by keeping approval as a recommendation until rule checks pass and a human signs off where required.
- Mixing underwriting logic with compliance logic in one agent
  - Fix it by separating responsibilities into distinct agents and tasks so you can audit each path independently.
- Ignoring structured output validation
  - Fix it by requiring JSON responses and validating them against a schema before persistence or downstream execution.
- Using external tools without residency or access controls
  - Fix it by wrapping only approved internal APIs as tools and restricting data access by region, role, and client segment.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.