How to Build a Loan Approval Agent Using AutoGen in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, autogen, python, wealth-management

A loan approval agent in wealth management takes a client’s application, pulls the relevant portfolio and relationship data, checks policy constraints, and produces a decision recommendation with an audit trail. The point is not to automate judgment blindly; it is to compress analyst time on routine cases while keeping compliance, suitability, and exception handling intact.

Architecture

  • Client intake layer

    • Normalizes loan request data: amount, tenor, collateral, jurisdiction, relationship type.
    • Rejects incomplete submissions before any agent work starts.
  • Policy and eligibility engine

    • Encodes lending rules: LTV thresholds, minimum AUM, concentration limits, KYC/AML status.
    • Produces deterministic pass/fail signals that agents can cite.
  • AutoGen multi-agent workflow

    • Uses a coordinator plus specialist agents:
      • AssistantAgent for analysis
      • UserProxyAgent for tool execution and controlled escalation
    • Separates reasoning from execution.
  • Data access tools

    • Pulls approved sources only: CRM, portfolio system, risk engine, document store.
    • Keeps raw PII out of model prompts where possible.
  • Decision and audit layer

    • Captures every intermediate recommendation.
    • Stores rationale, inputs used, policy checks, and final outcome for review.
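The intake layer above can be sketched as a small validator that rejects incomplete submissions before any agent work starts. The field names here are illustrative and mirror the case structure used later in this guide; align them with your own intake schema.

```python
REQUIRED_FIELDS = {
    "client_id", "jurisdiction", "aum_usd", "loan_amount_usd",
    "collateral_value_usd", "kyc_status", "aml_status",
    "risk_rating", "relationship_tenure_years",
}

def validate_intake(submission: dict) -> list[str]:
    """Return a list of problems; an empty list means the submission may proceed."""
    missing = sorted(REQUIRED_FIELDS - submission.keys())
    problems = [f"missing field: {name}" for name in missing]
    if "loan_amount_usd" in submission and submission["loan_amount_usd"] <= 0:
        problems.append("loan_amount_usd must be positive")
    return problems
```

Anything this validator flags should be bounced back to the advisor, not passed to an agent to "work around."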

Implementation

1) Install AutoGen and define the decision context

Install the framework first (pip install pyautogen provides the autogen import used below). For wealth management, keep the prompt narrow and the data structured. You want the agent to reason over approved facts, not free-form client narratives.

import os
import json

from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "model": "gpt-4o-mini",
        "api_key": os.environ["OPENAI_API_KEY"],  # never hard-code keys in source
    }
]

llm_config = {
    "config_list": config_list,
    "temperature": 0,
}

loan_case = {
    "client_id": "C10291",
    "jurisdiction": "US",
    "aum_usd": 2450000,
    "loan_amount_usd": 350000,
    "collateral_value_usd": 900000,
    "kyc_status": "passed",
    "aml_status": "passed",
    "risk_rating": "medium",
    "relationship_tenure_years": 6,
}

policy = {
    "max_ltv": 0.5,
    "min_aum_for_unsecured": 1000000,
    "requires_kyc_passed": True,
    "requires_aml_passed": True,
}

2) Create specialist agents with explicit responsibilities

Use one agent to analyze the case and another to execute controlled actions. AssistantAgent handles reasoning; UserProxyAgent can be configured to require human approval before running anything sensitive.

loan_underwriter = AssistantAgent(
    name="loan_underwriter",
    llm_config=llm_config,
    system_message=(
        "You are a loan approval analyst for wealth management. "
        "Use only provided case data and policy. "
        "Return a concise recommendation with reasons, risks, and policy references. "
        "Do not invent missing facts."
    ),
)

compliance_proxy = UserProxyAgent(
    name="compliance_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)
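For sensitive deployments, human_input_mode can be set to "ALWAYS" so the proxy pauses for analyst input. One way to sketch the choice of gate per case is below; the threshold is an assumption for illustration, not a policy recommendation.

```python
def choose_input_mode(loan_amount_usd: float, threshold_usd: float = 500_000) -> str:
    """Pick the UserProxyAgent human_input_mode for a given loan size.

    "ALWAYS" pauses for analyst approval on each turn; "NEVER" runs
    unattended and is suitable only for low-risk, routine cases.
    """
    return "ALWAYS" if loan_amount_usd >= threshold_usd else "NEVER"
```

The returned string would then be passed as human_input_mode when constructing the proxy agent for that case.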

3) Run a structured assessment and force an auditable output

The clean pattern is: send structured JSON in, get structured JSON out. That makes downstream storage and review much easier than parsing prose.

prompt = f"""
Assess this loan application using the policy below.

CASE:
{json.dumps(loan_case, indent=2)}

POLICY:
{json.dumps(policy, indent=2)}

Return JSON with keys:
- decision: approve | decline | escalate
- reasons: array of strings
- policy_checks: object
- suggested_conditions: array of strings
"""

result = compliance_proxy.initiate_chat(
    loan_underwriter,
    message=prompt,
)

print(result.chat_history[-1]["content"])
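Model replies are not guaranteed to be bare JSON; they often arrive wrapped in prose or code fences. A defensive parser, sketched below, pulls the first JSON object out of the reply so downstream storage never depends on the model's formatting discipline.

```python
import json
import re

def extract_json(reply: str) -> dict:
    """Extract the first {...} block from an agent reply, tolerating prose and fences."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))
```

If parsing fails, treat it as an escalation, not an approval or decline.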

4) Add deterministic pre-checks before the agent makes a recommendation

In production, do not let the model be the first line of defense. Hard rules should run before any LLM call.

def precheck(case: dict, policy: dict) -> list[str]:
    issues = []

    if policy["requires_kyc_passed"] and case["kyc_status"] != "passed":
        issues.append("KYC not passed")

    if policy["requires_aml_passed"] and case["aml_status"] != "passed":
        issues.append("AML not passed")

    # Guard against unsecured loans (no collateral) before dividing.
    if case["collateral_value_usd"] > 0:
        ltv = case["loan_amount_usd"] / case["collateral_value_usd"]
        if ltv > policy["max_ltv"]:
            issues.append(f"LTV {ltv:.2f} exceeds max {policy['max_ltv']:.2f}")
    elif case["aum_usd"] < policy["min_aum_for_unsecured"]:
        issues.append("Unsecured loan below minimum AUM")

    return issues

issues = precheck(loan_case, policy)
if issues:
    print({
        "decision": "decline",
        "reasons": issues,
        "policy_checks": {"precheck_failed": True},
        "suggested_conditions": [],
    })
else:
    print("Precheck passed; send to AutoGen agent.")

Production Considerations

  • Keep sensitive data out of prompts

    • Mask account numbers, tax IDs, and full statements.
    • Pass only fields required for underwriting.
    • For wealth management clients, this reduces privacy exposure and model leakage risk.
  • Enforce residency and retention controls

    • Route EU client cases to EU-hosted inference or approved private deployment.
    • Store transcripts in region-bound storage with retention aligned to regulatory policy.
    • Audit logs should be immutable or WORM-backed where required.
  • Add human approval gates for exceptions

    • Any decline override, high-value facility, or policy exception should require analyst sign-off.
    • Use AutoGen for recommendation generation; do not let it finalize credit decisions alone.
    • This matters when decisions affect fiduciary relationships and suitability obligations.
  • Monitor drift in both data and outcomes

    • Track approval rates by segment: AUM band, jurisdiction, advisor team.
    • Watch for changes in LTV distributions or exception frequency.
    • Compare agent recommendations against post-review outcomes from credit committees.
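The data-minimization point above can be sketched as a whitelist filter: only fields required for underwriting reach the prompt, and identifiers are dropped before any LLM call. The field list here is an assumption; align it with your underwriting schema.

```python
UNDERWRITING_FIELDS = {
    "jurisdiction", "aum_usd", "loan_amount_usd", "collateral_value_usd",
    "kyc_status", "aml_status", "risk_rating", "relationship_tenure_years",
}

def mask_for_prompt(case: dict) -> dict:
    """Keep only underwriting fields; identifiers like client_id never reach the model."""
    return {k: v for k, v in case.items() if k in UNDERWRITING_FIELDS}
```

The dropped client_id stays in your audit store, keyed to the request, so the decision remains traceable without exposing the identifier to the model.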

Common Pitfalls

  • Letting the model infer missing facts

    • Bad pattern: asking the agent to “fill in” collateral quality or client intent.
    • Fix: require explicit fields; if missing, return escalate.
  • Skipping deterministic controls

    • Bad pattern: sending every application straight into AssistantAgent.
    • Fix: run hard checks first for KYC/AML/LTV/jurisdiction constraints.
  • No audit trail for recommendations

    • Bad pattern: storing only the final decision text.
    • Fix: persist input payloads, policy version hash, prompt version, model version (gpt-4o-mini or whatever you deploy), and final output together.
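The audit-trail fix above might look like the following sketch. The record shape and the SHA-256 policy hash are illustrative choices, not a compliance standard; adapt them to your retention requirements.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_audit_record(case: dict, policy: dict, prompt_version: str,
                       model: str, output: dict) -> dict:
    """Bundle everything needed to reproduce and review one recommendation."""
    policy_hash = hashlib.sha256(
        json.dumps(policy, sort_keys=True).encode()
    ).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_case": case,
        "policy_version_hash": policy_hash,
        "prompt_version": prompt_version,
        "model_version": model,
        "output": output,
    }
```

Write this record in the same transaction as the decision itself, so a recommendation can never exist without its provenance.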

A good wealth-management loan agent is boring in the right places. It should be predictable on policy checks, strict on data handling, and useful only where human analysts add value: borderline cases, exception review, and faster turnaround on standard applications.



By Cyprian Aarons, AI Consultant at Topiax.
