How to Build an Underwriting Agent Using AutoGen in Python for Lending

By Cyprian Aarons · Updated 2026-04-21
Tags: underwriting, autogen, python, lending

An underwriting agent for lending takes borrower data, pulls the right signals, applies policy rules, and produces a decision package you can review or route to a human. It matters because lending decisions need to be fast, consistent, and auditable, while still respecting compliance, data residency, and credit policy constraints.

Architecture

  • Loan intake service
    • Accepts application payloads: identity, income, liabilities, requested amount, term, jurisdiction.
  • Policy engine
    • Encodes lender rules: minimum DTI, max LTV, prohibited states/countries, documentation requirements.
  • AutoGen agent group
    • Orchestrates analysis between a primary underwriting agent and specialized assistants for fraud, compliance, and explanation.
  • Evidence store
    • Persists source documents, extracted features, model outputs, and the final decision trail for audit.
  • Decision formatter
    • Converts the agent’s output into a structured underwriting memo with approve/decline/refer reasons.
  • Human review queue
    • Captures borderline cases and exceptions that require manual approval.
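To make the decision formatter's output concrete, here is one possible shape for the underwriting memo it produces. The field names are illustrative, not a fixed AutoGen contract; align them with your own review workflow.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative memo schema for the decision formatter; the field
# names here are an example, not a standard.
@dataclass
class UnderwritingMemo:
    applicant_id: str
    decision: str                  # "approve", "decline", or "refer"
    reasons: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)
    missing_docs: List[str] = field(default_factory=list)
    manual_review_required: bool = False
```

A typed schema like this also gives the human review queue a stable contract to render and filter on.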

Implementation

1) Install AutoGen and define the underwriting inputs

Use the current AutoGen Python package and keep your underwriting payload explicit. For lending systems, avoid free-form input where possible; structure everything so it can be validated and logged.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LoanApplication:
    applicant_id: str
    annual_income: float
    monthly_debt: float
    requested_amount: float
    property_value: Optional[float]
    state: str
    purpose: str
    has_bank_statements: bool
    has_paystubs: bool

def debt_to_income(monthly_debt: float, annual_income: float) -> float:
    return (monthly_debt * 12) / annual_income if annual_income else 1.0

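Since the intake service should reject free-form or incomplete payloads before any agent sees them, a minimal validation pass might look like the sketch below. The required-field set mirrors `LoanApplication` above; the specific checks are assumptions to adapt to your program.

```python
# Fields every raw payload must carry before it becomes a LoanApplication.
REQUIRED_FIELDS = {"applicant_id", "annual_income", "monthly_debt",
                   "requested_amount", "state", "purpose"}

def validate_payload(payload: dict) -> list:
    """Return a list of validation errors; an empty list means the
    payload is structurally usable and can be logged as-is."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - payload.keys()]
    if payload.get("annual_income", 0) <= 0:
        errors.append("annual_income must be positive")
    if payload.get("requested_amount", 0) <= 0:
        errors.append("requested_amount must be positive")
    if len(str(payload.get("state", ""))) != 2:
        errors.append("state must be a two-letter code")
    return errors
```

Rejecting bad payloads here keeps every downstream prompt, metric, and audit record well-formed.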
2) Create an underwriting assistant with a strict system prompt

The key is not “let the model decide.” The key is “let the model reason over policy-bound inputs and produce a structured recommendation.” In AutoGen, AssistantAgent is the main worker here.

import autogen

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": "YOUR_OPENAI_API_KEY",  # placeholder; load from a secret store in production
        }
    ],
    "temperature": 0,  # deterministic output for underwriting
}

underwriter = autogen.AssistantAgent(
    name="underwriter",
    llm_config=llm_config,
    system_message=(
        "You are an underwriting assistant for consumer lending. "
        "Use only the provided application data and policy rules. "
        "Return JSON with keys: decision, reasons, risks, missing_docs, "
        "manual_review_required. Do not invent facts. "
        "If data is incomplete or policy thresholds are exceeded, recommend refer or decline."
    ),
)

user_proxy = autogen.UserProxyAgent(
    name="policy_gateway",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,  # stop after the assistant's first reply
    code_execution_config=False,   # never execute model-suggested code
)

3) Run a policy-bound assessment and parse the result

This example sends a single application through AutoGen using initiate_chat. In production you would wrap this in an API endpoint and persist both prompt and response for audit.

import json

app = LoanApplication(
    applicant_id="A12345",
    annual_income=120000,
    monthly_debt=1800,
    requested_amount=250000,
    property_value=320000,
    state="CA",
    purpose="purchase",
    has_bank_statements=True,
    has_paystubs=False,
)

dti = debt_to_income(app.monthly_debt, app.annual_income)
ltv = app.requested_amount / app.property_value if app.property_value else None

policy = {
    "max_dti": 0.43,
    "max_ltv": 0.80,
    "restricted_states": ["NY"],
}

# Format LTV outside the f-string: a conditional inside a format spec
# (e.g. {ltv:.3f if ...}) is not valid and raises at runtime.
ltv_str = f"{ltv:.3f}" if ltv is not None else "null"

prompt = f"""
Assess this loan application against policy.

Application:
{json.dumps(app.__dict__, indent=2)}

Derived metrics:
- dti: {dti:.3f}
- ltv: {ltv_str}

Policy:
{json.dumps(policy, indent=2)}

Return valid JSON only.
"""

chat_result = user_proxy.initiate_chat(
    underwriter,
    message=prompt,
)

print(chat_result.summary)
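The system prompt asks for JSON, but models occasionally wrap replies in a code fence or drop a key, so parse defensively and treat anything malformed as a referral rather than a decision. A sketch, assuming the reply text comes back in `chat_result.summary`:

```python
import json

# Keys the system prompt asks the underwriter agent to return.
REQUIRED_KEYS = {"decision", "reasons", "risks", "missing_docs",
                 "manual_review_required"}

def parse_recommendation(raw: str):
    """Parse the assistant's reply. Return the dict on success, or
    None so the caller can reject it and route to manual review."""
    text = raw.strip()
    # Tolerate a fenced reply like ```json ... ```
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return None
    if not REQUIRED_KEYS.issubset(data):
        return None
    if data["decision"] not in {"approve", "decline", "refer"}:
        return None
    return data
```

This enforces the "reject malformed responses" guardrail in code instead of trusting the prompt alone.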

If you want multi-agent review instead of one assistant doing everything, add specialized agents for fraud and compliance. AutoGen’s GroupChat and GroupChatManager let you route the same case through multiple roles before producing the final memo.

fraud_agent = autogen.AssistantAgent(
    name="fraud_checker",
    llm_config=llm_config,
    system_message="Review for document inconsistency, income anomalies, identity risk. Return concise findings.",
)

compliance_agent = autogen.AssistantAgent(
    name="compliance_checker",
    llm_config=llm_config,
    system_message="Check lending compliance issues including missing disclosures, restricted jurisdictions, fair lending concerns.",
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, underwriter, fraud_agent, compliance_agent],
    messages=[],
    max_round=6,  # bound the conversation so a case cannot loop indefinitely
)

manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
user_proxy.initiate_chat(manager, message=prompt)

4) Wrap the output in a deterministic decision layer

Do not let an LLM emit the final credit decision without guardrails. Use deterministic thresholds first; then use AutoGen to explain or escalate.

Rule                     Action
DTI > max_dti            Refer or decline
LTV > max_ltv            Refer
Missing required docs    Refer
Restricted state         Decline
Clean policy pass        Approve candidate

That pattern keeps your model inside a controlled lane. It also makes audits much easier because you can show exactly which rule triggered each outcome.
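The rule table above translates directly into a small deterministic function that runs before any agent is invoked; the agent then explains or escalates but never overrides it. A sketch, using the same `policy` dict as earlier:

```python
from typing import Optional

def policy_decision(dti: float, ltv: Optional[float],
                    missing_docs: bool, state: str,
                    policy: dict) -> str:
    """Deterministic first pass over the rule table.
    Returns 'decline', 'refer', or 'approve_candidate'."""
    if state in policy["restricted_states"]:
        return "decline"            # Restricted state -> Decline
    if dti > policy["max_dti"]:
        return "refer"              # DTI breach -> Refer (or decline, per program)
    if ltv is not None and ltv > policy["max_ltv"]:
        return "refer"              # LTV breach -> Refer
    if missing_docs:
        return "refer"              # Missing required docs -> Refer
    return "approve_candidate"      # Clean policy pass -> Approve candidate
```

Because each branch maps one-to-one onto a table row, an auditor can trace any outcome back to the exact rule that fired.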

Production Considerations

  • Deployment
    • Put the agent behind an internal API with request signing and tenant isolation.
    • Keep prompts and outputs versioned so each decision can be reproduced later.
  • Monitoring
    • Track approval rate by segment, manual review rate, override rate, hallucination incidents, and missing-document frequency.
    • Alert on drift in DTI/LTV distributions or sudden changes in referral patterns.
  • Guardrails
    • Enforce hard rules outside the model for restricted geographies, minimum documentation sets, adverse-action triggers, and maximum exposure limits.
    • Require structured JSON output and reject malformed responses.
  • Compliance and residency
    • Store PII in-region if your lending program requires it.
    • Redact sensitive fields before sending context to any external model endpoint when policy allows it.

Common Pitfalls

  • Letting the model make unbounded decisions

    • Fix it by separating deterministic policy checks from narrative reasoning.
    • The model should explain or classify within rules already enforced by code.
  • Skipping audit logs

    • Fix it by storing input payloads, derived metrics like DTI/LTV, prompt version, model version, and final recommendation.
    • Lending teams need full traceability for disputes and regulatory reviews.
  • Mixing compliance logic into prompt text only

    • Fix it by codifying jurisdiction rules in Python first.
    • Prompts are brittle; hard rules belong in application code where they can be tested.
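For the audit-log pitfall, one possible shape for an append-only record is sketched below; the field names are an assumption, but hashing the raw payload gives you a cheap way to prove later which inputs produced a given decision.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(payload: dict, metrics: dict, prompt_version: str,
                 model: str, recommendation: dict) -> dict:
    """Build one append-only audit entry per assessed application.
    The payload hash ties the decision to its exact inputs."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload_sha256": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest(),
        "derived_metrics": metrics,          # e.g. {"dti": ..., "ltv": ...}
        "prompt_version": prompt_version,
        "model_version": model,
        "recommendation": recommendation,
    }
```

Write these records to the evidence store alongside the source documents so disputes and regulatory reviews can replay the full decision trail.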

A good underwriting agent does three things well: it standardizes review quality across cases; it escalates edge cases instead of guessing; and it leaves behind an audit trail that stands up to legal and compliance scrutiny. If you build it that way from day one, AutoGen becomes a coordination layer—not a credit risk engine with no brakes.



By Cyprian Aarons, AI Consultant at Topiax.
