How to Build a Loan Approval Agent Using AutoGen in Python for Lending
A loan approval agent automates the first pass of lending decisions: it collects applicant data, checks policy rules, scores risk, and produces a decision recommendation with an audit trail. In lending, that matters because you need speed without losing control over compliance, explainability, and consistency across applications.
Architecture

- Applicant intake service
  - Receives structured loan application data from your API or CRM.
  - Normalizes fields like income, debt, employment status, requested amount, and jurisdiction.
- Policy engine
  - Encodes hard rules such as minimum credit score, debt-to-income thresholds, residency constraints, and product eligibility.
  - Returns deterministic pass/fail outcomes before any model-based reasoning.
- AutoGen decision team
  - Uses `AssistantAgent` instances to evaluate risk, explain exceptions, and draft a recommendation.
  - Keeps the logic modular so one agent can focus on underwriting while another checks compliance.
- Human review gate
  - Escalates borderline cases to an underwriter.
  - Prevents fully automated approval for cases that violate policy or exceed confidence thresholds.
- Audit logger
  - Stores inputs, outputs, tool calls, and final rationale.
  - Supports model governance, regulatory review, and dispute resolution.
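Before wiring in AutoGen, it helps to see how these components hand off to each other. The sketch below is a minimal skeleton with illustrative stubs (`normalize`, `process_application`, and this trimmed `policy_gate` are assumptions for the sketch, not AutoGen APIs):

```python
# Minimal end-to-end skeleton wiring the components together.
# All helpers here are illustrative stubs, not AutoGen APIs.

def normalize(raw):
    # Applicant intake: coerce and rename fields into a canonical shape.
    return {
        "applicant_id": raw["id"],
        "credit_score": int(raw["credit_score"]),
        "dti": float(raw["dti"]),
        "country": raw["country"].upper(),
    }

def policy_gate(app):
    # Policy engine: deterministic pass/fail before any model call.
    issues = []
    if app["credit_score"] < 620:
        issues.append("credit_score_below_minimum")
    return {"approved_for_llm_review": not issues, "issues": issues}

def process_application(raw):
    app = normalize(raw)
    gate = policy_gate(app)
    if not gate["approved_for_llm_review"]:
        # Human review gate: never auto-decide a failed policy check.
        return {"decision": "manual_review", "reason": gate["issues"]}
    # The AutoGen decision team and deterministic mapper run here.
    return {"decision": "pending_agent_review"}
```

The point of the skeleton is the control flow: the deterministic gate runs first, and the model-backed team is only reached for applications that pass it.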
Implementation
1) Define the decision workflow
Use AutoGen to create specialized agents. One agent acts as the underwriter analyst, another as a compliance checker. For lending systems, keep the final decision deterministic: the LLM recommends; your code decides.
```python
import os

from autogen import AssistantAgent

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    ],
    "temperature": 0,
}

underwriter = AssistantAgent(
    name="underwriter",
    llm_config=llm_config,
    system_message=(
        "You are a lending underwriter. "
        "Assess loan applications using provided facts only. "
        "Return concise JSON with keys: risk_level, recommendation, rationale."
    ),
)

compliance = AssistantAgent(
    name="compliance",
    llm_config=llm_config,
    system_message=(
        "You are a lending compliance reviewer. "
        "Check for policy violations, fair lending concerns, and missing required fields. "
        "Return concise JSON with keys: status, issues."
    ),
)
```
2) Add a deterministic policy gate
This is where you enforce hard constraints like minimum credit score or maximum DTI. Do not ask the model to enforce rules you can encode in Python.
```python
def policy_gate(app):
    issues = []
    if app["credit_score"] < 620:
        issues.append("credit_score_below_minimum")
    if app["dti"] > 0.43:
        issues.append("debt_to_income_too_high")
    if app["country"] not in {"US", "CA"}:
        issues.append("unsupported_jurisdiction")
    return {
        "approved_for_llm_review": len(issues) == 0,
        "issues": issues,
    }
```
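As written, `policy_gate` raises `KeyError` when a field is missing. A defensive variant (sketched here; `policy_gate_safe` is a hypothetical name) treats missing data as a failure rather than a pass:

```python
def policy_gate_safe(app):
    # Treat missing or malformed fields as hard failures, never as passes.
    issues = []
    score = app.get("credit_score")
    dti = app.get("dti")
    country = app.get("country")
    if score is None or score < 620:
        issues.append("credit_score_below_minimum_or_missing")
    if dti is None or dti > 0.43:
        issues.append("debt_to_income_too_high_or_missing")
    if country not in {"US", "CA"}:
        issues.append("unsupported_jurisdiction")
    return {"approved_for_llm_review": not issues, "issues": issues}
```

Failing closed on missing data matters in lending: an absent credit score should route to review, not slip past the gate.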
3) Run the agents and combine their outputs
Use generate_reply() to collect a structured recommendation from each agent. In production you would parse the returned JSON strictly; here the pattern is kept simple but realistic.
```python
import json

application = {
    "applicant_id": "A-10422",
    "loan_amount": 25000,
    "annual_income": 92000,
    "dti": 0.31,
    "credit_score": 701,
    "employment_years": 4,
    "country": "US",
}

gate = policy_gate(application)

if not gate["approved_for_llm_review"]:
    final_decision = {
        "decision": "manual_review",
        "reason": gate["issues"],
    }
else:
    prompt = f"""
Applicant data:
{json.dumps(application)}

Task:
1. Assess credit risk.
2. Identify any compliance concerns.
3. Recommend approve / deny / manual_review.

Return JSON only.
"""
    underwriter_reply = underwriter.generate_reply(
        messages=[{"role": "user", "content": prompt}]
    )
    compliance_reply = compliance.generate_reply(
        messages=[{"role": "user", "content": prompt}]
    )
    final_decision = {
        "decision_source": "agent_recommendation",
        "underwriter_output": underwriter_reply,
        "compliance_output": compliance_reply,
        "decision": "manual_review",  # replace with your own deterministic mapper
    }

print(final_decision)
```
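The placeholder above still needs the deterministic mapper. A minimal sketch parses each agent's JSON and falls back to manual review on anything unexpected; the key names match the system messages defined earlier, while the `"clear"` status value and `map_to_decision` helper are assumptions:

```python
import json

def extract_json(reply):
    # AutoGen replies may arrive as dicts or strings; normalize to a dict.
    if isinstance(reply, dict):
        reply = reply.get("content", "")
    try:
        return json.loads(reply)
    except (TypeError, json.JSONDecodeError):
        return None

def map_to_decision(underwriter_out, compliance_out):
    uw = extract_json(underwriter_out)
    comp = extract_json(compliance_out)
    if uw is None or comp is None:
        return "manual_review"  # unparseable output never auto-approves
    if comp.get("status") != "clear" or comp.get("issues"):
        return "manual_review"
    if uw.get("risk_level") == "low" and uw.get("recommendation") == "approve":
        return "approve"
    return "manual_review"
```

Note the asymmetry: only one narrow path leads to approval, and every parsing failure or compliance flag routes to a human.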
4) Persist an audit record
For lending workflows, every decision needs traceability. Store the raw input, policy result, agent output, and final decision in your database or log pipeline.
```python
import uuid
from datetime import datetime, timezone

def build_audit_record(application, gate_result, agent_outputs, final_decision):
    return {
        "audit_id": str(uuid.uuid4()),
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "application_id": application["applicant_id"],
        "input_snapshot": application,
        "policy_result": gate_result,
        "agent_outputs": agent_outputs,
        "final_decision": final_decision,
    }
```
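The audit record still needs durable storage. One lightweight option, sketched here as an assumption rather than a prescribed design, is an append-only JSON Lines file where each entry hashes the previous one so tampering is detectable:

```python
import hashlib
import json

def append_audit(path, record, prev_hash=""):
    # Chain each record to the previous digest; editing any earlier line
    # breaks every hash after it.
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps({"hash": digest, "record": record}) + "\n")
    return digest
```

In a real deployment you would more likely write to an append-only table or log pipeline, but the hash-chaining idea carries over.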
Production Considerations

- Deploy the LLM behind a controlled service boundary
  - Keep PII out of prompts unless absolutely required.
  - Use private networking and region-specific deployments when data residency applies.
- Monitor decision drift and exception rates
  - Track approval rate by segment, manual review rate, and override rate by underwriters.
  - Watch for changes in distribution across protected classes or proxy variables.
- Add guardrails before model calls
  - Enforce schema validation on every application payload.
  - Block missing KYC fields, unsupported jurisdictions, and stale bureau data before invoking AutoGen.
- Keep humans in the loop for adverse actions
  - If the outcome is deny or manual review, generate reason codes from deterministic rules.
  - Do not let the model invent denial reasons that cannot be traced back to policy.
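The schema-validation guardrail can be as simple as a typed field map checked before any model call. The field names below mirror the example application, and `validate_payload` is a hypothetical helper, not a library API:

```python
# Required fields and their expected Python types; extend as your
# application schema grows (KYC fields, bureau timestamps, etc.).
REQUIRED_FIELDS = {
    "applicant_id": str,
    "loan_amount": (int, float),
    "dti": float,
    "credit_score": int,
    "country": str,
}

def validate_payload(app):
    # Return a list of errors; an empty list means the payload may proceed.
    errors = []
    for field, typ in REQUIRED_FIELDS.items():
        if field not in app:
            errors.append(f"missing:{field}")
        elif not isinstance(app[field], typ):
            errors.append(f"wrong_type:{field}")
    return errors
```

For stricter guarantees, a library like pydantic or jsonschema can replace this hand-rolled check, but the placement is what matters: validation runs before AutoGen is ever invoked.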
Common Pitfalls

- Letting the model make final credit decisions
  - Avoid this by using AutoGen for analysis only.
  - Final approval should come from explicit code paths tied to underwriting policy.
- Sending raw sensitive data into prompts
  - Mask account numbers, government IDs, and unnecessary PII.
  - Only pass fields needed for underwriting and compliance review.
- Skipping auditability
  - If you do not persist prompts, outputs, and rule evaluations, you cannot defend the decision later.
  - Log every step with timestamps and immutable IDs.
- Using one generic agent for everything
  - Separate underwriting from compliance from escalation handling.
  - That makes prompts smaller, behavior more predictable, and reviews easier during audits.
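For the masking pitfall, a small regex scrubber applied to any free text before it reaches a prompt is a reasonable first line of defense. The `mask_pii` helper and patterns below are illustrative, not exhaustive; production systems typically pair this with allowlisting of known-safe fields:

```python
import re

def mask_pii(text):
    # Mask US SSN-shaped IDs entirely.
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "***-**-****", text)
    # Mask long digit runs (account/card numbers), keeping the last 4 digits.
    text = re.sub(
        r"\b\d{12,19}\b",
        lambda m: "*" * (len(m.group()) - 4) + m.group()[-4:],
        text,
    )
    return text
```

Run this on anything free-form (notes, uploaded descriptions) before it is interpolated into a prompt; structured fields are better handled by simply never selecting them.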
Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit