How to Build a Loan Approval Agent Using AutoGen in Python for Retail Banking
A loan approval agent automates the first pass of retail credit decisions: it gathers applicant data, checks policy rules, evaluates affordability and risk, and returns a recommendation with an audit trail. In retail banking, this matters because you want faster turnaround for customers without turning your credit policy into a black box.
Architecture
A production loan approval agent in AutoGen needs these components:
- **Applicant intake**
  - Normalizes application data from web forms, CRM, or LOS systems.
  - Converts raw input into a structured case object the agents can reason over.
- **Policy/rules agent**
  - Applies hard constraints like minimum income, debt-to-income thresholds, employment status, and product eligibility.
  - Rejects cases that violate policy before any model-based reasoning.
- **Risk analysis agent**
  - Reviews affordability, exposure, and basic fraud signals.
  - Produces a recommendation with explicit reasons tied to inputs.
- **Compliance reviewer agent**
  - Checks for adverse action language, fair lending concerns, and prohibited attributes.
  - Ensures the decision output is explainable and reviewable by humans.
- **Decision orchestrator**
  - Coordinates the agents in sequence.
  - Stops on hard failures and routes borderline cases to manual review.
- **Audit/logging layer**
  - Stores prompts, tool calls, outputs, and final decisions.
  - Supports model governance, dispute handling, and regulator requests.
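The intake layer above should turn loose form data into a typed case object before any agent runs. Here is a minimal sketch using a dataclass; `LoanCase` and `from_form` are illustrative names, and the normalization rules (trimming, casing, numeric coercion) are assumptions about what a typical web form or LOS export needs:

```python
from dataclasses import dataclass

@dataclass
class LoanCase:
    applicant_id: str
    monthly_income: float
    monthly_debt: float
    requested_amount: float
    employment_status: str
    credit_score: int
    country: str

    @classmethod
    def from_form(cls, raw: dict) -> "LoanCase":
        # Normalize the loose types and casing a web form or LOS export
        # typically produces into one consistent shape.
        return cls(
            applicant_id=str(raw["applicant_id"]).strip(),
            monthly_income=float(raw["monthly_income"]),
            monthly_debt=float(raw["monthly_debt"]),
            requested_amount=float(raw["requested_amount"]),
            employment_status=str(raw["employment_status"]).strip().lower(),
            credit_score=int(raw["credit_score"]),
            country=str(raw["country"]).strip().upper(),
        )

case = LoanCase.from_form({
    "applicant_id": " A12345 ",
    "monthly_income": "5200",
    "monthly_debt": "1800",
    "requested_amount": "15000",
    "employment_status": "Full_Time",
    "credit_score": "684",
    "country": "za",
})
```

A typed case object also gives you one obvious place to reject malformed applications before they ever reach a model.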
Implementation
1) Define the case payload and the agents
AutoGen’s AssistantAgent works well for each specialist role. Keep the prompt narrow so each agent does one job.
```python
from autogen import AssistantAgent, UserProxyAgent

llm_config = {
    "model": "gpt-4o-mini",
    "api_key": "YOUR_OPENAI_API_KEY",
    "temperature": 0,
}

policy_agent = AssistantAgent(
    name="policy_agent",
    llm_config=llm_config,
    system_message=(
        "You are a retail banking loan policy checker. "
        "Evaluate applications only against explicit policy rules. "
        "Return JSON with fields: decision, reasons, missing_fields."
    ),
)

risk_agent = AssistantAgent(
    name="risk_agent",
    llm_config=llm_config,
    system_message=(
        "You are a retail banking credit risk analyst. "
        "Assess affordability and repayment risk using only provided data. "
        "Return JSON with fields: risk_band, rationale, manual_review."
    ),
)

compliance_agent = AssistantAgent(
    name="compliance_agent",
    llm_config=llm_config,
    system_message=(
        "You are a banking compliance reviewer. "
        "Check for adverse action clarity, prohibited attributes, and explainability. "
        "Return JSON with fields: compliant, issues, remediation."
    ),
)

user_proxy = UserProxyAgent(
    name="orchestrator",
    human_input_mode="NEVER",
    code_execution_config=False,  # this proxy only relays messages; never executes code
)
```
2) Send the application through each stage
For production banking workflows, do not let one model make the whole decision. Chain deterministic policy checks first, then use LLM reasoning for explanation and triage.
```python
application = {
    "applicant_id": "A12345",
    "monthly_income": 5200,
    "monthly_debt": 1800,
    "requested_amount": 15000,
    "employment_status": "full_time",
    "credit_score": 684,
    "country": "ZA",
}

policy_prompt = f"""
Application:
{application}

Rules:
- Debt-to-income must be <= 45%
- Employment status must be full_time or permanent
- Credit score must be >= 650
- Country must be ZA for this product
"""

policy_result = user_proxy.initiate_chat(
    policy_agent,
    message=policy_prompt,
    max_turns=1,  # single request/response exchange; no auto-reply loop
)

risk_prompt = f"""
Application:
{application}

Policy result:
{policy_result.chat_history[-1]['content']}

Provide risk assessment for retail unsecured lending.
"""

risk_result = user_proxy.initiate_chat(
    risk_agent,
    message=risk_prompt,
    max_turns=1,
)

compliance_prompt = f"""
Application:
{application}

Policy result:
{policy_result.chat_history[-1]['content']}

Risk result:
{risk_result.chat_history[-1]['content']}

Check compliance for retail banking lending decision.
"""

compliance_result = user_proxy.initiate_chat(
    compliance_agent,
    message=compliance_prompt,
    max_turns=1,
)
```
3) Add a deterministic decision function
The LLM should recommend; your code should decide. This is where you enforce bank policy and produce a final outcome that can be audited.
```python
def calculate_dti(monthly_debt: float, monthly_income: float) -> float:
    if monthly_income <= 0:
        # Guard against divide-by-zero on malformed applications.
        return float("inf")
    return round((monthly_debt / monthly_income) * 100, 2)

def final_decision(app: dict) -> dict:
    dti = calculate_dti(app["monthly_debt"], app["monthly_income"])
    if dti > 45:
        return {"decision": "decline", "reason": f"DTI {dti}% exceeds threshold"}
    if app["employment_status"] not in {"full_time", "permanent"}:
        return {"decision": "decline", "reason": "Employment status not eligible"}
    if app["credit_score"] < 650:
        return {"decision": "decline", "reason": "Credit score below minimum"}
    return {"decision": "approve_for_manual_review", "reason": f"Passes initial policy checks; DTI {dti}%"}

decision = final_decision(application)
print(decision)
```
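The deterministic decision can then be combined with the risk agent's parsed output to route each case. The `manual_review` and `risk_band` fields mirror the risk agent's prompt above, but the route names and the routing logic itself are an illustrative sketch, not bank policy:

```python
def route_case(rule_decision: dict, risk_assessment: dict) -> str:
    # Hard declines from the deterministic rules are final; the LLM
    # never overrides them.
    if rule_decision["decision"] == "decline":
        return "auto_decline"
    # Any uncertainty from the risk agent sends the case to a human.
    # Defaulting manual_review to True means a missing or unparseable
    # flag fails safe into the review queue.
    if risk_assessment.get("manual_review", True) or risk_assessment.get("risk_band") == "high":
        return "manual_review_queue"
    return "fast_track_review"
```

The fail-safe default matters: if the risk agent's JSON is malformed or a field is missing, the case goes to a credit officer rather than being silently fast-tracked.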
4) Wrap outputs into an auditable record
Retail banking needs traceability. Store every intermediate output with timestamps and model identifiers so you can reconstruct why a case was approved or declined.
```python
import json
from datetime import datetime, timezone

audit_record = {
    # datetime.utcnow() is deprecated; use an explicit timezone-aware timestamp.
    "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    "application_id": application["applicant_id"],
    "inputs": application,
    "policy_output": policy_result.chat_history[-1]["content"],
    "risk_output": risk_result.chat_history[-1]["content"],
    "compliance_output": compliance_result.chat_history[-1]["content"],
    "final_decision": decision,
}

with open(f"audit_{application['applicant_id']}.json", "w") as f:
    json.dump(audit_record, f, indent=2)
```
Production Considerations
- **Data residency**
  - Keep customer PII in-region if your jurisdiction requires it.
  - If you call external LLM APIs, verify where prompts and logs are processed and retained.
- **Compliance controls**
  - Strip prohibited attributes before sending data to agents.
  - Enforce fair lending rules outside the model; do not rely on prompt instructions alone.
- **Monitoring**
  - Track approval rate shifts by product segment, branch, geography, and channel.
  - Alert on drift in DTI distributions, model refusal rates, or spikes in manual review routing.
- **Human override**
  - Route borderline cases to a credit officer.
  - Make sure adverse action reasons are generated from approved reason codes only.
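As a concrete example of the compliance controls above, stripping prohibited attributes can happen in code before any prompt is built, rather than relying on prompt instructions. The field list here is illustrative; the real list should come from your bank's fair lending and data minimization policy:

```python
# Fields the agents never need to see. Illustrative only; align this
# set with your bank's fair lending and data minimization policy.
PROHIBITED_OR_UNNECESSARY = {
    "national_id", "account_number", "date_of_birth",
    "gender", "marital_status", "race", "religion",
}

def minimize_for_agents(application: dict) -> dict:
    # Drop prohibited attributes and unneeded identifiers before any
    # prompt string is assembled from the application.
    return {
        k: v for k, v in application.items()
        if k not in PROHIBITED_OR_UNNECESSARY
    }
```

Calling `minimize_for_agents(application)` before building each prompt means a new field added upstream is excluded by default only if you keep the deny list current, so pairing it with an allow list of known-safe fields is worth considering.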
Common Pitfalls
- **Letting the LLM make the final credit decision**
  - Avoid this by keeping approval logic in deterministic Python code.
  - Use AutoGen agents for analysis and explanation, not policy enforcement.
- **Sending raw PII into every prompt**
  - Minimize fields before calling agents.
  - Redact account numbers, national IDs, salary slips, and other unnecessary identifiers.
- **Ignoring auditability**
  - If you cannot reproduce the chain of reasoning later, you do not have a banking-grade workflow.
  - Persist prompts, responses, rule versions, and thresholds per decision.
- **Using one generic agent for everything**
  - Split policy, risk, and compliance into separate agents with narrow instructions.
  - That keeps outputs more stable and easier to validate during model governance reviews.
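To make "persist rule versions and thresholds per decision" concrete, one lightweight approach is to keep the thresholds in a versioned structure and stamp every decision with it before it goes into the audit record. `POLICY_VERSION`, the threshold names, and the helper below are all illustrative:

```python
POLICY_VERSION = "2024-06-unsecured-v3"  # illustrative version tag

POLICY_THRESHOLDS = {
    "max_dti_pct": 45,
    "min_credit_score": 650,
    "eligible_employment": ["full_time", "permanent"],
    "eligible_countries": ["ZA"],
}

def decision_with_provenance(decision: dict) -> dict:
    # Stamp each decision with the rule version and thresholds that were
    # in force, so the audit record can reproduce the exact policy applied.
    return {
        **decision,
        "policy_version": POLICY_VERSION,
        "thresholds": POLICY_THRESHOLDS,
    }
```

When thresholds change, bump the version tag in the same commit; a regulator request then maps each historical decision to the policy text that produced it.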
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit