How to Build a Loan Approval Agent Using AutoGen in Python for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, autogen, python, pension-funds

A loan approval agent for pension funds takes a loan application, checks it against fund policy, risk limits, and compliance rules, then produces a decision recommendation with an audit trail. It matters because pension capital is governed by strict fiduciary duties, so every approval needs to be explainable, policy-bound, and defensible under review.

Architecture

  • Application intake service

    • Receives borrower data, collateral details, requested amount, term, jurisdiction, and purpose.
    • Normalizes the payload before it reaches the agent layer.
  • Policy retrieval layer

    • Pulls pension-fund-specific lending rules from a controlled source.
    • Includes concentration limits, sector exclusions, duration limits, and jurisdiction constraints.
  • AutoGen agent team

    • A loan analyst agent evaluates financials and eligibility.
    • A compliance agent checks regulatory and fiduciary constraints.
    • A risk agent scores default risk and exposure.
    • A supervisor aggregates results into a final recommendation.
  • Decision store and audit log

    • Persists prompts, tool outputs, intermediate reasoning summaries, and final decisions.
    • Required for internal audit, model risk management, and regulator review.
  • Human approval workflow

    • Routes edge cases or policy breaches to an underwriter or investment committee.
    • Prevents fully automated approvals where fiduciary policy requires sign-off.
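The intake service above can be sketched as a typed normalizer that coerces and cleans the raw payload before it reaches the agent layer. The field names and coercions here are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LoanApplication:
    # Normalized payload produced by the intake service (illustrative fields).
    borrower: str
    amount_usd: float
    term_months: int
    jurisdiction: str
    sector: str
    dscr: float
    ltv: float
    collateral: str
    purpose: str

def normalize(raw: dict) -> LoanApplication:
    # Coerce types and strip whitespace so downstream prompts stay consistent.
    return LoanApplication(
        borrower=raw["borrower"].strip(),
        amount_usd=float(raw["amount_usd"]),
        term_months=int(raw["term_months"]),
        jurisdiction=raw["jurisdiction"].strip(),
        sector=raw["sector"].strip(),
        dscr=float(raw["dscr"]),
        ltv=float(raw["ltv"]),
        collateral=raw["collateral"].strip(),
        purpose=raw["purpose"].strip(),
    )

app = normalize({
    "borrower": " Acme Logistics Ltd ",
    "amount_usd": "2500000",
    "term_months": "36",
    "jurisdiction": "South Africa",
    "sector": "Transport",
    "dscr": "1.45",
    "ltv": "0.62",
    "collateral": "Warehouse property",
    "purpose": "Fleet expansion",
})
```

A frozen dataclass also gives you an immutable record to hash into the audit log later.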

Implementation

  1. Install AutoGen and define the agents

    Use the autogen-agentchat package and create specialized agents with explicit system messages. For pension funds, keep the instructions narrow: no free-form lending advice outside policy.

import os

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],  # read from the environment; never hard-code keys
)

loan_analyst = AssistantAgent(
    name="loan_analyst",
    model_client=model_client,
    system_message=(
        "You analyze loan applications for a pension fund. "
        "Focus on borrower cash flow, collateral quality, DSCR, LTV, and term fit. "
        "Return concise findings and never approve outside policy."
    ),
)

compliance_agent = AssistantAgent(
    name="compliance_agent",
    model_client=model_client,
    system_message=(
        "You check pension fund lending compliance. "
        "Validate jurisdiction, prohibited sectors, concentration limits, "
        "documentation completeness, and fiduciary constraints. "
        "Return only compliance findings."
    ),
)
  2. Orchestrate the review with an agent team

    For a production pattern, don’t let one model make the whole decision. Run each specialist in a shared conversation and synthesize the output into an approval recommendation. The round-robin team below alternates between the analyst and the compliance agent; the supervisor role is played by the deterministic decision rule in step 3.

import asyncio
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.teams import RoundRobinGroupChat

async def run_review(application: dict):
    prompt = f"""
Loan application:
Borrower: {application['borrower']}
Amount: {application['amount']}
Term months: {application['term_months']}
Jurisdiction: {application['jurisdiction']}
Sector: {application['sector']}
DSCR: {application['dscr']}
LTV: {application['ltv']}
Collateral: {application['collateral']}
Purpose: {application['purpose']}

Evaluate against pension fund lending policy.
"""

    team = RoundRobinGroupChat(
        participants=[loan_analyst, compliance_agent],
        max_turns=4,
    )

    result = await team.run(task=TextMessage(content=prompt, source="user"))
    return result

if __name__ == "__main__":
    application = {
        "borrower": "Acme Logistics Ltd",
        "amount": "$2.5M",
        "term_months": 36,
        "jurisdiction": "South Africa",
        "sector": "Transport",
        "dscr": 1.45,
        "ltv": 0.62,
        "collateral": "Warehouse property",
        "purpose": "Fleet expansion",
    }

    output = asyncio.run(run_review(application))
    print(output)
  3. Add a deterministic decision rule after the agents respond

    The model should recommend; your code should decide. That keeps approvals consistent and auditable.

def decide(review_text: str) -> str:
    text = review_text.lower()

    if "non-compliant" in text or "breach" in text:
        return "REJECT"
    if "needs human review" in text or "escalate" in text:
        return "ESCALATE"
    if "approved" in text or "within policy" in text:
        return "APPROVE"
    return "ESCALATE"

# Example usage:
# final_decision = decide(str(output))
# print(final_decision)
  4. Persist the full record for audit

    Pension funds need traceability across every decision path. Store the input payload, agent outputs, final decision rule used, timestamp, and reviewer identity in an immutable log or WORM-backed store.

from datetime import datetime, timezone
import json

def audit_record(application: dict, agent_output: str, decision: str) -> dict:
    return {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "application": application,
        "agent_output": str(agent_output),
        "decision": decision,
        "policy_version": "2026-01-lending-policy-v3",
        "reviewer": "autogen-supervisor",
    }

last_message = output.messages[-1].content if output.messages else ""
record = audit_record(application, last_message, decide(str(output)))
print(json.dumps(record, indent=2))

Production Considerations

  • Data residency

    • Keep borrower PII and fund data inside approved regions.
    • If your pension fund operates under local residency rules, do not send raw documents to external services without approved processing boundaries.
  • Compliance controls

    • Encode hard limits outside the model: max exposure per borrower group, prohibited industries, minimum DSCR/LTV thresholds.
    • Treat the LLM as a reasoning layer; treat policy enforcement as deterministic code.
  • Monitoring

    • Track approval rates by segment, escalation rates, false positives on compliance flags, and drift in recommended terms.
    • Log prompt versions and model versions so you can explain changes in behavior during audits.
  • Human-in-the-loop escalation

    • Route borderline cases to credit committee members when policy is ambiguous or missing data exists.
    • Require manual sign-off for politically exposed persons (PEPs), cross-border exposures, or unusual collateral structures.
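The compliance-controls bullet above — hard limits enforced outside the model — can be sketched as a pure function that runs before any agent output is even considered. The thresholds and field names below are illustrative placeholders, not real fund policy:

```python
# Illustrative hard limits; real values come from the fund's written lending policy.
PROHIBITED_SECTORS = {"tobacco", "gambling", "coal mining"}
MIN_DSCR = 1.25
MAX_LTV = 0.70
MAX_EXPOSURE_PER_BORROWER = 10_000_000

def policy_gate(application: dict, current_exposure: float = 0.0) -> list[str]:
    """Return a list of hard-limit breaches; an empty list means the gate passes."""
    breaches = []
    if application["sector"].lower() in PROHIBITED_SECTORS:
        breaches.append(f"prohibited sector: {application['sector']}")
    if application["dscr"] < MIN_DSCR:
        breaches.append(f"DSCR {application['dscr']} below minimum {MIN_DSCR}")
    if application["ltv"] > MAX_LTV:
        breaches.append(f"LTV {application['ltv']} above maximum {MAX_LTV}")
    if current_exposure + application["amount_usd"] > MAX_EXPOSURE_PER_BORROWER:
        breaches.append("borrower group exposure limit exceeded")
    return breaches

breaches = policy_gate(
    {"sector": "Transport", "dscr": 1.45, "ltv": 0.62, "amount_usd": 2_500_000}
)
```

Wire `policy_gate` in front of the agent team so a hard breach short-circuits to REJECT or ESCALATE without ever calling the model.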
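The human-in-the-loop routing can likewise be sketched as a deterministic predicate, assuming flags such as `is_pep` and `cross_border` are set upstream by KYC and intake checks (these names are hypothetical):

```python
def requires_manual_signoff(application: dict) -> bool:
    # Route to the credit committee for PEPs, cross-border exposure,
    # or collateral outside the standard list (illustrative criteria).
    if application.get("is_pep"):
        return True
    if application.get("cross_border"):
        return True
    standard_collateral = {"real estate", "warehouse property", "listed securities"}
    if application.get("collateral", "").lower() not in standard_collateral:
        return True
    return False
```

Anything that trips this predicate should bypass the APPROVE path entirely, regardless of what the agents recommended.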

Common Pitfalls

  • Letting the model make the final credit decision

    • Avoid this by separating recommendation from enforcement.
    • Use AutoGen for analysis; use code for policy gates.
  • Skipping auditability

    • If you don’t store prompts, outputs, policy versioning, and reviewer actions, you can’t defend the decision later.
    • Build logging from day one.
  • Using generic prompts instead of pension-specific policy language

    • Pension funds are not retail lenders.
    • Add explicit rules for fiduciary duty, concentration risk, prohibited assets/sectors, jurisdiction restrictions, and committee escalation thresholds.
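As an illustration of what pension-specific language can look like in a system message — the clauses and escalation criteria below are placeholders, not real fund rules:

```python
# Hypothetical pension-specific system message; adapt to the fund's actual policy documents.
PENSION_COMPLIANCE_SYSTEM_MESSAGE = (
    "You review loan applications for a defined-benefit pension fund. "
    "Apply the fund's fiduciary duty of prudence: if an application falls "
    "outside written policy, reject or escalate rather than reason around it. "
    "Enforce concentration limits per borrower group and per sector, the "
    "prohibited assets and sectors on the exclusion list, and the jurisdiction "
    "restrictions in the investment mandate. Escalate to the credit committee "
    "whenever the requested amount or term exceeds delegated thresholds. "
    "Cite the specific policy clause for every finding."
)
```

Pass a message like this as the `system_message` of the compliance agent in place of a generic lending prompt.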

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
