How to Build a Loan Approval Agent Using AutoGen in Python for Payments

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, autogen, python, payments

A loan approval agent for payments takes a borrower’s application, checks the required data, runs policy and risk checks, and returns a decision path that a human or downstream system can trust. In payments, this matters because loan decisions often sit on the critical path for card issuance, BNPL, merchant financing, or instant disbursement, where latency, compliance, auditability, and data handling are not optional.

Architecture

  • Application intake service

    • Receives borrower data from your payments backend.
    • Normalizes fields like income, KYC status, repayment history, and transaction signals.
  • Policy agent

    • Applies deterministic lending rules.
    • Enforces hard stops like missing KYC, sanctions hits, or failed affordability thresholds.
  • Risk analysis agent

    • Evaluates soft signals such as payment behavior, cash flow stability, and exposure.
    • Produces a structured recommendation with reasons.
  • Compliance reviewer agent

    • Checks outputs for regulatory constraints.
    • Ensures the final decision includes audit-friendly rationale and no prohibited attributes.
  • Decision orchestrator

    • Coordinates agents using AutoGen.
    • Stops on hard failures and escalates ambiguous cases to human review.
  • Audit logger

    • Persists prompts, decisions, model versions, timestamps, and policy outcomes.
    • Supports disputes, internal review, and regulator requests.
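The audit logger above can be sketched as an append-only JSONL writer. The field names here are illustrative, not a fixed schema; align them with your own retention and dispute-handling requirements.

```python
import json
import time
from dataclasses import asdict, dataclass, field


@dataclass
class AuditRecord:
    # Illustrative fields; extend with whatever your regulator expects.
    applicant_id: str
    stage: str              # e.g. "hard_policy_check", "risk_agent"
    outcome: str            # e.g. "approve", "reject", "review"
    rationale: str
    model_version: str = "n/a"
    timestamp: float = field(default_factory=time.time)


def append_audit_record(path: str, record: AuditRecord) -> None:
    # One JSON object per line keeps the log append-only and easy to replay.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Writing one record per pipeline stage, rather than only the final decision, is what makes the trail reconstructable later.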

Implementation

1) Install AutoGen and define your message schema

For production work, keep the payload structured. Loan decisions need traceability more than clever prompting.

from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class LoanApplication:
    applicant_id: str
    amount: float
    income_monthly: float
    kyc_passed: bool
    sanctions_clear: bool
    delinquency_30d: int
    country: str
    residency_region: str

@dataclass
class LoanDecision:
    status: Literal["approve", "reject", "review"]
    reason: str
    risk_score: Optional[float] = None

2) Create AutoGen agents with explicit roles

AutoGen’s AssistantAgent works well for role-specific analysis. Use one agent for policy interpretation and another for risk reasoning.

import os

import autogen

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            # Read the key from the environment rather than hard-coding it.
            "api_key": os.environ.get("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY"),
        }
    ],
    "temperature": 0,
}

policy_agent = autogen.AssistantAgent(
    name="policy_agent",
    llm_config=llm_config,
    system_message=(
        "You are a lending policy engine. "
        "Return only JSON with fields: status, reason. "
        "Reject if KYC failed or sanctions_clear is false."
    ),
)

risk_agent = autogen.AssistantAgent(
    name="risk_agent",
    llm_config=llm_config,
    system_message=(
        "You assess repayment risk for payment-linked loans. "
        "Return only JSON with fields: status, reason, risk_score. "
        "Use income_monthly vs amount and delinquency_30d."
    ),
)

3) Orchestrate the workflow with GroupChat and GroupChatManager

This pattern keeps the decision path auditable. The orchestrator can inject the application context once and collect structured responses from each agent.

import json
from dataclasses import asdict

def build_prompt(app: LoanApplication) -> str:
    # asdict() copies the dataclass into a plain dict, safe to serialize.
    return json.dumps(asdict(app))

user_proxy = autogen.UserProxyAgent(
    name="orchestrator",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
)

app = LoanApplication(
    applicant_id="CUST-1009",
    amount=2500.0,
    income_monthly=6000.0,
    kyc_passed=True,
    sanctions_clear=True,
    delinquency_30d=0,
    country="KE",
    residency_region="africa-east-1",
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, policy_agent, risk_agent],
    messages=[],
    max_round=4,
)

manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager,
    message=f"Evaluate this loan application for a payments product:\n{build_prompt(app)}"
)
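Even with "return only JSON" in the system message, agent replies often arrive wrapped in prose or markdown fences. A defensive parser (a sketch, not part of AutoGen's API) that extracts the object and validates the fields you asked for:

```python
import json
from typing import Any, Dict, Tuple


def parse_agent_json(reply: str, required: Tuple[str, ...]) -> Dict[str, Any]:
    """Extract the first JSON object in an agent reply and check required keys."""
    # Locate the outermost braces instead of parsing the raw reply directly,
    # so markdown fences and surrounding prose are tolerated.
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in agent reply")
    data = json.loads(reply[start:end + 1])
    missing = [k for k in required if k not in data]
    if missing:
        raise ValueError(f"agent reply missing fields: {missing}")
    return data
```

Raising on malformed output, rather than guessing, lets the orchestrator route the case to human review instead of acting on a half-parsed decision.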

4) Add deterministic guardrails outside the model

Do not let the LLM make final compliance calls alone. Use code for hard rules and reserve AutoGen for reasoning on borderline cases.

def hard_policy_check(app: LoanApplication) -> Optional[LoanDecision]:
    if not app.kyc_passed:
        return LoanDecision(status="reject", reason="KYC failed")
    if not app.sanctions_clear:
        return LoanDecision(status="reject", reason="Sanctions screening failed")
    if app.amount > app.income_monthly * 2:
        return LoanDecision(status="review", reason="Requested amount exceeds policy threshold")
    return None

decision = hard_policy_check(app)
if decision is None:
    # No hard rule fired: proceed to agent-based review in your orchestration layer.
    pass
else:
    print(decision)

A practical pattern is:

  • Hard-reject in code for compliance failures.
  • Send borderline cases to agents.
  • Require structured JSON output from every agent.
  • Store every intermediate result in your audit log.
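The precedence this pattern implies (reject beats review, review beats approve) can be written as a pure merge function over the structured agent outputs. The status values match the LoanDecision schema defined earlier; everything else here is illustrative.

```python
from typing import Dict, List

# Lower rank wins: any reject outranks any review, which outranks any approve.
_SEVERITY = {"reject": 0, "review": 1, "approve": 2}


def merge_decisions(decisions: List[Dict[str, str]]) -> Dict[str, str]:
    """Combine structured agent outputs into one final, conservative decision."""
    if not decisions:
        # No agent produced usable output: fail safe to human review.
        return {"status": "review", "reason": "no structured agent output"}

    def rank(d: Dict[str, str]) -> int:
        # Unrecognized statuses are treated as review, never as approve.
        return _SEVERITY.get(d.get("status"), _SEVERITY["review"])

    worst = min(decisions, key=rank)
    if worst.get("status") not in _SEVERITY:
        return {"status": "review", "reason": f"unrecognized status: {worst.get('status')}"}
    return worst
```

Keeping the merge deterministic means the final call never depends on which agent happened to speak last in the group chat.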

Production Considerations

  • Deployment

    • Run the orchestrator as a stateless service behind your payments API gateway.
    • Keep model configuration externalized so you can rotate models without code changes.
    • Pin versions of AutoGen and your LLM provider SDK to avoid behavior drift.
  • Monitoring

    • Track approval rate by region, product type, and channel.
    • Log prompt/response pairs with redaction for PII.
    • Alert on unusual spikes in manual review or rejection rates.
  • Guardrails

    • Enforce deterministic checks for KYC, sanctions, affordability, and jurisdiction rules before any model call.
    • Block sensitive attributes from prompts unless they are explicitly allowed by policy.
    • Require human review when confidence is low or when the applicant is in a restricted jurisdiction.
  • Data residency

    • Route applications to region-specific inference endpoints when required.
    • Never move customer financial data across borders without a legal basis.
    • Keep audit logs in the same residency zone as the source application where regulations demand it.
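Residency routing can be kept deliberately simple: an explicit allow-list mapping residency zones to in-region inference endpoints, with no cross-border fallback. The endpoint URLs below are placeholders.

```python
from typing import Dict

# Placeholder endpoints; map each residency zone to an in-region inference URL.
REGION_ENDPOINTS: Dict[str, str] = {
    "africa-east-1": "https://inference.africa-east-1.example.internal",
    "eu-west-1": "https://inference.eu-west-1.example.internal",
}


def endpoint_for(residency_region: str) -> str:
    """Return the in-region endpoint, refusing any cross-border fallback."""
    try:
        return REGION_ENDPOINTS[residency_region]
    except KeyError:
        # Failing closed is the point: never silently route data elsewhere.
        raise ValueError(
            f"no in-region endpoint configured for {residency_region!r}"
        ) from None
```

An unconfigured region becomes a hard error a human has to resolve, which is usually what your legal basis for processing requires anyway.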

Common Pitfalls

  • Letting the model decide compliance

    • Mistake: asking the LLM to “approve or reject” without hard-coded rules.
    • Fix: run compliance checks in Python first; use the agent only for explanation and borderline analysis.
  • Sending raw PII into prompts

    • Mistake: including full account numbers, addresses, or national IDs in chat messages.
    • Fix: tokenize or redact sensitive fields before calling AssistantAgent.
  • No audit trail

    • Mistake: storing only the final decision.
    • Fix: persist input payloads, rule outcomes, agent outputs, model version, timestamp, and operator overrides.
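A redaction step for the PII pitfall above can be sketched as a field allow-list plus pattern masking. The patterns here are illustrative only; production systems should rely on field-level allow-lists and tokenization, not regexes alone.

```python
import re
from typing import Dict

# Illustrative patterns only.
_PAN_RE = re.compile(r"\b\d{13,19}\b")          # card-like number runs
_EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

# Fields that must never reach a prompt, redacted or not.
SENSITIVE_FIELDS = {"national_id", "account_number", "address"}


def redact_payload(payload: Dict[str, str]) -> Dict[str, str]:
    """Drop blocked fields and mask PAN/email-like strings before prompting."""
    clean: Dict[str, str] = {}
    for key, value in payload.items():
        if key in SENSITIVE_FIELDS:
            continue  # never send these fields to the model at all
        value = _PAN_RE.sub("[REDACTED_PAN]", value)
        value = _EMAIL_RE.sub("[REDACTED_EMAIL]", value)
        clean[key] = value
    return clean
```

Run this on the application payload before build_prompt, and log both the original and redacted versions to separate stores with different access controls.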

A loan approval agent in payments works when it behaves like infrastructure, not like a chatbot. Keep deterministic controls in code, use AutoGen for structured reasoning where it adds value, and design every step so compliance teams can reconstruct why a decision was made.


By Cyprian Aarons, AI Consultant at Topiax.