How to Build a Loan Approval Agent Using AutoGen in TypeScript for Lending
A loan approval agent automates the first pass of lending decisions: it gathers applicant data, checks policy rules, evaluates risk signals, and produces a recommendation with an audit trail. For lenders, this matters because it reduces manual review load, standardizes underwriting decisions, and creates a traceable path from application input to approval, decline, or escalation.
Architecture
- **Application intake service**
  - Receives borrower data from your loan origination system.
  - Normalizes fields like income, debt obligations, employment status, and requested amount.
- **Policy/rules agent**
  - Applies hard eligibility checks.
  - Enforces lending constraints such as minimum income thresholds, DTI limits, residency rules, and product-specific exclusions.
- **Risk analysis agent**
  - Summarizes credit profile signals and flags uncertainty.
  - Produces a recommendation instead of making an opaque final decision.
- **Compliance reviewer agent**
  - Checks for fair-lending concerns, missing disclosures, adverse action requirements, and data handling constraints.
  - Ensures the workflow is explainable and auditable.
- **Supervisor/orchestrator**
  - Routes tasks between agents.
  - Stops the process when a hard rule fails or when human review is required.
- **Audit logger**
  - Persists prompts, tool outputs, decisions, timestamps, and model versions.
  - Supports regulatory review and internal model governance.
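To make the audit logger concrete, here is a minimal sketch of the record shape it might persist. The field names and in-memory array are illustrative assumptions, not a fixed schema; in production you would write to an append-only store.

```typescript
// Minimal audit record sketch; field names are illustrative assumptions.
interface AuditRecord {
  applicantId: string;
  stage: "intake" | "policy" | "risk" | "compliance" | "final";
  decision?: string;
  modelVersion?: string;
  payload: unknown; // prompt, tool output, or rule result
  timestamp: string; // ISO 8601
}

const auditLog: AuditRecord[] = [];

function logAudit(record: Omit<AuditRecord, "timestamp">): AuditRecord {
  const entry = { ...record, timestamp: new Date().toISOString() };
  auditLog.push(entry); // swap for a durable append-only store in production
  return entry;
}
```

The key property is that every stage appends a record with a timestamp, so a regulator-facing review can replay the full path from intake to final disposition.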
Implementation
1) Install AutoGen for TypeScript and define your lending input
Use the AutoGen TypeScript package that exposes the same agent patterns you’d expect in production: AssistantAgent, UserProxyAgent, GroupChat, and GroupChatManager.
```shell
npm install @autogen/core @autogen/agentchat zod
```
Define a typed application payload so every downstream step receives structured data instead of free-form text.
```typescript
import { z } from "zod";

export const LoanApplicationSchema = z.object({
  applicantId: z.string(),
  country: z.string(),
  requestedAmount: z.number().positive(),
  annualIncome: z.number().nonnegative(),
  monthlyDebt: z.number().nonnegative(),
  employmentStatus: z.enum(["employed", "self_employed", "unemployed"]),
  creditScore: z.number().min(300).max(850),
});

export type LoanApplication = z.infer<typeof LoanApplicationSchema>;
```
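As a quick sanity check of what the schema enforces, the same constraints can be restated as a plain type guard. This is for illustration only (useful in tests where you don't want a zod dependency); it is not part of AutoGen, and the function name is mine:

```typescript
// Hand-rolled equivalent of LoanApplicationSchema, for illustration only.
function isLoanApplication(input: any): boolean {
  return (
    typeof input?.applicantId === "string" &&
    typeof input?.country === "string" &&
    typeof input?.requestedAmount === "number" && input.requestedAmount > 0 &&
    typeof input?.annualIncome === "number" && input.annualIncome >= 0 &&
    typeof input?.monthlyDebt === "number" && input.monthlyDebt >= 0 &&
    ["employed", "self_employed", "unemployed"].includes(input?.employmentStatus) &&
    typeof input?.creditScore === "number" &&
    input.creditScore >= 300 && input.creditScore <= 850
  );
}
```

Either way, the point is the same: a payload with a credit score of 900 or a requested amount of 0 should be rejected before any agent sees it.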
2) Create specialized agents with explicit roles
Keep each agent narrow. In lending systems, broad “do everything” agents are how you get compliance drift and inconsistent outputs.
```typescript
import { AssistantAgent, UserProxyAgent } from "@autogen/agentchat";

const underwritingAgent = new AssistantAgent({
  name: "underwriting_agent",
  systemMessage:
    "You assess loan eligibility using provided policy rules. Return JSON only.",
});

const complianceAgent = new AssistantAgent({
  name: "compliance_agent",
  systemMessage:
    "You review lending decisions for fair lending, auditability, and regulatory concerns. Return JSON only.",
});

const userProxy = new UserProxyAgent({
  name: "loan_ops",
});
```
3) Orchestrate the conversation with a group chat
This is the actual pattern you want in production: one orchestrator plus specialist agents. The orchestrator controls flow; the specialists provide bounded reasoning.
```typescript
import { GroupChat, GroupChatManager } from "@autogen/agentchat";

async function runLoanReview(application: LoanApplication) {
  const groupChat = new GroupChat({
    agents: [userProxy, underwritingAgent, complianceAgent],
    messages: [],
    maxRounds: 6,
  });

  const manager = new GroupChatManager({
    groupChat,
    llmConfig: {
      model: "gpt-4o-mini",
      apiKey: process.env.OPENAI_API_KEY!,
      temperature: 0,
    },
    systemMessage:
      "Coordinate underwriting and compliance review for a loan application. Enforce JSON-only outputs.",
  });

  const prompt = `
Review this loan application:
${JSON.stringify(application)}

Rules:
- DTI = monthlyDebt / (annualIncome / 12)
- If DTI > 0.45 => decline
- If creditScore < 620 => manual review
- If country is not supported => decline
- Otherwise approve pending compliance check
`;

  // Return the chat result so callers can persist the review transcript.
  return userProxy.initiateChat(manager, prompt);
}
```
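Independent of AutoGen's API surface, it helps to pin down the supervisor's routing and stop conditions as plain code. This sketch (state names and the `nextStep` helper are mine, not AutoGen's) captures the decision logic the GroupChatManager is expected to enforce: halt on hard failures, escalate to a human when flagged, and bound the number of rounds.

```typescript
// Hypothetical routing sketch; stages and helper names are illustrative.
type Stage = "underwriting" | "compliance" | "done";

interface ReviewState {
  stage: Stage;
  hardRuleFailed: boolean;
  needsHuman: boolean;
  rounds: number;
  maxRounds: number;
}

function nextStep(state: ReviewState): Stage | "halt" | "escalate" {
  if (state.hardRuleFailed) return "halt";            // stop on any hard rule failure
  if (state.needsHuman) return "escalate";            // route to manual review
  if (state.rounds >= state.maxRounds) return "halt"; // bound the conversation
  if (state.stage === "underwriting") return "compliance";
  if (state.stage === "compliance") return "done";
  return "halt";
}
```

Encoding this as a pure function makes the stop conditions unit-testable, which is much harder when they live only in a system prompt.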
4) Add deterministic policy checks before any model output becomes a decision
Do not let the LLM be the source of truth for hard eligibility logic. Use code for rules; use AutoGen for explanation and exception handling.
```typescript
function evaluatePolicy(app: LoanApplication) {
  // Note: an annualIncome of 0 yields an Infinity DTI, which correctly declines.
  const dti = app.monthlyDebt / (app.annualIncome / 12);

  if (!["US", "CA", "GB"].includes(app.country)) {
    return { decision: "decline", reason: "unsupported_country", dti };
  }
  if (dti > Number(process.env.MAX_DTI ?? "0.45")) {
    return { decision: "decline", reason: "high_dti", dti };
  }
  if (app.creditScore < Number(process.env.MANUAL_REVIEW_SCORE ?? "620")) {
    return { decision: "manual_review", reason: "low_credit_score", dti };
  }
  return { decision: "approve", reason: "policy_passed", dti };
}
```
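A quick worked example of the DTI rule above, with illustrative numbers: an applicant earning $60,000/yr has a monthly income of $5,000, so $2,000/mo of debt gives DTI = 2000 / 5000 = 0.40 (under the 0.45 cap), while $2,700/mo gives 0.54 (declined).

```typescript
// Worked DTI examples mirroring the rule above; values are illustrative.
const dtiOf = (annualIncome: number, monthlyDebt: number) =>
  monthlyDebt / (annualIncome / 12);

// $60k/yr, $2,000/mo debt: 2000 / 5000 = 0.40 -> passes the 0.45 cap
const passing = dtiOf(60_000, 2_000);

// $60k/yr, $2,700/mo debt: 2700 / 5000 = 0.54 -> declined by the DTI rule
const failing = dtiOf(60_000, 2_700);
```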
Then combine both layers:
```typescript
export async function assessLoan(applicationInput: unknown) {
  const application = LoanApplicationSchema.parse(applicationInput);
  const policyResult = evaluatePolicy(application);

  if (policyResult.decision !== "approve") {
    return {
      ...policyResult,
      source: "rules_engine",
    };
  }

  // The agents add a compliance check and rationale on top of the rule pass.
  await runLoanReview(application);

  return {
    decision: "approve",
    source: "autogen_review",
    reason: "passed_policy_and_compliance_review",
  };
}
```
Production Considerations
- **Keep decisioning split between code and model**
  - Hard rules like DTI thresholds, jurisdiction support, KYC completeness, and product eligibility should live in deterministic code.
  - Use AutoGen for summarization, exception handling, and generating analyst-readable rationale.
- **Log everything needed for audit**
  - Persist input payloads, rule outcomes, agent messages, model version, timestamps, and final disposition.
  - In lending audits you need to reconstruct why a borrower was approved or declined without relying on memory or prompt history in an external UI.
- **Enforce data residency and PII controls**
  - Route applications to region-bound inference endpoints where required.
  - Mask SSNs, bank account numbers, tax IDs, and other sensitive identifiers before sending data to the model.
- **Build human-in-the-loop escalation paths**
  - Any borderline case should land in manual review rather than forcing an automated answer.
  - This matters for adverse action notices, fairness reviews, thin-file applicants, and policy exceptions.
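Masking can be a small deterministic step that runs before any prompt is assembled. This sketch keeps only the last four digits of an account-like identifier; the function name and masking format are my own, not from AutoGen or any lending standard:

```typescript
// Illustrative PII masking: keep only the last 4 digits of an identifier.
function maskIdentifier(value: string, visible = 4): string {
  const digits = value.replace(/\D/g, ""); // strip separators like dashes
  if (digits.length <= visible) return "*".repeat(digits.length);
  return "*".repeat(digits.length - visible) + digits.slice(-visible);
}
```

Applied to an SSN-shaped value, `maskIdentifier("123-45-6789")` keeps only `6789`; the full value stays in your secure system of record.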
Common Pitfalls
- **Using the LLM as the final underwriter**
  - Don't ask the model to "decide" based on raw applicant text.
  - Put eligibility logic in code first; let AutoGen explain or escalate.
- **Sending unredacted PII into prompts**
  - Avoid passing full identity documents or full account numbers to agents.
  - Redact before inference and store sensitive fields in your secure system of record.
- **Skipping governance on prompt changes**
  - A prompt tweak can change outcomes materially.
  - Version prompts like code, test them against historical applications, and require approval before promotion to production.
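One lightweight way to version prompts like code is to record a content hash of each system message in the audit log, so you always know exactly which prompt version produced a decision. This sketch uses Node's built-in crypto module; the helper name is mine:

```typescript
import { createHash } from "node:crypto";

// Content-address a prompt so the audit log records exactly which version ran.
function promptVersion(prompt: string): string {
  return createHash("sha256").update(prompt, "utf8").digest("hex").slice(0, 12);
}
```

Any edit to the prompt, even a single character, produces a new version id, which makes silent prompt drift visible in review.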
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.