How to Build a Loan Approval Agent Using AutoGen in TypeScript for Investment Banking
A loan approval agent in investment banking triages applications, checks them against policy, pulls supporting data, and drafts an approval or rejection recommendation for a human credit officer. It matters because the bank needs faster turnaround on standard deals without losing control over compliance, auditability, and credit risk.
Architecture
- Orchestrator agent: owns the workflow and decides when to call tools, ask for more data, or stop for human review.
- Credit policy tool layer: encapsulates underwriting rules such as leverage limits, DSCR thresholds, sector exclusions, KYC status, and country risk.
- Data retrieval layer: pulls borrower financials, CRM notes, sanctions screening results, and internal exposure data from approved systems.
- Audit logger: persists every prompt, tool call, model output, and final recommendation for model risk management and regulatory review.
- Human approval gate: forces escalation when confidence is low, policy is ambiguous, or the deal exceeds delegated authority.
- Model configuration: uses a deterministic LLM setup for repeatable recommendations and lower variance across similar loan cases.
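The audit logger is worth pinning down early. Here is a minimal sketch of an append-only record per agent step; the field names are illustrative assumptions, not a standard schema:

```typescript
// Sketch of an append-only audit record for each agent step.
// Field names are illustrative assumptions, not a fixed schema.
type AuditRecord = {
  caseId: string;        // loan application identifier
  timestamp: string;     // ISO-8601, set server-side
  actor: string;         // agent name or human user identity
  prompt: string;        // exact prompt text sent to the model
  toolName?: string;     // tool invoked, if any
  toolInput?: unknown;   // raw tool arguments
  toolOutput?: unknown;  // raw tool result
  modelVersion: string;  // pin the model for reproducibility
  output: string;        // model output or final recommendation
};

const auditLog: AuditRecord[] = [];

function logStep(record: Omit<AuditRecord, "timestamp">): AuditRecord {
  const entry = { ...record, timestamp: new Date().toISOString() };
  // In production, write to durable, append-only storage instead of memory.
  auditLog.push(entry);
  return entry;
}
```

In production the array would be replaced with durable, append-only storage that compliance teams can query by case ID.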
Implementation
1) Install AutoGen and set up your TypeScript project
Use the TypeScript package for AutoGen and keep your runtime environment locked down. In banking environments, I prefer a dedicated service account with restricted network access and no direct internet egress.
```bash
npm install @autogenai/autogen openai zod
```
Create a minimal config with your OpenAI key and any internal endpoints behind your firewall.
```typescript
import { config } from "dotenv";

config();

if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is required");
}
```
2) Define the tools that enforce lending policy
The agent should not “reason” its way around underwriting rules. Put policy in code so it is testable, versioned, and auditable.
```typescript
import { z } from "zod";

type LoanApplication = {
  borrower: string;
  amount: number;
  annualRevenue: number;
  ebitda: number;
  existingDebt: number;
  sector: string;
  kycPassed: boolean;
};

export function evaluateCreditPolicy(app: LoanApplication) {
  // Math.max(…, 1) guards against division by zero on degenerate inputs.
  const leverage = app.existingDebt / Math.max(app.ebitda, 1);
  // Rough debt service estimate: 12% of existing debt plus 12% of the new loan.
  const dscr = app.ebitda / Math.max(app.existingDebt * 0.12 + app.amount * 0.12, 1);
  const reasons: string[] = [];
  if (!app.kycPassed) reasons.push("KYC not passed");
  if (app.sector.toLowerCase() === "crypto") reasons.push("Sector excluded");
  if (leverage > 4.0) reasons.push(`Leverage too high: ${leverage.toFixed(2)}x`);
  if (dscr < 1.25) reasons.push(`DSCR below threshold: ${dscr.toFixed(2)}`);
  return {
    approvedByPolicy: reasons.length === 0,
    leverage,
    dscr,
    reasons,
  };
}

export const LoanApplicationSchema = z.object({
  borrower: z.string(),
  amount: z.number().positive(),
  annualRevenue: z.number().nonnegative(),
  ebitda: z.number().nonnegative(),
  existingDebt: z.number().nonnegative(),
  sector: z.string(),
  kycPassed: z.boolean(),
});
```
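To sanity-check the arithmetic, here is a worked example using the same figures as the sample application later in the article: leverage is 12.0M / 8.6M ≈ 1.40x and DSCR is 8.6M / (0.12 × 12.0M + 0.12 × 5.0M) ≈ 4.22, so the application passes policy. The function is re-declared here only so the snippet runs standalone; in the real project you would import it from the policy module.

```typescript
// Re-declared from the policy module above so this snippet runs standalone.
type LoanApplication = {
  borrower: string; amount: number; annualRevenue: number;
  ebitda: number; existingDebt: number; sector: string; kycPassed: boolean;
};

function evaluateCreditPolicy(app: LoanApplication) {
  const leverage = app.existingDebt / Math.max(app.ebitda, 1);
  const dscr = app.ebitda / Math.max(app.existingDebt * 0.12 + app.amount * 0.12, 1);
  const reasons: string[] = [];
  if (!app.kycPassed) reasons.push("KYC not passed");
  if (app.sector.toLowerCase() === "crypto") reasons.push("Sector excluded");
  if (leverage > 4.0) reasons.push(`Leverage too high: ${leverage.toFixed(2)}x`);
  if (dscr < 1.25) reasons.push(`DSCR below threshold: ${dscr.toFixed(2)}`);
  return { approvedByPolicy: reasons.length === 0, leverage, dscr, reasons };
}

const sample: LoanApplication = {
  borrower: "Northwind Manufacturing Ltd",
  amount: 5_000_000,
  annualRevenue: 42_000_000,
  ebitda: 8_600_000,
  existingDebt: 12_000_000,
  sector: "manufacturing",
  kycPassed: true,
};

const verdict = evaluateCreditPolicy(sample);
// leverage ≈ 1.40x, dscr ≈ 4.22 → passes policy with no reasons recorded
console.log(verdict.approvedByPolicy, verdict.leverage.toFixed(2), verdict.dscr.toFixed(2));
```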
3) Build the AutoGen assistant with real tool calling
In AutoGen TS you create an AssistantAgent, attach tools through registerFunction, then run a chat session. The pattern below keeps the LLM focused on summarizing facts while hard rules stay outside the model.
```typescript
import { AssistantAgent } from "@autogenai/autogen";
import OpenAI from "openai";
import { LoanApplicationSchema } from "./schema";
import { evaluateCreditPolicy } from "./policy";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const loanOfficerAgent = new AssistantAgent({
  name: "loan_officer_agent",
  modelClient: client,
});

// Register the policy check as a tool so the hard rules stay outside the model.
loanOfficerAgent.registerFunction({
  name: "evaluate_credit_policy",
  description: "Evaluate a loan application against bank underwriting policy.",
  parameters: LoanApplicationSchema,
  execute: async (app) => evaluateCreditPolicy(app),
  returnType: "object",
  strictMode: true,
});

async function run() {
  const application = {
    borrower: "Northwind Manufacturing Ltd",
    amount: 5000000,
    annualRevenue: 42000000,
    ebitda: 8600000,
    existingDebt: 12000000,
    sector: "manufacturing",
    kycPassed: true,
  };

  const result = await loanOfficerAgent.run([
    {
      role: "user",
      content: `Assess this loan application and produce an approval memo only after using the policy tool:\n${JSON.stringify(application)}`,
    },
  ]);

  console.log(result); // drafted memo plus the tool-call trace
}

// run();
```
Why this pattern works
| Concern | What to do | Why it matters |
|---|---|---|
| Policy enforcement | Keep thresholds in code | Prevents model drift from changing credit decisions |
| Auditability | Log inputs/outputs/tool results | Supports internal audit and model risk review |
| Human oversight | Escalate edge cases | Required for delegated authority controls |
| Determinism | Use low temperature settings | Reduces inconsistent recommendations |
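The determinism row deserves a concrete setting. Below is a sketch of deterministic request options, assuming the OpenAI Chat Completions parameters `temperature`, `top_p`, and the best-effort `seed` hint; the exact knobs your AutoGen model client exposes may differ:

```typescript
// Request options for repeatable recommendations. temperature: 0 minimizes
// sampling variance; seed is a best-effort determinism hint on OpenAI models.
const deterministicOptions = {
  model: "gpt-4o",   // pin an exact model version in production
  temperature: 0,
  seed: 42,
  top_p: 1,
};

function buildRequest(messages: { role: "user" | "system"; content: string }[]) {
  return { ...deterministicOptions, messages };
}
```

Even with these settings, outputs are not guaranteed to be bit-identical across runs, which is another reason the hard thresholds live in code rather than in the prompt.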
Extend this into a full approval flow with escalation
A production loan agent should not auto-approve everything. It should classify the case, generate a memo, then route to a human when risk is elevated or policy flags appear.
```typescript
type Decision = "APPROVE" | "REVIEW" | "REJECT";

function decide(policyResult: ReturnType<typeof evaluateCreditPolicy>): Decision {
  if (!policyResult.approvedByPolicy) return "REJECT";
  // Replace 1.5 and 3.0 with your real escalation thresholds.
  if (policyResult.dscr < 1.5 || policyResult.leverage > 3.0) return "REVIEW";
  return "APPROVE";
}
```
Use a real threshold implementation in your codebase; the important part is that the decision boundary stays explicit and testable.
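Because the boundary is explicit, it is cheap to pin down with tests. Here is a sketch using plain assertions (swap in your test runner of choice); the decision logic and its 1.5 DSCR / 3.0x leverage thresholds are re-declared so the snippet runs standalone:

```typescript
// Standalone re-declaration of the decision logic; thresholds are illustrative.
type PolicyResult = {
  approvedByPolicy: boolean;
  leverage: number;
  dscr: number;
  reasons: string[];
};
type Decision = "APPROVE" | "REVIEW" | "REJECT";

function decide(p: PolicyResult): Decision {
  if (!p.approvedByPolicy) return "REJECT";
  if (p.dscr < 1.5 || p.leverage > 3.0) return "REVIEW";
  return "APPROVE";
}

// Each case exercises one side of the decision boundary.
const cases: [PolicyResult, Decision][] = [
  [{ approvedByPolicy: false, leverage: 5.0, dscr: 0.9, reasons: ["Leverage too high"] }, "REJECT"],
  [{ approvedByPolicy: true, leverage: 3.5, dscr: 2.0, reasons: [] }, "REVIEW"],
  [{ approvedByPolicy: true, leverage: 2.0, dscr: 1.4, reasons: [] }, "REVIEW"],
  [{ approvedByPolicy: true, leverage: 2.0, dscr: 2.0, reasons: [] }, "APPROVE"],
];

for (const [input, expected] of cases) {
  const actual = decide(input);
  if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
}
```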
Production Considerations
- Deploy inside the bank’s approved environment
  Build this as an internal service in a private VPC or on-prem cluster. Do not send borrower PII or financial statements to unmanaged endpoints; data residency requirements usually force regional deployment.
- Log everything needed for audit
  Persist prompt text, tool inputs, tool outputs, final decision, model version, timestamp, and user identity. In investment banking you need traceability for compliance teams and post-trade-style review of credit decisions.
- Add guardrails before any recommendation leaves the service
  Block outputs that contain unsupported claims like “guaranteed approval” or “risk-free.” The agent should only produce structured memos grounded in source data and policy outputs.
- Separate model output from decision authority
  The LLM drafts analysis; it does not own credit approval. Final sign-off should remain with a human approver or an explicit rules engine tied to delegated authority limits.
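The guardrail check can start as a simple phrase blocklist run over the memo before it leaves the service. The banned phrases below are illustrative, not exhaustive; a production guardrail would also verify grounding against source data:

```typescript
// Reject any draft memo containing unsupported or absolute claims.
// The phrase list is illustrative; extend it from your compliance guidance.
const BANNED_PHRASES = [
  /guaranteed approval/i,
  /risk[- ]free/i,
  /cannot default/i,
];

function passesGuardrails(memo: string): { ok: boolean; violations: string[] } {
  const violations = BANNED_PHRASES
    .filter((re) => re.test(memo))
    .map((re) => re.source);
  return { ok: violations.length === 0, violations };
}
```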
Common Pitfalls
- Letting the model infer underwriting rules
  Don’t ask the agent to “decide based on best judgment.” Put leverage caps, sector exclusions, sanctions checks, and KYC requirements into deterministic functions.
- Skipping audit metadata
  If you cannot reconstruct why a deal was approved or rejected six months later, you have a governance problem. Store tool traces and prompt versions alongside the case record.
- Ignoring data residency and PII handling
  Loan files often contain tax IDs, bank statements, beneficial ownership data, and director information. Mask sensitive fields where possible and keep processing inside approved regions.
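As one illustration of field masking, a small helper can redact identifier-like patterns before any text reaches the model. The regexes below assume US SSN/EIN shapes; match them to the identifier formats in your jurisdiction:

```typescript
// Redact identifier-like substrings before text leaves the approved boundary.
// Patterns are illustrative (US SSN/EIN shapes); adapt to your jurisdiction.
const PII_PATTERNS: [RegExp, string][] = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN REDACTED]"],
  [/\b\d{2}-\d{7}\b/g, "[EIN REDACTED]"],
];

function maskPii(text: string): string {
  return PII_PATTERNS.reduce(
    (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
    text
  );
}
```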
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.