How to Build an Underwriting Agent Using AutoGen in TypeScript for Retail Banking
An underwriting agent for retail banking takes a loan application, pulls the right customer and product data, checks policy and compliance rules, asks for missing information, and produces a decision recommendation with an audit trail. It matters because retail banking underwriting is high-volume, time-sensitive, and heavily regulated; if you get the workflow wrong, you create credit risk, compliance exposure, and slow customer onboarding.
Architecture
- User-facing intake service
  - Accepts the loan application payload from your channel app or backend.
  - Normalizes applicant data into a strict schema before it reaches the agent.
- AutoGen orchestrator
  - Coordinates the underwriting conversation.
  - In TypeScript, this is typically a `GroupChat` plus `GroupChatManager`, or a single `AssistantAgent` when the workflow is simple.
- Policy and eligibility tools
  - Deterministic functions for DTI checks, minimum income thresholds, product eligibility, KYC status, and fraud flags.
  - These should be plain TypeScript functions exposed to the agent as tools.
- Data access layer
  - Fetches bureau data, account history, deposit balances, payroll signals, and internal risk features.
  - Must enforce row-level access controls and residency constraints.
- Decision logger
  - Writes every tool call, model response, and final recommendation to an immutable audit store.
  - Needed for model governance, dispute handling, and regulatory review.
- Human review handoff
  - Routes edge cases to an underwriter when confidence is low or policy exceptions are detected.
  - This is not optional in retail banking.
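The handoff in the last bullet can be a small deterministic router that sits between the agent and your decision endpoint. A minimal sketch; the reason codes, the `0.8` confidence threshold, and all type names here are illustrative assumptions, not AutoGen APIs:

```typescript
type PolicyResult = { eligible: boolean; reason: string };

type RoutingDecision =
  | { route: "auto"; recommendation: "APPROVE" | "DECLINE" }
  | { route: "human_review"; why: string };

// Reason codes that must always land in front of an underwriter.
const ESCALATE_REASONS = new Set(["KYC_FAILED", "SUSPECTED_FRAUD", "THIN_FILE"]);

function routeDecision(policy: PolicyResult, modelConfidence: number): RoutingDecision {
  if (ESCALATE_REASONS.has(policy.reason)) {
    return { route: "human_review", why: `policy exception: ${policy.reason}` };
  }
  if (modelConfidence < 0.8) {
    // Illustrative threshold; tune it against your human-override data.
    return { route: "human_review", why: "low model confidence" };
  }
  return { route: "auto", recommendation: policy.eligible ? "APPROVE" : "DECLINE" };
}

console.log(routeDecision({ eligible: true, reason: "PASS" }, 0.95)); // auto / APPROVE
```

Because the router is plain code, its behavior is testable and auditable independently of the model.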
Implementation
1) Install AutoGen and define your underwriting inputs
Start with strict types. Underwriting agents fail when they receive messy JSON and try to infer meaning from it.
```bash
npm install @autogen/core @autogen/openai zod
```

```typescript
import { z } from "zod";

export const LoanApplicationSchema = z.object({
  applicationId: z.string(),
  customerId: z.string(),
  requestedAmount: z.number().positive(),
  annualIncome: z.number().positive(),
  monthlyDebtPayments: z.number().nonnegative(),
  ficoScore: z.number().int().min(300).max(850),
  kycPassed: z.boolean(),
  countryCode: z.string().length(2),
});

export type LoanApplication = z.infer<typeof LoanApplicationSchema>;

export function debtToIncome(app: LoanApplication): number {
  return app.monthlyDebtPayments / (app.annualIncome / 12);
}
```
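As a quick sanity check on the DTI math, here is a standalone copy of the calculation with a simplified signature (no zod dependency), applied to round numbers:

```typescript
// Standalone copy of the DTI calculation for a worked example.
function debtToIncome(annualIncome: number, monthlyDebtPayments: number): number {
  return monthlyDebtPayments / (annualIncome / 12);
}

// $60,000/year is $5,000/month; $1,500 in monthly debt payments
// gives a DTI of 0.30, comfortably under the 0.45 policy limit used below.
const dti = debtToIncome(60_000, 1_500);
console.log(dti); // 0.3
```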
2) Build deterministic policy tools
Keep the rules outside the model. The agent should explain decisions; it should not invent credit policy.
```typescript
import { LoanApplication, debtToIncome } from "./schema";

export function checkEligibility(app: LoanApplication) {
  const dti = debtToIncome(app);
  if (!app.kycPassed) {
    return { eligible: false, reason: "KYC_FAILED" };
  }
  if (app.ficoScore < 620) {
    return { eligible: false, reason: "FICO_BELOW_MINIMUM" };
  }
  if (dti > 0.45) {
    return { eligible: false, reason: "DTI_ABOVE_LIMIT" };
  }
  return { eligible: true, reason: "PASS" };
}
```
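Applied to sample data, the policy function behaves like this. This is a self-contained copy (types inlined so it runs without the zod schema); the 620 FICO and 0.45 DTI thresholds are this article's illustrative values, not real bank policy:

```typescript
type LoanApplication = {
  applicationId: string;
  customerId: string;
  requestedAmount: number;
  annualIncome: number;
  monthlyDebtPayments: number;
  ficoScore: number;
  kycPassed: boolean;
  countryCode: string;
};

// Same rule order as the policy module: KYC first, then FICO, then DTI.
function checkEligibility(app: LoanApplication) {
  const dti = app.monthlyDebtPayments / (app.annualIncome / 12);
  if (!app.kycPassed) return { eligible: false, reason: "KYC_FAILED" };
  if (app.ficoScore < 620) return { eligible: false, reason: "FICO_BELOW_MINIMUM" };
  if (dti > 0.45) return { eligible: false, reason: "DTI_ABOVE_LIMIT" };
  return { eligible: true, reason: "PASS" };
}

const base = {
  applicationId: "APP-1001",
  customerId: "CUST-42",
  requestedAmount: 15000,
  countryCode: "US",
  kycPassed: true,
};

// 1500 / (60000 / 12) = 0.30 DTI → passes every gate.
const pass = checkEligibility({ ...base, annualIncome: 60000, monthlyDebtPayments: 1500, ficoScore: 700 });

// 2500 / (60000 / 12) = 0.50 DTI → declined on the DTI rule.
const declined = checkEligibility({ ...base, annualIncome: 60000, monthlyDebtPayments: 2500, ficoScore: 700 });

console.log(pass.reason, declined.reason); // PASS DTI_ABOVE_LIMIT
```

Note that rule order matters for the reason code you report: an applicant who fails both KYC and DTI is reported as `KYC_FAILED`.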
3) Wire an AutoGen assistant with tools
This pattern uses `AssistantAgent` plus tool functions that the model can call. In underwriting, that gives you structured reasoning without letting the model decide policy by itself.
```typescript
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";
import { checkEligibility } from "./policy";
import { LoanApplicationSchema } from "./schema";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
});

const underwritingAgent = new AssistantAgent({
  name: "underwriting_agent",
  modelClient,
  systemMessage:
    "You are a retail banking underwriting assistant. Use only provided tools for policy checks. " +
    "Do not approve applications outside policy. Always return a concise decision summary with reasons.",
});

underwritingAgent.registerTool({
  name: "checkEligibility",
  description: "Evaluate underwriting eligibility using bank policy.",
  parameters: {
    type: "object",
    properties: {
      applicationId: { type: "string" },
      customerId: { type: "string" },
      requestedAmount: { type: "number" },
      annualIncome: { type: "number" },
      monthlyDebtPayments: { type: "number" },
      ficoScore: { type: "number" },
      kycPassed: { type: "boolean" },
      countryCode: { type: "string" },
    },
    required: [
      "applicationId",
      "customerId",
      "requestedAmount",
      "annualIncome",
      "monthlyDebtPayments",
      "ficoScore",
      "kycPassed",
      "countryCode",
    ],
    additionalProperties: false,
  },
  execute(args: unknown) {
    // Re-validate at the tool boundary: never trust model-shaped arguments.
    const app = LoanApplicationSchema.parse(args);
    return checkEligibility(app);
  },
});
```
4) Run the evaluation loop and persist an audit record
For production use, capture both the input and the final output. In banking, “the model said so” is not evidence.
```typescript
async function underwrite(applicationPayload: unknown) {
  const application = LoanApplicationSchema.parse(applicationPayload);

  // If your installed version uses different message classes,
  // adapt this to your application-layer message shape.
  // The core pattern is still AssistantAgent.run([...]).
  const result = await underwritingAgent.run([
    {
      role: "user",
      content:
        "Evaluate this loan application using policy tools only:\n" +
        JSON.stringify(application, null, 2),
    },
  ]);

  await saveAuditLog({
    applicationId: application.applicationId,
    inputSnapshot: JSON.stringify(application),
    agentOutput: JSON.stringify(result),
    timestampUtc: new Date().toISOString(),
    channel: "retail-banking-underwriting",
    residencyRegion: process.env.DATA_REGION ?? "us-east-1",
  });

  return result;
}

// Stub: replace with a write to your immutable audit store.
async function saveAuditLog(record: unknown) {
  console.log("AUDIT", record);
}
```
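The `saveAuditLog` stub above just prints to the console. If your audit store cannot natively enforce write-once semantics, one common tamper-evidence technique is to hash-chain records so that any later edit is detectable. A minimal sketch using Node's built-in `crypto` module; the `AuditRecord` shape and the `"GENESIS"` sentinel are illustrative assumptions:

```typescript
import { createHash } from "node:crypto";

type AuditRecord = {
  applicationId: string;
  inputSnapshot: string;
  agentOutput: string;
  timestampUtc: string;
  prevHash: string; // hash of the previous record in the chain
  hash?: string;
};

// Seal a record by hashing its contents together with its predecessor's hash.
function sealRecord(record: Omit<AuditRecord, "hash">): AuditRecord & { hash: string } {
  const hash = createHash("sha256").update(JSON.stringify(record)).digest("hex");
  return { ...record, hash };
}

// Verify the chain: each record must hash to its stored value and
// point at the hash of the record before it.
function verifyChain(records: AuditRecord[]): boolean {
  let prevHash = "GENESIS";
  for (const r of records) {
    if (r.prevHash !== prevHash) return false;
    const { hash, ...body } = r;
    const recomputed = createHash("sha256").update(JSON.stringify(body)).digest("hex");
    if (hash !== recomputed) return false;
    prevHash = hash;
  }
  return true;
}

// Usage: seal two records and verify the chain end-to-end.
const first = sealRecord({
  applicationId: "APP-1001",
  inputSnapshot: "{}",
  agentOutput: "{}",
  timestampUtc: new Date().toISOString(),
  prevHash: "GENESIS",
});
const second = sealRecord({
  applicationId: "APP-1002",
  inputSnapshot: "{}",
  agentOutput: "{}",
  timestampUtc: new Date().toISOString(),
  prevHash: first.hash,
});
console.log(verifyChain([first, second])); // true
```

This does not replace WORM storage where your control framework requires it, but it gives reviewers a cheap integrity check over exported logs.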
If your workflow needs multiple roles — for example a policy checker plus a fraud analyst — move to `GroupChat` and `GroupChatManager`. That gives you explicit orchestration between agents instead of one large prompt trying to do everything.
Production Considerations
- Data residency
  - Keep PII and bureau data in-region.
  - Do not send raw sensitive fields to external endpoints unless your legal/compliance team has approved that path.
- Auditability
  - Persist input payloads, tool calls, intermediate decisions, final outputs, model version, prompt version, and timestamp.
  - Store immutable logs in WORM-capable storage if your control framework requires it.
- Guardrails
  - Enforce hard policy checks in code before any approval recommendation leaves the service.
  - Use allowlisted tools only; do not expose arbitrary database queries or free-form HTTP calls to the agent.
- Monitoring
  - Track approval rate by segment, override rate by human reviewers, tool failure rate, latency p95/p99, and drift in key features like FICO distribution or DTI bands.
  - Alert when the agent starts producing unsupported rationales or when its recommendation distribution shifts materially.
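Approval and override rates per segment are straightforward aggregations over your decision logs, so they are a good first monitoring signal. A minimal sketch; the `DecisionLog` shape and segment labels are illustrative, not a real log schema:

```typescript
type DecisionLog = {
  segment: string; // e.g. a product or FICO-band label
  approved: boolean;
  overriddenByHuman: boolean;
};

// Aggregate approval and human-override rates per segment.
function ratesBySegment(logs: DecisionLog[]) {
  const counts: Record<string, { total: number; approved: number; overridden: number }> = {};
  for (const log of logs) {
    let s = counts[log.segment];
    if (!s) {
      s = { total: 0, approved: 0, overridden: 0 };
      counts[log.segment] = s;
    }
    s.total += 1;
    if (log.approved) s.approved += 1;
    if (log.overriddenByHuman) s.overridden += 1;
  }
  const rates: Record<string, { approvalRate: number; overrideRate: number }> = {};
  for (const [seg, s] of Object.entries(counts)) {
    rates[seg] = {
      approvalRate: s.approved / s.total,
      overrideRate: s.overridden / s.total,
    };
  }
  return rates;
}

const rates = ratesBySegment([
  { segment: "fico_700_plus", approved: true, overriddenByHuman: false },
  { segment: "fico_700_plus", approved: true, overriddenByHuman: true },
  { segment: "fico_700_plus", approved: false, overriddenByHuman: false },
  { segment: "thin_file", approved: false, overriddenByHuman: false },
]);
// rates["fico_700_plus"] holds approvalRate 2/3 and overrideRate 1/3.
```

A rising override rate in one segment is often the earliest sign that the agent's rationales have drifted from policy for that population.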
Common Pitfalls
- Letting the LLM make credit decisions directly
  The model should recommend based on deterministic rules; it should not invent thresholds or override policy. Put approval logic in code and keep the agent in an explanation-and-orchestration role.
- Skipping schema validation
  Retail banking inputs are messy across channels. Validate every request with `zod` or an equivalent library before invoking AutoGen so bad data does not become bad recommendations.
- Ignoring exception handling paths
  Not every case should be auto-approved or auto-declined. Route thin-file customers, borderline DTI cases, missing KYC records, and suspected fraud to human review with full context attached.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.