How to Build an Underwriting Agent Using AutoGen in TypeScript for Fintech
An underwriting agent in fintech takes a loan, card, or merchant application, gathers the relevant facts, checks policy rules, and produces a decision recommendation with a traceable rationale. It matters because underwriting is where speed, consistency, compliance, and loss control collide; if you automate it badly, you either leak risk or create an audit problem.
Architecture
- Application intake service
  - Accepts applicant data from your API or queue.
  - Normalizes fields like income, liabilities, business age, bank balance, and jurisdiction.
- Policy/rules layer
  - Encodes hard constraints such as minimum income thresholds, prohibited geographies, KYC/AML flags, and debt-to-income limits.
  - Keeps deterministic decisions out of the LLM.
- AutoGen agent group
  - One agent summarizes the application.
  - One agent evaluates policy fit.
  - One agent drafts the underwriting memo.
  - A manager orchestrates the conversation and stops when enough evidence exists.
- Tooling layer
  - Functions for pulling bureau data, transaction history, sanctions results, and internal risk scores.
  - Tools should be read-only and audited.
- Decision store
  - Persists the final recommendation, reasons, model version, prompt version, and evidence hashes.
  - Required for auditability and dispute handling.
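As a sketch, the decision store record described above might be typed like this. The field names are illustrative assumptions, not a fixed schema; adapt them to your case-management system.

```typescript
// Illustrative decision record for the decision store.
// Field names are assumptions; align them with your own schema.
type UnderwritingDecisionRecord = {
  applicationId: string;
  recommendation: "APPROVE" | "REVIEW" | "DECLINE";
  reasons: string[];        // tied to specific application fields
  modelVersion: string;     // LLM build used to draft the memo
  promptVersion: string;    // version of the prompt templates
  evidenceHashes: string[]; // hashes of bureau/tx evidence, not raw data
  decidedAt: string;        // ISO-8601 timestamp
};

const record: UnderwritingDecisionRecord = {
  applicationId: "app_123",
  recommendation: "REVIEW",
  reasons: ["Requested amount exceeds policy threshold"],
  modelVersion: "model-2024-06",
  promptVersion: "underwriting-memo-v3",
  evidenceHashes: ["sha256:<hash-of-bureau-report>"],
  decidedAt: new Date().toISOString()
};
```

Storing hashes rather than raw evidence keeps the record verifiable without duplicating sensitive data.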
Implementation
- Install AutoGen for TypeScript and define your domain types.

  Use a TypeScript AutoGen package that exposes `AssistantAgent`, `UserProxyAgent`, `GroupChat`, and `GroupChatManager`. Keep your underwriting inputs typed so you can validate them before the LLM sees anything.
```ts
import { AssistantAgent, UserProxyAgent, GroupChat } from "autogen";

type UnderwritingApplication = {
  applicantId: string;
  country: string;
  requestedAmount: number;
  annualIncome: number;
  existingDebt: number;
  businessAgeMonths?: number;
  kycPassed: boolean;
  amlFlag: boolean;
};

const application: UnderwritingApplication = {
  applicantId: "app_123",
  country: "KE",
  requestedAmount: 250000,
  annualIncome: 1200000,
  existingDebt: 300000,
  businessAgeMonths: 18,
  kycPassed: true,
  amlFlag: false
};
```
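Types vanish at runtime, so a runtime guard should reject malformed payloads before any agent sees them. The checks below are a hand-rolled sketch; a schema library such as zod would serve the same purpose.

```typescript
// Returns a list of validation errors; an empty list means the payload
// is safe to hand to the agents. Hand-rolled for illustration only.
function validateApplication(a: {
  applicantId?: unknown; country?: unknown; requestedAmount?: unknown;
  annualIncome?: unknown; existingDebt?: unknown;
  kycPassed?: unknown; amlFlag?: unknown;
}): string[] {
  const errors: string[] = [];
  if (typeof a.applicantId !== "string" || a.applicantId === "")
    errors.push("applicantId must be a non-empty string");
  if (typeof a.country !== "string" || a.country.length !== 2)
    errors.push("country must be an ISO-3166 alpha-2 code");
  if (typeof a.requestedAmount !== "number" || a.requestedAmount <= 0)
    errors.push("requestedAmount must be a positive number");
  if (typeof a.annualIncome !== "number" || a.annualIncome < 0)
    errors.push("annualIncome must be a non-negative number");
  if (typeof a.existingDebt !== "number" || a.existingDebt < 0)
    errors.push("existingDebt must be a non-negative number");
  if (typeof a.kycPassed !== "boolean")
    errors.push("kycPassed must be a boolean");
  if (typeof a.amlFlag !== "boolean")
    errors.push("amlFlag must be a boolean");
  return errors;
}
```

Run this at the intake boundary and reject the request with the error list before any model call is made.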
- Create agents with narrow responsibilities.

  Do not build one "smart" agent that does everything. Split summary, policy review, and memo writing so you can inspect each step during audits.
```ts
const summaryAgent = new AssistantAgent({
  name: "summary_agent",
  systemMessage:
    "You summarize underwriting applications into concise factual bullet points. Do not decide approval."
});

const policyAgent = new AssistantAgent({
  name: "policy_agent",
  systemMessage:
    "You evaluate the application against underwriting policy. Return only risk factors and policy breaches."
});

const memoAgent = new AssistantAgent({
  name: "memo_agent",
  systemMessage:
    "You draft an underwriting memo using only provided facts. Include a decision recommendation and rationale."
});

const operator = new UserProxyAgent({
  name: "operator",
  humanInputMode: "NEVER"
});
```
- Wire a group chat with deterministic pre-checks.

  Run hard rules before the LLM. If the AML flag is set or KYC has not passed, short-circuit immediately. That keeps regulated decisions from depending on model output.
```ts
function precheck(app: UnderwritingApplication) {
  if (!app.kycPassed) return { decision: "DECLINE", reason: "KYC failed" };
  if (app.amlFlag) return { decision: "DECLINE", reason: "AML flag present" };
  if (app.requestedAmount > app.annualIncome * 2) {
    return { decision: "REVIEW", reason: "Requested amount exceeds policy threshold" };
  }
  return null;
}

const blocked = precheck(application);
if (blocked) {
  console.log(JSON.stringify(blocked));
} else {
  const chat = new GroupChat({
    agents: [operator as any, summaryAgent as any, policyAgent as any, memoAgent as any],
    messages: [],
    maxRounds: 6
  });
  console.log("GroupChat initialized:", chat instanceof GroupChat);
}
```
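The precheck can grow additional policy gates over time. For example, the debt-to-income limit mentioned in the policy layer might look like the sketch below; the 0.40 ceiling is an illustrative assumption, not a regulatory figure.

```typescript
// Illustrative debt-to-income gate. The 0.4 ceiling is an assumption;
// substitute your actual policy limit.
function dtiCheck(
  annualIncome: number,
  existingDebt: number,
  requestedAmount: number
): { decision: "REVIEW"; reason: string } | null {
  // Treat the requested amount as new debt for a conservative ratio.
  const dti = (existingDebt + requestedAmount) / annualIncome;
  return dti > 0.4
    ? { decision: "REVIEW", reason: `DTI ${dti.toFixed(2)} exceeds 0.40 limit` }
    : null;
}
```

With the sample application above (income 1,200,000, debt 300,000, requested 250,000), this gate fires even though the income-multiple check passes, which is exactly why layered deterministic rules are worth keeping outside the model.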
- Generate an auditable underwriting memo.

  The output should be structured JSON or a fixed template so downstream systems can store it without parsing free-form prose. In production you want a stable contract for approval workflows and case management.
```ts
async function runUnderwritingMemo() {
  const prompt = `
Application:
${JSON.stringify(application, null, 2)}

Tasks:
1) Summarize facts.
2) Check policy concerns.
3) Draft a final recommendation as APPROVE / REVIEW / DECLINE.
4) Include reasons tied to specific fields.
`;
  const result = await memoAgent.generateReply([
    { role: "user", content: prompt }
  ]);
  console.log(result);
}

runUnderwritingMemo().catch(console.error);
```
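If the memo agent is instructed to return JSON, a small parser can enforce the contract before the result reaches case management. The field names below are assumptions for illustration; use whatever contract your downstream systems expect.

```typescript
// Validate the memo agent's raw output against a fixed contract.
// Field names ("recommendation", "reasons") are illustrative.
type MemoResult = {
  recommendation: "APPROVE" | "REVIEW" | "DECLINE";
  reasons: string[];
};

function parseMemo(raw: string): MemoResult {
  const parsed = JSON.parse(raw);
  const allowed = ["APPROVE", "REVIEW", "DECLINE"];
  if (!allowed.includes(parsed.recommendation)) {
    throw new Error(`Invalid recommendation: ${parsed.recommendation}`);
  }
  if (!Array.isArray(parsed.reasons) || parsed.reasons.length === 0) {
    throw new Error("Memo must include at least one reason");
  }
  return { recommendation: parsed.recommendation, reasons: parsed.reasons };
}
```

Rejecting malformed output here, rather than deep in the approval workflow, keeps failures visible and attributable to the model step.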
A practical pattern is to keep the final recommendation outside the model when possible:
| Layer | Responsibility | Trust level |
|---|---|---|
| Precheck | Hard compliance gates | Highest |
| Policy agent | Explain rule impacts | Medium |
| Memo agent | Human-readable rationale | Lower |
| Case manager | Final persisted outcome | Highest |
Production Considerations
- Keep regulated decisions deterministic
  - Use rules for KYC/AML blocks, jurisdiction restrictions, maximum exposure limits, and mandatory document checks.
  - Let the LLM explain decisions; do not let it invent them.
- Log everything needed for audit
  - Persist input payload hashes, prompt versions, model version, tool calls, timestamps, and the final recommendation.
  - Store evidence references rather than raw sensitive data where possible.
- Respect data residency
  - Route EU customer data to EU-hosted infrastructure only.
  - Avoid sending PII to third-party tools unless your DPA and regional controls allow it.
- Add monitoring around drift and overrides
  - Track approval rate by segment, manual override rate by underwriter team, false decline rate, and policy breach counts.
  - Alert when model outputs diverge from historical baselines.
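The logging guidance above (hash the input, record versions) can be sketched with Node's built-in crypto module. The entry shape is an assumption; shape it to your audit store.

```typescript
import { createHash } from "crypto";

// Build an audit entry that stores a SHA-256 fingerprint of the input
// payload instead of raw PII. Field names are illustrative.
function auditEntry(
  payload: object,
  modelVersion: string,
  promptVersion: string
) {
  const inputHash = createHash("sha256")
    .update(JSON.stringify(payload))
    .digest("hex");
  return {
    inputHash,      // reproducible fingerprint of the exact input
    modelVersion,
    promptVersion,
    loggedAt: new Date().toISOString()
  };
}
```

Because the hash is deterministic, a later dispute can replay the stored payload from case storage and confirm it matches the fingerprint that was logged at decision time.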
Common Pitfalls
- Using one agent for everything

  This makes debugging impossible and increases hallucination risk. Split summarization, policy evaluation, and memo generation into separate agents with narrow prompts.

- Letting the LLM make compliance calls

  If KYC fails or a sanctions check returns a hit, do not ask the model what to do next. Hard-stop in code before any generation happens.

- Storing raw PII in prompts and logs

  That creates unnecessary regulatory exposure. Redact sensitive fields before logging and keep full records in encrypted case storage with strict access controls.
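A simple redaction pass, applied before anything is logged, keeps the field names while masking the values. The sensitive-field list here is an assumption; extend it to whatever your payloads actually carry.

```typescript
// Replace sensitive field values with a marker before logging.
// SENSITIVE_FIELDS is illustrative; extend it for your payloads.
const SENSITIVE_FIELDS = new Set(["applicantId", "annualIncome", "existingDebt"]);

function redactForLogging(
  payload: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    out[key] = SENSITIVE_FIELDS.has(key) ? "[REDACTED]" : value;
  }
  return out;
}
```

Pair this with the earlier hashing approach: log the redacted payload for readability and the hash for integrity, and keep the full record only in encrypted case storage.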
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.