How to Build a Loan Approval Agent Using AutoGen in TypeScript for Insurance
A loan approval agent for insurance automates the first pass on financing requests tied to policies, premiums, or premium financing. It gathers applicant data, checks policy and payment history, scores risk, and produces a decision package that a human underwriter can approve or override. That matters because insurance teams need speed without losing control over compliance, auditability, and jurisdiction-specific rules.
Architecture
- Applicant intake service
  - Accepts structured loan request payloads from the policy admin system or portal.
  - Normalizes fields like applicant identity, policy number, premium amount, and requested term.
- AutoGen orchestration layer
  - Uses `AssistantAgent` for analysis and `UserProxyAgent` for controlled tool execution.
  - Coordinates the conversation and keeps the workflow deterministic.
- Risk and compliance tools
  - Pulls policy status, claims history, delinquency data, KYC/AML flags, and jurisdiction rules.
  - Returns only the minimum necessary data to the model.
- Decision engine
  - Converts agent output into an approvable structure: approve, reject, or escalate.
  - Enforces hard rules before any AI-generated recommendation is accepted.
- Audit and evidence store
  - Persists prompts, tool outputs, model responses, timestamps, and final decisions.
  - Supports regulator review and internal model governance.
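The five components above can be sketched as a single flow. This is an illustrative stub, not AutoGen code: every name here is invented to show only the shape of each hand-off between stages.

```typescript
// Illustrative sketch of the pipeline described above. All names are
// hypothetical; the tool calls and decision engine are stubbed.

type LoanDecision = "approve" | "reject" | "escalate";

interface DecisionPackage {
  requestId: string;
  decision: LoanDecision;
  evidence: string[]; // facts the human underwriter can inspect
  auditedAt: string;  // timestamp persisted to the evidence store
}

export function runPipeline(raw: { requestId: string; policyNumber: string }): DecisionPackage {
  // 1. Intake: normalize the payload (pass-through stub).
  const request = { ...raw };

  // 2-3. Orchestration + tools: gather the minimum facts (stubbed values).
  const facts = { policyActive: true, amlHit: false };

  // 4. Decision engine: hard rules run before any model output is trusted.
  const decision: LoanDecision = facts.amlHit
    ? "reject"
    : facts.policyActive
      ? "escalate" // a model recommendation would be reviewed at this point
      : "reject";

  // 5. Audit store: record exactly which facts produced the decision.
  return {
    requestId: request.requestId,
    decision,
    evidence: Object.entries(facts).map(([k, v]) => `${k}=${v}`),
    auditedAt: new Date().toISOString(),
  };
}
```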
Implementation
1. Install AutoGen and define your request shape

Use the TypeScript AutoGen package and keep your domain objects strict. Insurance workflows break fast when you let free-form JSON drift across services.

```shell
npm install @autogenai/autogen zod
```

```typescript
import { z } from "zod";

export const LoanRequestSchema = z.object({
  requestId: z.string(),
  applicantId: z.string(),
  policyNumber: z.string(),
  requestedAmount: z.number().positive(),
  premiumAmount: z.number().positive(),
  jurisdiction: z.string(),
  consentToProcess: z.boolean(),
});

export type LoanRequest = z.infer<typeof LoanRequestSchema>;
2. Create tool functions for insurance checks

Keep all sensitive system access behind tools. The model should reason over facts; it should not directly query your core systems.

```typescript
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

// Stubbed tool functions; in production these call your policy admin
// and compliance systems.
async function getPolicyStatus(policyNumber: string) {
  return {
    policyNumber,
    active: true,
    daysPastDue: 0,
    claimsInLast12Months: 1,
  };
}

async function getComplianceFlags(applicantId: string) {
  return {
    applicantId,
    kycPassed: true,
    amlWatchlistHit: false,
    residencyAllowed: true,
  };
}

const assistant = new AssistantAgent({
  name: "loan_underwriter",
  systemMessage:
    "You assess insurance-related loan applications. Follow jurisdiction rules. Never approve if compliance flags fail.",
  llmConfig: {
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY!,
  },
});

const userProxy = new UserProxyAgent({
  name: "tool_executor",
  humanInputMode: "NEVER",
  maxConsecutiveAutoReply: 3,
});
```
3. Run a controlled AutoGen conversation

This is the core pattern: send a structured request to the assistant, let it ask for tool-backed facts, then force a final decision in JSON. In production you would wire actual tool registration through your AutoGen runtime; the important part is that the agent never sees raw database access.

```typescript
import { LoanRequestSchema } from "./schema";

export async function evaluateLoanRequest(input: unknown) {
  const request = LoanRequestSchema.parse(input);
  const policy = await getPolicyStatus(request.policyNumber);
  const compliance = await getComplianceFlags(request.applicantId);

  const prompt = `
Evaluate this insurance loan application using only the provided facts.

Request: ${JSON.stringify(request)}
Policy: ${JSON.stringify(policy)}
Compliance: ${JSON.stringify(compliance)}

Output strict JSON with:
{
  "decision": "approve" | "reject" | "escalate",
  "reasonCodes": string[],
  "summary": string,
  "requiredHumanReviewer": boolean
}

Rules:
- Reject if consentToProcess is false
- Reject if kycPassed is false or amlWatchlistHit is true
- Escalate if residencyAllowed is false
- Escalate if daysPastDue > 30
`;

  const result = await assistant.generateReply([
    { role: "user", content: prompt },
  ]);

  return {
    requestId: request.requestId,
    agentResponse: result.content,
    policy,
    compliance,
  };
}
```
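The prompt demands strict JSON, but models sometimes wrap it in markdown fences or surrounding prose. Before the decision gate parses `agentResponse`, a defensive extractor helps; a minimal sketch (`extractJson` is a hypothetical helper, not an AutoGen API):

```typescript
// Hypothetical helper: pull the first top-level {...} object out of
// model output that may contain fences or prose, then parse it.
export function extractJson(text: string): unknown {
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("no JSON object found in model output");
  }
  return JSON.parse(text.slice(start, end + 1));
}
```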
4. Validate and gate the final decision

Never trust model output directly. Apply deterministic business rules first, then parse the model's response, and persist an audit record with every input used in the decision.

```typescript
type Decision = {
  decision: "approve" | "reject" | "escalate";
  reasonCodes: string[];
  summary: string;
  requiredHumanReviewer: boolean;
};

export function enforceRules(
  requestConsentToProcess: boolean,
  policyDaysPastDue: number,
  kycPassed: boolean,
  amlWatchlistHit: boolean,
  residencyAllowed: boolean,
  decisionText: string
): Decision {
  // Deterministic gates run first, so a malformed model response can
  // never bypass a hard rejection or escalation.
  if (!requestConsentToProcess) {
    return {
      decision: "reject",
      reasonCodes: ["NO_CONSENT"],
      summary: "Applicant did not consent to processing.",
      requiredHumanReviewer: false,
    };
  }

  if (!kycPassed || amlWatchlistHit) {
    return {
      decision: "reject",
      reasonCodes: ["COMPLIANCE_FAIL"],
      summary: "Compliance screening failed.",
      requiredHumanReviewer: false,
    };
  }

  if (!residencyAllowed || policyDaysPastDue > 30) {
    return {
      decision: "escalate",
      reasonCodes: !residencyAllowed ? ["DATA_RESIDENCY_REVIEW"] : ["PAST_DUE_REVIEW"],
      summary: !residencyAllowed
        ? "Jurisdiction requires manual review."
        : "Policy delinquency exceeds automated threshold.",
      requiredHumanReviewer: true,
    };
  }

  // Only now trust the model's JSON; escalate instead of crashing if it
  // is malformed.
  try {
    return JSON.parse(decisionText) as Decision;
  } catch {
    return {
      decision: "escalate",
      reasonCodes: ["UNPARSEABLE_MODEL_OUTPUT"],
      summary: "Model response was not valid JSON.",
      requiredHumanReviewer: true,
    };
  }
}
```
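The audit record mentioned above could be shaped like this. All field names and version constants here are illustrative; adapt them to whatever your evidence store actually persists.

```typescript
// Illustrative audit record: captures every input behind a decision so a
// regulator or reviewer can reconstruct why it was made.
interface AuditRecord {
  requestId: string;
  promptVersion: string;             // which prompt template produced this run
  modelVersion: string;
  inputs: Record<string, unknown>;   // every fact the decision used
  rawModelOutput: string;
  finalDecision: "approve" | "reject" | "escalate";
  decidedAt: string;                 // ISO timestamp
}

export function buildAuditRecord(
  requestId: string,
  inputs: Record<string, unknown>,
  rawModelOutput: string,
  finalDecision: AuditRecord["finalDecision"]
): AuditRecord {
  return {
    requestId,
    promptVersion: "loan-eval-v1", // illustrative constant
    modelVersion: "gpt-4o-mini",
    inputs,
    rawModelOutput,
    finalDecision,
    decidedAt: new Date().toISOString(),
  };
}
```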
Production Considerations
- Data residency
  - Route EU policyholder data to region-locked inference endpoints.
  - Do not send full PII to external models unless your legal basis and processor agreements are signed off.
- Audit trail
  - Store prompt version, tool responses, model version, timestamps, reviewer identity, and final disposition.
  - Regulators will ask why a loan was approved or rejected; “the model said so” is not acceptable.
- Guardrails
  - Hard-code rejection conditions for consent failures, AML hits, sanctions hits, and invalid jurisdiction.
  - Use AI for ranking and summarization; use code for eligibility enforcement.
- Monitoring
  - Track approval rate by product line, escalation rate by region, override rate by underwriters, and hallucination incidents.
  - Alert when outputs drift from historical patterns or when one jurisdiction suddenly spikes in escalations.
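The alerting idea above reduces to a small check once you aggregate escalation rates per jurisdiction. A sketch under that assumption; the 2x multiplier is an arbitrary example threshold, not a recommendation:

```typescript
// Flag jurisdictions whose recent escalation rate exceeds their
// historical baseline by a multiplier. Thresholds are illustrative.
export function escalationSpikes(
  baseline: Record<string, number>, // historical escalation rate per jurisdiction (0..1)
  recent: Record<string, number>,   // escalation rate in the current window
  multiplier = 2
): string[] {
  return Object.keys(recent).filter((j) => {
    const base = baseline[j] ?? 0;
    // A jurisdiction with no baseline at all is itself worth a look.
    return base === 0 ? recent[j] > 0 : recent[j] / base >= multiplier;
  });
}
```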
Common Pitfalls
- Letting the model make final eligibility decisions. This is the fastest way to create compliance debt. Keep approvals bounded by deterministic rules in code; use AutoGen for analysis and explanation only.
- Sending raw customer records into prompts. Insurance data minimization matters. Pass only the fields needed for the task and redact identifiers where possible.
- Skipping human review paths. Some cases must escalate by design: missing consent, cross-border residency issues, adverse AML signals, or ambiguous policy status. Build those branches into the workflow before you ship anything to production.
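The data-minimization pitfall is easiest to enforce mechanically: apply a field allowlist before anything reaches a prompt. A sketch; the allowlist contents are illustrative and should be derived from what the task actually needs:

```typescript
// Allowlist-based minimization: only these fields may appear in a prompt.
// The list here is an example, not a recommendation.
const PROMPT_FIELDS = ["requestId", "requestedAmount", "jurisdiction"] as const;

export function minimizeForPrompt(
  record: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const field of PROMPT_FIELDS) {
    if (field in record) out[field] = record[field];
  }
  return out; // everything not allowlisted (names, IDs, PII) is dropped
}
```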
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.