How to Build a Loan Approval Agent Using AutoGen in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
loan-approval · autogen · typescript · wealth-management

A loan approval agent in wealth management takes a client’s application, checks the portfolio and relationship context, gathers the right internal data, and produces a decision recommendation with an audit trail. It matters because high-net-worth lending is not just about credit score; it’s about collateral quality, concentration risk, suitability, compliance, and whether the decision can be defended later to risk, compliance, and regulators.

Architecture

  • Client intake service

    • Accepts loan requests from advisor portals or internal ops tools.
    • Normalizes inputs like requested amount, collateral type, jurisdiction, and purpose of funds.
  • Policy and eligibility engine

    • Encodes lending rules: LTV limits, minimum liquidity thresholds, concentration caps, and restricted asset classes.
    • Keeps hard rejects outside the agent loop.
  • AutoGen multi-agent workflow

    • One agent gathers facts.
    • One agent evaluates risk.
    • One agent checks compliance.
    • One agent produces the final recommendation.
  • Data access layer

    • Pulls portfolio positions, account history, KYC status, and prior exceptions.
    • Enforces data residency and least-privilege access.
  • Decision ledger

    • Stores prompts, tool outputs, model responses, timestamps, and final recommendation.
    • Supports auditability for model risk management and internal review.
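To make the decision ledger concrete, here is one possible shape for a ledger entry, with one append-only record per agent step. The field names are illustrative, not a fixed schema:

```typescript
// One append-only entry per agent step or final recommendation.
// All field names are illustrative; adapt them to your audit schema.
interface LedgerEntry {
  caseId: string;
  step: "intake" | "risk" | "compliance" | "decision";
  promptVersion: string;
  input: string; // exact prompt or tool input sent
  output: string; // raw model or tool output, unedited
  timestampUtc: string;
}

function makeLedgerEntry(
  caseId: string,
  step: LedgerEntry["step"],
  promptVersion: string,
  input: string,
  output: string
): LedgerEntry {
  return {
    caseId,
    step,
    promptVersion,
    input,
    output,
    timestampUtc: new Date().toISOString(),
  };
}
```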

Implementation

1) Install AutoGen for TypeScript and define your agents

Use AutoGen’s TypeScript package and create separate agents for retrieval, risk analysis, compliance review, and final decisioning. For wealth management workflows, keep the “decision” agent narrow: it should synthesize evidence, not invent policy.

npm install @autogenai/autogen openai zod

import { AssistantAgent } from "@autogenai/autogen";
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

// Minimal adapter that forwards chat messages to the OpenAI API so all
// four agents share one model client and one deterministic temperature.
const sharedModelClient = {
  create: async (params: {
    messages: OpenAI.Chat.ChatCompletionMessageParam[];
    temperature?: number;
  }) => {
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: params.messages,
      temperature: params.temperature ?? 0,
    });

    return {
      content: response.choices[0].message.content ?? "",
      usage: response.usage,
    };
  },
};

export const intakeAgent = new AssistantAgent({
  name: "intake_agent",
  systemMessage:
    "Extract required loan facts from client data. Do not make a decision.",
  modelClient: sharedModelClient as any,
});

export const riskAgent = new AssistantAgent({
  name: "risk_agent",
  systemMessage:
    "Assess credit and collateral risk using provided facts. Return structured findings only.",
  modelClient: sharedModelClient as any,
});

export const complianceAgent = new AssistantAgent({
  name: "compliance_agent",
  systemMessage:
    "Check wealth-management lending compliance issues including suitability, KYC/AML flags, jurisdiction constraints, and restricted assets.",
  modelClient: sharedModelClient as any,
});

export const decisionAgent = new AssistantAgent({
  name: "decision_agent",
  systemMessage:
    "Combine findings into approve/decline/refer with reasons grounded only in supplied evidence.",
  modelClient: sharedModelClient as any,
});

2) Add deterministic pre-checks before the LLM runs

This is where most teams get it wrong. Hard policy checks must happen before AutoGen sees the case: a deterministic gate is cheap, auditable, and cannot be talked around by a model, and it stops you wasting tokens on applications that are clearly out of bounds.

type LoanApplication = {
  clientId: string;
  amount: number;
  jurisdiction: string;
  collateralValue: number;
  collateralType: string;
};

function precheck(app: LoanApplication) {
  // Guard against zero or negative collateral before dividing.
  if (app.collateralValue <= 0) {
    return { allowed: false, reason: "Collateral value must be positive" };
  }

  const ltv = app.amount / app.collateralValue;

  if (ltv > 0.6) {
    return { allowed: false, reason: `LTV too high (${ltv.toFixed(2)})` };
  }

  if (["sanctioned", "restricted"].includes(app.collateralType)) {
    return { allowed: false, reason: "Restricted collateral type" };
  }

  if (!["US", "UK", "EU"].includes(app.jurisdiction)) {
    return { allowed: false, reason: "Unsupported jurisdiction" };
  }

  return { allowed: true };
}
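As a sanity check on the arithmetic the gate relies on: a 1.5M request against 3.0M of collateral is an LTV of 0.5, which clears a 0.6 cap, while the same request against 2.0M of collateral is 0.75 and gets a hard reject. A standalone sketch with illustrative numbers (the helper mirrors the precheck's ratio so the snippet runs on its own):

```typescript
// LTV = requested amount / collateral value. Mirrors the precheck above.
function loanToValue(amount: number, collateralValue: number): number {
  if (collateralValue <= 0) throw new Error("Collateral value must be positive");
  return amount / collateralValue;
}

const LTV_CAP = 0.6;

console.log(loanToValue(1_500_000, 3_000_000) <= LTV_CAP); // 0.5, within the cap
console.log(loanToValue(1_500_000, 2_000_000) <= LTV_CAP); // 0.75, hard reject
```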

3) Orchestrate the agents with a single approval flow

The pattern below keeps each agent focused. The intake agent extracts facts from structured input; the other agents review those facts; then the decision agent returns a final recommendation that you can persist to your audit store.

// Import the agents defined in step 1 (the "./agents" path is illustrative;
// adjust it to your project layout).
import {
  intakeAgent,
  riskAgent,
  complianceAgent,
  decisionAgent,
} from "./agents";

async function runLoanReview(applicantData: string) {
  const intakeResult = await intakeAgent.run({
    task: `Extract loan-relevant facts from this application JSON:\n${applicantData}`,
  });

  const riskResult = await riskAgent.run({
    task: `Assess risk using these extracted facts:\n${intakeResult.output}`,
  });

  const complianceResult = await complianceAgent.run({
    task: `Assess compliance using these extracted facts:\n${intakeResult.output}\nRisk notes:\n${riskResult.output}`,
  });

  const decisionResult = await decisionAgent.run({
    task: `Make a recommendation using only this evidence:\nFacts:\n${intakeResult.output}\nRisk:\n${riskResult.output}\nCompliance:\n${complianceResult.output}\nReturn one of APPROVE | DECLINE | REFER with reasons.`,
  });

  return {
    facts: intakeResult.output,
    risk: riskResult.output,
    compliance: complianceResult.output,
    recommendation: decisionResult.output,
  };
}
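The decision agent returns free text, so it is worth normalizing the verdict before anything downstream trusts it. A minimal parser with no extra dependencies (the zod package installed in step 1 could enforce this with a schema instead); anything unparseable falls back to REFER so a human sees it:

```typescript
type Verdict = "APPROVE" | "DECLINE" | "REFER";

// Pull the first recognized verdict token out of the model's free text.
// No clean verdict means REFER, never a silent approve.
function parseVerdict(raw: string): Verdict {
  const match = raw.toUpperCase().match(/\b(APPROVE|DECLINE|REFER)\b/);
  return (match ? match[1] : "REFER") as Verdict;
}
```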

4) Wrap it in an API endpoint with audit logging

In production you want every decision traceable. Persist raw inputs, extracted facts, model outputs, versioned prompts, and the final recommendation in an immutable store.

import express from "express";

const app = express();
app.use(express.json());

app.post("/loan-review", async (req, res) => {
  const appData = req.body as LoanApplication;

  // Deterministic gate first: hard policy rejects never reach the agents.
  const gate = precheck(appData);
  if (!gate.allowed) {
    return res.json({ status: "DECLINE", reason: gate.reason });
  }

  try {
    const result = await runLoanReview(JSON.stringify(appData));

    // saveAuditRecord is your persistence function for the decision ledger.
    await saveAuditRecord({
      clientId: appData.clientId,
      inputs: appData,
      result,
      modelVersion: "gpt-4o-mini",
      policyVersion: "2026-04",
      timestampUtc: new Date().toISOString(),
    });

    res.json(result);
  } catch (err) {
    // Never leak model or tool errors to the caller; log and refer to a human.
    console.error(err);
    res.status(500).json({ status: "REFER", reason: "Review pipeline error" });
  }
});

app.listen(3000);
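The endpoint above leaves saveAuditRecord to your persistence layer. One low-dependency sketch is an append-only JSON Lines file; in production, a versioned object store or WORM-compliant database is a better fit. Serialization is kept separate so it can be tested on its own (the file name is illustrative):

```typescript
import { appendFile } from "node:fs/promises";

// Serialize one audit record as a single JSON line (JSONL).
function toAuditLine(record: Record<string, unknown>): string {
  return JSON.stringify(record) + "\n";
}

// Append-only write: no updates, no deletes, so compliance can
// reconstruct the full decision history later.
async function saveAuditRecord(record: Record<string, unknown>): Promise<void> {
  await appendFile("audit-log.jsonl", toAuditLine(record), "utf8");
}
```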

Production Considerations

  • Deploy in-region

    • Keep client PII and portfolio data inside approved regions.
    • If you serve EU clients or regulated entities there, enforce EU-only storage and inference routing.
  • Log for auditability

    • Store prompt versions, tool outputs, model responses, reviewer overrides, and timestamps.
    • Make logs immutable or append-only so compliance can reconstruct decisions.
  • Add guardrails around recommendations

    • The agent should never override hard policy rules.
    • Require human review for borderline cases like concentrated collateral positions or politically exposed persons.
  • Monitor drift by segment

    • Track approval rates by jurisdiction, advisor desk, asset class backing the loan, and exception category.
    • Watch for changes after prompt updates or model upgrades.
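The segment monitoring above can start as a simple aggregation over the decision ledger. A sketch that computes approval rates per jurisdiction, to be snapshotted before and after any prompt or model change (field names are illustrative):

```typescript
type DecisionRow = {
  jurisdiction: string;
  verdict: "APPROVE" | "DECLINE" | "REFER";
};

// Approval rate per jurisdiction. Run the same aggregation by advisor
// desk, asset class, or exception category to cover the other segments.
function approvalRateBySegment(rows: DecisionRow[]): Record<string, number> {
  const totals: Record<string, { approved: number; total: number }> = {};
  for (const row of rows) {
    const t = (totals[row.jurisdiction] ??= { approved: 0, total: 0 });
    t.total += 1;
    if (row.verdict === "APPROVE") t.approved += 1;
  }
  const rates: Record<string, number> = {};
  for (const [segment, t] of Object.entries(totals)) {
    rates[segment] = t.approved / t.total;
  }
  return rates;
}
```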

Common Pitfalls

  1. Letting the LLM decide policy

    • Don’t ask the model to infer lending thresholds.
    • Encode LTV limits, exposure caps, and restricted asset lists in deterministic code.
  2. Skipping compliance context

    • A technically sound credit answer can still be unusable if it ignores KYC status or suitability constraints.
    • Feed the compliance agent explicit fields for residency, sanctions screening state, account type, and source-of-funds flags.
  3. No audit trail

    • If you cannot explain why a case was approved or declined six months later, the workflow is not production-ready.
    • Persist raw inputs plus every intermediate output from AssistantAgent.run() with versioned prompts and policy snapshots.
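Pitfall 1 and the guardrails section come down to the same rule: when a mandatory-review trigger fires, the model's verdict never stands on its own. A sketch of a deterministic referral gate that runs after the agents and before anything is communicated (the trigger fields and the 0.5 concentration threshold are illustrative):

```typescript
type ReviewCase = {
  isPoliticallyExposed: boolean;
  collateralConcentration: number; // share of collateral in one position, 0..1
  recommendation: "APPROVE" | "DECLINE" | "REFER";
};

// Deterministic overrides: an APPROVE from the model is forced to human
// review whenever a hard trigger fires.
function requiresHumanReview(c: ReviewCase): boolean {
  if (c.recommendation === "REFER") return true;
  if (c.isPoliticallyExposed) return true;
  if (c.collateralConcentration > 0.5) return true;
  return false;
}
```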

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
