How to Build a Loan Approval Agent Using AutoGen in TypeScript for Healthcare

By Cyprian Aarons · Updated 2026-04-21
Tags: loan-approval, autogen, typescript, healthcare

A healthcare loan approval agent automates the first pass on financing requests for clinics, providers, and patients. It reads structured application data, checks policy rules, asks for missing documents, and produces a decision package that a human underwriter can review. In healthcare, that matters because decisions need to be fast, auditable, and compliant with strict privacy and residency constraints.

Architecture

  • User-facing intake layer

    • Accepts loan requests from care finance portals, internal ops tools, or CRM workflows.
    • Normalizes inputs like applicant type, amount, repayment term, facility location, and supporting documents.
  • Policy and compliance agent

    • Evaluates healthcare-specific rules such as minimum documentation, restricted geographies, and eligibility thresholds.
    • Enforces guardrails for PHI handling, consent checks, and data minimization.
  • Underwriting analysis agent

    • Reviews financials, repayment capacity, historical payment behavior, and risk signals.
    • Produces a structured recommendation: approve, reject, or escalate.
  • Audit and traceability layer

    • Stores every tool call, prompt, response, and decision rationale.
    • Supports internal review, regulator requests, and model governance.
  • Human review handoff

    • Routes borderline cases to an underwriter.
    • Keeps final approval authority with a human for regulated healthcare lending flows.
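The layers above all pass around the same decision package, so it helps to pin down its shape once. As a sketch (the field names beyond the decision keys used later in this guide are illustrative), the shared contract might look like this:

```typescript
// Hypothetical shape of the decision package shared by the underwriting
// agent, the audit layer, and the human review handoff.
type Decision = "APPROVE" | "REJECT" | "NEEDS_REVIEW";

interface DecisionPackage {
  decision: Decision;
  reasons: string[];      // human-readable rationale for auditors
  missingItems: string[]; // documents the intake layer should request
  riskFlags: string[];    // signals that force human review
  decidedAt: string;      // ISO timestamp for the audit trail
}

// Anything that is not a clean approval goes to an underwriter.
function needsHumanReview(pkg: DecisionPackage): boolean {
  return pkg.decision !== "APPROVE" || pkg.riskFlags.length > 0;
}
```

Keeping the human-review rule next to the type makes it hard for a new code path to auto-approve a flagged application by accident.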

Implementation

1) Install AutoGen for TypeScript and define your domain types

For TypeScript agents in AutoGen, use the @autogen/agentchat package. Keep your loan application schema strict so the agent never guesses around missing fields.

npm install @autogen/agentchat zod dotenv

import "dotenv/config";
import { z } from "zod";
import { AssistantAgent } from "@autogen/agentchat";

const LoanApplicationSchema = z.object({
  applicantId: z.string(),
  organizationType: z.enum(["clinic", "hospital", "patient", "provider"]),
  requestedAmount: z.number().positive(),
  annualRevenue: z.number().nonnegative().optional(),
  monthlyDebtService: z.number().nonnegative().optional(),
  state: z.string().min(2),
  hasConsentToUseData: z.boolean(),
  documentsProvided: z.array(z.string()),
});

type LoanApplication = z.infer<typeof LoanApplicationSchema>;

2) Create the underwriting agent with a constrained system message

Use AssistantAgent for the reasoning step. The system message should force structured output and prohibit free-form approval when required fields are missing.

const underwritingAgent = new AssistantAgent({
  name: "healthcare_underwriter",
  modelClient: {
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
    temperature: 0,
  },
  systemMessage: `
You are a healthcare loan underwriting assistant.
Rules:
- Never process PHI unless consent is present.
- If required fields are missing, return NEEDS_REVIEW.
- For healthcare applicants, flag compliance risks related to data residency and state restrictions.
- Output only valid JSON with keys:
  decision ("APPROVE" | "REJECT" | "NEEDS_REVIEW"),
  reasons (string[]),
  missingItems (string[]),
  riskFlags (string[])
`,
});

3) Add a deterministic policy check before the model runs

Do not let the LLM be your first line of defense. Use code for hard rules like consent requirements and document presence.

function precheck(app: LoanApplication) {
  const missingItems: string[] = [];
  const riskFlags: string[] = [];

  if (!app.hasConsentToUseData) {
    riskFlags.push("NO_CONSENT_FOR_DATA_USE");
    return { allowed: false, missingItems, riskFlags };
  }

  if (!app.documentsProvided.includes("bank_statements")) {
    missingItems.push("bank_statements");
  }

  if (app.organizationType === "patient" && app.requestedAmount > 50000) {
    riskFlags.push("HIGH_PATIENT_FINANCING_AMOUNT");
  }

  return { allowed: true, missingItems, riskFlags };
}

4) Run the agent and parse a structured decision

The pattern below validates input first, then asks the AutoGen agent to produce a decision package. In production you would persist both the precheck result and the model response for audit.

async function evaluateLoan(rawInput: unknown) {
  const app = LoanApplicationSchema.parse(rawInput);
  const pre = precheck(app);

  if (!pre.allowed) {
    return {
      decision: "REJECT",
      reasons: ["Consent is required before processing healthcare financing data."],
      missingItems: pre.missingItems,
      riskFlags: pre.riskFlags,
    };
  }

  const prompt = `
Evaluate this healthcare loan application:

${JSON.stringify({
  ...app,
  complianceContext: {
    jurisdictionalReviewRequired: true,
    dataResidencyRequired: true,
  },
  precheck: pre,
}, null, 2)}

Return only JSON matching the schema.
`;

  const result = await underwritingAgent.run(prompt);
  const text = String(result.output ?? result);

  return JSON.parse(text);
}

evaluateLoan({
  applicantId: "app_123",
  organizationType: "clinic",
  requestedAmount: 250000,
  annualRevenue: 500000,
  monthlyDebtService: 12000,
  state: "CA",
  hasConsentToUseData: true,
  documentsProvided: ["bank_statements", "tax_returns"],
}).then(console.log);

Production Considerations

  • Keep PHI out of prompts by default

All health-related identifiers should be tokenized or replaced with internal IDs before reaching the model. If you must include sensitive fields for underwriting logic, gate them behind explicit consent and log the access event.
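A minimal sketch of that tokenization step, assuming a server-side token map and illustrative field names (`applicantName`, `ssn`), might look like this:

```typescript
// Replace direct identifiers with opaque tokens before any field reaches
// a prompt. The token map stays server-side, so the model never sees the
// raw value. Field names here are illustrative.
const tokenStore = new Map<string, string>();

function tokenizeField(field: string, value: string): string {
  const token = `tok_${field}_${tokenStore.size + 1}`;
  tokenStore.set(token, value); // mapping is kept out of the prompt
  return token;
}

function redactForPrompt(app: { applicantName: string; ssn?: string; state: string }) {
  return {
    applicantRef: tokenizeField("name", app.applicantName),
    ssnRef: app.ssn ? tokenizeField("ssn", app.ssn) : undefined,
    state: app.state, // coarse, non-identifying fields can pass through
  };
}
```

In production the token map would live in a persistent store with access logging, not an in-memory `Map`, so that every de-tokenization is itself an auditable event.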

  • Pin residency to approved regions

If your lending workflow touches patient or provider data in regulated jurisdictions, keep inference in approved cloud regions. Document where prompts are processed and where transcripts are stored.
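One way to make that policy enforceable rather than documentation-only is a fail-closed allow-list checked before any model client is constructed. This is a sketch; the region names are placeholders for whatever your compliance team approves:

```typescript
// Illustrative allow-list: fail closed if the deployment region is not
// approved for PHI-adjacent inference. Region names are placeholders.
const APPROVED_REGIONS = new Set(["us-east-1", "us-west-2"]);

function assertApprovedRegion(region: string): void {
  if (!APPROVED_REGIONS.has(region)) {
    throw new Error(`Region ${region} is not approved for healthcare lending workloads`);
  }
}
```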

  • Store full audit trails

Persist input payload hashes, rule-check results, model outputs, timestamps, reviewer overrides, and final disposition. Healthcare lenders need defensible records for internal audit and external compliance review.
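As a sketch of that record, hashing the raw payload avoids storing sensitive input verbatim while still letting you prove later exactly what was evaluated (the record shape here is illustrative):

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record: hash the raw payload instead of storing it,
// and keep everything needed to reconstruct the decision later.
interface AuditRecord {
  applicantId: string;
  inputHash: string;        // SHA-256 of the raw input payload
  precheck: unknown;        // deterministic rule-check result
  modelOutput: unknown;     // parsed decision package
  reviewerOverride?: string;
  timestamp: string;
}

function buildAuditRecord(
  applicantId: string,
  rawInput: unknown,
  precheck: unknown,
  modelOutput: unknown
): AuditRecord {
  const inputHash = createHash("sha256")
    .update(JSON.stringify(rawInput))
    .digest("hex");
  return { applicantId, inputHash, precheck, modelOutput, timestamp: new Date().toISOString() };
}
```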

  • Add human override on high-risk cases

Auto-approve only low-risk applications with complete documentation. Anything involving missing consent, unusual request sizes, or ambiguous identity should route to an underwriter queue.
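That routing rule is small enough to express directly in code, which keeps it testable and out of the prompt. A minimal sketch:

```typescript
// Auto-approve only when the model approved, documentation is complete,
// and no risk flags fired; everything else goes to a human queue.
function route(
  decision: string,
  missingItems: string[],
  riskFlags: string[]
): "auto_approve" | "underwriter_queue" {
  const lowRisk =
    decision === "APPROVE" && missingItems.length === 0 && riskFlags.length === 0;
  return lowRisk ? "auto_approve" : "underwriter_queue";
}
```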

Common Pitfalls

  • Letting the LLM decide on compliance

Bad pattern: asking the model whether consent is valid or whether residency rules apply. Avoid this by encoding those checks in deterministic code before any model call.

  • Sending raw clinical data into prompts

A loan workflow does not need diagnosis codes or treatment notes unless there is a documented business reason. Strip PHI down to what underwriting actually needs.

  • Skipping structured outputs

Free-form natural language makes downstream automation brittle. Force JSON output from AssistantAgent so your workflow can reliably branch on decision, missingItems, and riskFlags.
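In the main example this validation could be another zod schema; the dependency-free sketch below shows the same idea, including stripping the markdown fences that models sometimes wrap around JSON:

```typescript
// Validate the model's output before branching on it. Models sometimes
// wrap JSON in markdown fences, so strip those first, then check shape.
interface UnderwritingOutput {
  decision: "APPROVE" | "REJECT" | "NEEDS_REVIEW";
  reasons: string[];
  missingItems: string[];
  riskFlags: string[];
}

function parseDecision(text: string): UnderwritingOutput {
  const cleaned = text
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "")
    .trim();
  const data = JSON.parse(cleaned);
  const decisions = ["APPROVE", "REJECT", "NEEDS_REVIEW"];
  const isStringArray = (v: unknown) =>
    Array.isArray(v) && v.every((x) => typeof x === "string");
  if (
    !decisions.includes(data.decision) ||
    !isStringArray(data.reasons) ||
    !isStringArray(data.missingItems) ||
    !isStringArray(data.riskFlags)
  ) {
    throw new Error("Model output did not match the underwriting schema");
  }
  return data as UnderwritingOutput;
}
```

Rejecting malformed output loudly here is what lets the rest of the pipeline trust `decision`, `missingItems`, and `riskFlags` without re-checking them everywhere.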



By Cyprian Aarons, AI Consultant at Topiax.
