How to Build a Loan Approval Agent for Healthcare Using LlamaIndex in TypeScript
A loan approval agent for healthcare reviews financing requests for things like medical equipment, clinic expansion, patient payment plans, and working capital. The point is not to auto-approve debt blindly; it is to standardize intake, retrieve policy and risk context, and produce a decision with an audit trail that compliance teams can review.
Architecture
- Application intake layer
  - Receives borrower data, entity type, requested amount, purpose, and supporting documents.
  - Normalizes structured fields before they hit the agent.
- Policy and eligibility index
  - Stores lending policy docs, underwriting rules, and healthcare-specific exclusions.
  - Backed by VectorStoreIndex so the agent can retrieve the right policy passages.
- Document ingestion pipeline
  - Parses PDFs, bank statements, tax returns, and provider credentials.
  - Uses SimpleDirectoryReader or custom loaders before indexing.
- Decision agent
  - Uses LlamaIndex chat/agent tooling to answer: approve, reject, or escalate.
  - Combines retrieved policy with the application payload.
- Audit and trace store
  - Persists every input, retrieved chunk, final rationale, and decision.
  - Required for model risk management and healthcare compliance reviews.
- Guardrail layer
  - Blocks decisions when required fields are missing or PHI handling is unsafe.
  - Enforces human review for borderline cases.
Implementation
1) Install dependencies and load policy documents
Use LlamaIndex’s TypeScript packages and keep your policy corpus separate from borrower data. In healthcare lending, that separation matters because policy content can be broadly accessible while PHI must stay tightly controlled.
```bash
npm install llamaindex zod
```
```typescript
import { SimpleDirectoryReader } from "llamaindex";

async function loadPolicyDocs() {
  const reader = new SimpleDirectoryReader();
  const docs = await reader.loadData({
    directoryPath: "./data/policies",
  });
  return docs;
}
```
2) Build a vector index for underwriting policy
The agent needs fast retrieval over eligibility rules, documentation requirements, and exceptions. VectorStoreIndex.fromDocuments() is the core pattern here.
```typescript
import { VectorStoreIndex } from "llamaindex";

async function buildPolicyIndex() {
  const docs = await loadPolicyDocs();
  // Configure your embedding model through Settings in your app bootstrap.
  // Example assumes you have set API keys in environment variables.
  const index = await VectorStoreIndex.fromDocuments(docs);
  return index;
}
```
3) Create a retrieval-backed approval function
For a real workflow, I prefer deterministic orchestration around an LLM rather than a free-form “agent decides everything” setup. Retrieve the relevant policy chunks first, then ask the model to classify the request with explicit output constraints.
```typescript
import { OpenAI, Settings } from "llamaindex";
import { z } from "zod";

// Route query-engine calls through the same model recorded in the audit log.
Settings.llm = new OpenAI({ model: "gpt-4o-mini" });

const DecisionSchema = z.object({
  decision: z.enum(["approve", "reject", "escalate"]),
  reason: z.string(),
  citations: z.array(z.string()).min(1),
});

type LoanApplication = {
  applicantName: string;
  entityType: "clinic" | "hospital" | "practice" | "vendor";
  amountUsd: number;
  purpose: string;
  state: string;
};

export async function reviewLoanApplication(app: LoanApplication) {
  const index = await buildPolicyIndex();
  const queryEngine = index.asQueryEngine();

  const prompt = `
You are a healthcare lending underwriter.
Use only the retrieved policy context to decide on this application.

Application:
${JSON.stringify(app, null, 2)}

Return JSON with:
- decision: approve | reject | escalate
- reason
- citations: array of quoted policy snippets used in the decision
`;

  const response = await queryEngine.query({ query: prompt });
  const parsedText = String(response);

  // In production you should validate/repair JSON before trusting it.
  const jsonMatch = parsedText.match(/\{[\s\S]*\}/);
  if (!jsonMatch) throw new Error("Model did not return JSON");

  const result = DecisionSchema.parse(JSON.parse(jsonMatch[0]));
  return result;
}
```
4) Add audit logging and human escalation
Healthcare lending needs traceability. Store the application payload, retrieved evidence, final output, model version, and timestamp so compliance can reconstruct why a decision happened.
```typescript
import fs from "node:fs/promises";

async function auditDecision(app: LoanApplication) {
  const result = await reviewLoanApplication(app);

  await fs.appendFile(
    "./audit-log.jsonl",
    JSON.stringify({
      timestamp: new Date().toISOString(),
      application: app,
      result,
      model: "gpt-4o-mini",
      workflow: "healthcare-loan-review",
    }) + "\n"
  );

  if (result.decision === "escalate") {
    // Route to human underwriter queue here.
    console.log("Escalated for manual review");
    return;
  }

  console.log("Final decision:", result);
}
```
Production Considerations
- Compliance controls: Keep PHI out of prompts unless you have a documented legal basis and controls for HIPAA handling. Separate borrower identity data from policy retrieval data, encrypt at rest, and restrict access by role.
- Auditability: Log every retrieval hit and final rationale. For regulated lending in healthcare, you need to explain why one clinic was approved while another was escalated.
- Data residency: If your borrower data includes patient-related financial records or protected health information tied to operations, keep processing in approved regions. Make sure embeddings storage and vector indexes stay inside your residency boundary.
- Human-in-the-loop thresholds: Auto-decide only low-risk cases with complete documentation. Escalate anything involving unusual ownership structures, high exposure limits, missing financials, or ambiguous medical use cases.
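That gating logic can be made explicit as a deterministic check that runs before any model call. A minimal sketch; the dollar limit and required-document list are hypothetical, not values from the original post.

```typescript
// Hypothetical guardrail: decide whether a case is safe to auto-decide.
// The $500k limit and required-document list are illustrative assumptions.
type CaseFacts = {
  amountUsd: number;
  documentsProvided: string[];
  unusualOwnership: boolean;
};

const REQUIRED_DOCS = ["bank-statements", "tax-returns", "provider-credentials"];
const AUTO_DECIDE_LIMIT_USD = 500_000;

export function requiresHumanReview(facts: CaseFacts): boolean {
  const missingDocs = REQUIRED_DOCS.filter(
    (doc) => !facts.documentsProvided.includes(doc)
  );
  return (
    facts.unusualOwnership ||
    facts.amountUsd > AUTO_DECIDE_LIMIT_USD ||
    missingDocs.length > 0
  );
}
```

Keeping this check in plain code, rather than delegating it to the model, means the escalation policy is testable and auditable on its own.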
Common Pitfalls
- Using raw chat output as the final decision: Don’t trust unvalidated text from the model. Force structured output with a schema like zod, then reject anything that fails validation.
- Mixing PHI into general-purpose prompts: If you paste clinical details into a broad underwriting prompt, you create unnecessary compliance risk. Redact patient-level data unless it is absolutely required for the lending decision.
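One way to enforce that is a redaction pass run before any prompt is built. A minimal sketch; the patterns below are illustrative assumptions, and real PHI detection should use a vetted de-identification library rather than hand-rolled regexes.

```typescript
// Hypothetical redaction pass run before prompt construction.
// Patterns are illustrative; production PHI detection needs a vetted tool.
const PHI_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED-SSN]"],
  [/\bMRN[:\s]*\d+\b/gi, "[REDACTED-MRN]"],
  [/\b\d{2}\/\d{2}\/\d{4}\b/g, "[REDACTED-DOB]"],
];

export function redactPhi(text: string): string {
  return PHI_PATTERNS.reduce(
    (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
    text
  );
}
```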
- Skipping retrieval grounding: A loan approval agent without policy retrieval becomes a generic classifier. Always ground decisions in indexed underwriting rules so compliance can trace each outcome back to source text.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.