How to Build a Loan Approval Agent Using LlamaIndex in TypeScript for Investment Banking
A loan approval agent in investment banking is not a chatbot that says “yes” or “no.” It is a controlled decision-support system that ingests borrower documents, extracts financial signals, checks policy constraints, and produces an auditable recommendation for a credit officer or committee. The value is simple: faster underwriting, consistent policy application, and a traceable decision trail that survives compliance review.
Architecture
For this specific use case, keep the system small and explicit:
- Document ingestion layer
  - Pulls PDFs, statements, KYC files, credit memos, and covenant docs from approved storage.
  - Normalizes text before indexing.
- Vector index over internal policy and borrower data
  - Stores lending policy, sector guidelines, historical credit notes, and borrower documents.
  - Enables retrieval of the exact clauses used in a recommendation.
- Loan decision workflow
  - Orchestrates retrieval, scoring, rule checks, and response generation.
  - Keeps deterministic checks separate from LLM reasoning.
- Compliance and audit logger
  - Captures retrieved sources, prompts, model outputs, timestamps, and approver identity.
  - Required for model governance and post-trade-style review.
- Guardrail layer
  - Blocks unsupported recommendations.
  - Forces escalation when data is missing or thresholds are breached.
- Human approval interface
  - The agent recommends; a banker approves.
  - In investment banking, the final decision should remain with an authorized reviewer.
Implementation
1) Install dependencies and define the document pipeline
Use LlamaIndex in TypeScript with a retriever-backed workflow. Keep your corpus split between lending policy and borrower-specific files so you can explain every answer later.
```bash
npm install llamaindex
```
```typescript
import { Document, VectorStoreIndex } from "llamaindex";

const policyDocs = [
  new Document({
    text: `
Corporate Lending Policy:
- Minimum DSCR: 1.25x
- Maximum leverage: 4.0x EBITDA
- Escalate if covenant breach exists
- Reject if KYC is incomplete
`,
    metadata: { source: "policy", docType: "lending-policy" },
  }),
];

const borrowerDocs = [
  new Document({
    text: `
Borrower: Apex Manufacturing Ltd.
EBITDA: $12.4M
Total Debt: $41M
DSCR: 1.32x
Leverage: 3.9x
KYC Status: Complete
`,
    metadata: { source: "borrower", docType: "financial-pack" },
  }),
];

const index = await VectorStoreIndex.fromDocuments([
  ...policyDocs,
  ...borrowerDocs,
]);
```
This gives you one index for retrieval. In production, I would usually separate policy from borrower data into different indexes or namespaces to reduce accidental cross-contamination.
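One lightweight way to enforce that separation is to partition the corpus by its `source` metadata before building each index. This is a sketch: `partitionBySource` and the minimal `Doc` shape are illustrative helpers I am introducing, not LlamaIndex APIs.

```typescript
// Minimal shape matching the metadata we attach to each Document at ingestion.
type Doc = {
  text: string;
  metadata: { source: string; docType: string };
};

// Route documents into policy and borrower partitions by provenance,
// so each set can feed its own index (or vector-store namespace).
function partitionBySource(docs: Doc[]): { policy: Doc[]; borrower: Doc[] } {
  return {
    policy: docs.filter((d) => d.metadata.source === "policy"),
    borrower: docs.filter((d) => d.metadata.source === "borrower"),
  };
}
```

Each partition would then get its own `VectorStoreIndex.fromDocuments` call, so a policy lookup can never silently retrieve borrower-specific facts and vice versa.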
2) Build a retriever-backed query engine
The agent needs grounded answers. Do not let it freewheel on raw prompts when the output affects credit risk.
```typescript
import { QueryEngineTool } from "llamaindex";

const queryEngine = index.asQueryEngine({
  similarityTopK: 3,
});

const loanPolicyTool = QueryEngineTool.fromDefaults({
  queryEngine,
  name: "loan_policy_lookup",
  description:
    "Use this tool to retrieve lending policy clauses and borrower financial facts.",
});
```
This pattern keeps retrieval explicit. You can log every tool call and show which documents influenced the recommendation.
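One way to make that logging concrete is a thin wrapper around any tool's query function. This is a sketch: `ToolCallRecord` and `withCallLogging` are names I am introducing for illustration, not LlamaIndex APIs.

```typescript
type ToolCallRecord = {
  tool: string;
  query: string;
  timestamp: string;
};

// In production this would write to the audit store; an array keeps the sketch simple.
const toolCallLog: ToolCallRecord[] = [];

// Wrap an async query function so every invocation is recorded
// before the result is handed back to the agent loop.
function withCallLogging<T>(
  toolName: string,
  fn: (query: string) => Promise<T>
): (query: string) => Promise<T> {
  return async (query: string) => {
    toolCallLog.push({
      tool: toolName,
      query,
      timestamp: new Date().toISOString(),
    });
    return fn(query);
  };
}
```

The same wrapper shape works for any tool you expose, which keeps the audit surface uniform as the agent grows.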
3) Add deterministic underwriting checks before the LLM decides
Use code for rules that must not be probabilistic. The LLM should interpret results, not invent them.
```typescript
type UnderwritingInput = {
  dscr: number;
  leverage: number;
  kycComplete: boolean;
};

function runCreditChecks(input: UnderwritingInput) {
  const flags: string[] = [];
  if (input.dscr < 1.25) flags.push("DSCR below minimum threshold");
  if (input.leverage > 4.0) flags.push("Leverage above maximum threshold");
  if (!input.kycComplete) flags.push("KYC incomplete");
  return {
    approvedForReview: flags.length === 0,
    flags,
  };
}
```
Now combine rule output with retrieved evidence in a controlled prompt:
```typescript
import { OpenAI } from "llamaindex";

const llm = new OpenAI({ model: "gpt-4o-mini" });

async function generateRecommendation() {
  const check = runCreditChecks({
    dscr: 1.32,
    leverage: 3.9,
    kycComplete: true,
  });

  const response = await llm.complete({
    prompt: `
You are assisting an investment banking credit officer.
Use only the supplied facts and policy references.
If any hard rule fails, recommend escalation or rejection.

Rule check result:
${JSON.stringify(check, null, 2)}

Borrower question:
Should Apex Manufacturing Ltd. proceed to credit committee?

Return:
- Recommendation
- Key reasons
- Policy references used
- Escalation notes if any
`,
  });

  return response.text;
}

console.log(await generateRecommendation());
```
This is the core pattern: rules first, LLM second. That keeps the system defensible when compliance asks why a file was passed or escalated.
4) Persist audit evidence for review
Investment banking needs traceability across model runs. Store inputs, retrieved chunks, outputs, user identity, and timestamps in an immutable log table or event stream.
```typescript
type AuditEvent = {
  requestId: string;
  userId: string;
  timestamp: string;
  decisionText: string;
};

const auditLog: AuditEvent[] = [];

auditLog.push({
  requestId: "loan-req-10021",
  userId: "credit.officer@bank.com",
  timestamp: new Date().toISOString(),
  decisionText:
    "Recommend proceed to committee review; all hard thresholds satisfied.",
});
```
In production, replace this array with WORM storage or an append-only ledger table.
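Until WORM storage is wired in, a hash-chained log at least makes tampering detectable. This is a sketch using Node's built-in `crypto`, extending the `AuditEvent` shape above; the fixed field order inside the hash is an implementation assumption.

```typescript
import { createHash } from "node:crypto";

type ChainedAuditEvent = {
  requestId: string;
  userId: string;
  timestamp: string;
  decisionText: string;
  prevHash: string; // hash of the previous entry; "genesis" for the first
  hash: string; // SHA-256 over this entry's fields plus prevHash
};

const chain: ChainedAuditEvent[] = [];

// Append an event, linking it to the previous entry's hash.
function appendAuditEvent(
  e: Omit<ChainedAuditEvent, "prevHash" | "hash">
): ChainedAuditEvent {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(JSON.stringify({ ...e, prevHash }))
    .digest("hex");
  const entry = { ...e, prevHash, hash };
  chain.push(entry);
  return entry;
}

// Recompute every hash in order; any edited entry breaks the chain.
function verifyChain(): boolean {
  let prev = "genesis";
  return chain.every((entry) => {
    const expected = createHash("sha256")
      .update(
        JSON.stringify({
          requestId: entry.requestId,
          userId: entry.userId,
          timestamp: entry.timestamp,
          decisionText: entry.decisionText,
          prevHash: prev,
        })
      )
      .digest("hex");
    const ok = entry.prevHash === prev && entry.hash === expected;
    prev = entry.hash;
    return ok;
  });
}
```

A reviewer can run `verifyChain` at any point; a single edited `decisionText` invalidates every downstream hash, which is the property auditors care about.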
Production Considerations
- Deployment
  - Keep the agent inside your bank’s approved network boundary.
  - If you use hosted LLMs, confirm data residency constraints before sending borrower data outside the region.
- Monitoring
  - Track retrieval quality, tool usage frequency, escalation rate, and override rate by human reviewers.
  - Alert when recommendations drift from policy outcomes or when citations are missing.
- Guardrails
  - Block final approval language unless a human signs off.
  - Force escalation when KYC is incomplete, financial statements are stale, or retrieved evidence conflicts with policy.
- Compliance
  - Log prompt versions, model versions, retrieved document IDs, and approver identity.
  - Make sure retention matches internal recordkeeping requirements for credit files and model governance.
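The sign-off guardrail can be as simple as a deterministic output filter that refuses to emit approval language without an approver identity. A sketch, with an illustrative phrase list you would tune to your templates:

```typescript
// Phrases that imply a final credit decision. The agent may only
// recommend, so these must never reach a user without human sign-off.
const APPROVAL_PHRASES = ["approved", "loan is granted", "final approval"];

// Pass model output through before display; null approver means no sign-off yet.
function guardOutput(text: string, humanApproverId: string | null): string {
  const containsApproval = APPROVAL_PHRASES.some((p) =>
    text.toLowerCase().includes(p)
  );
  if (containsApproval && !humanApproverId) {
    return "ESCALATE: output contained approval language without human sign-off.";
  }
  return text;
}
```

Because the check runs in code rather than in the prompt, it cannot be argued away by the model.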
Common Pitfalls
- Letting the LLM make the credit decision directly
  - Fix it by running hard rules in code first.
  - Use the model only to summarize evidence and draft a recommendation for review.
- Mixing policy docs with borrower docs without provenance
  - Fix it by storing metadata on every Document.
  - Always return source IDs so auditors can trace where each claim came from.
- Ignoring stale financial data
  - Fix it by adding freshness checks on statement dates and covenant reports.
  - If inputs are older than your policy window, force escalation instead of generating a normal recommendation.
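A freshness check of this kind is a few lines of deterministic code. A sketch, where the 180-day window is an assumed policy parameter:

```typescript
// Assumed policy window; set this from your credit policy, not here.
const MAX_STATEMENT_AGE_DAYS = 180;

// Returns true when a financial statement is older than the policy window.
// `asOf` is injectable so the check is testable and reproducible in audits.
function isStale(statementDate: string, asOf: Date = new Date()): boolean {
  const ageMs = asOf.getTime() - new Date(statementDate).getTime();
  return ageMs / (1000 * 60 * 60 * 24) > MAX_STATEMENT_AGE_DAYS;
}
```

Run this before the LLM step; a stale statement should route straight to escalation rather than into the recommendation prompt.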
A loan approval agent for investment banking should behave like a disciplined analyst with perfect recall and zero authority. Build it around retrieval, deterministic controls, auditability, and human approval gates. That is what makes it usable in a regulated lending workflow instead of just another demo that fails under review.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.