How to Build a Loan Approval Agent Using LlamaIndex in TypeScript for Payments
A loan approval agent for payments takes a borrower application, pulls the right policy and risk context, reasons over it, and returns a decision path that an underwriter or operations team can audit. It matters because payment-linked lending has tighter compliance, faster turnaround expectations, and a higher cost of getting decisions wrong.
Architecture
- Application intake layer: receives borrower data from your payments stack, including identity, transaction history, repayment behavior, chargebacks, merchant data, and requested amount.
- Document retrieval layer: loads policy docs, credit rules, KYC/AML procedures, and product eligibility criteria into a VectorStoreIndex.
- Decision engine: uses a QueryEngine or ChatEngine to map the application against policy context and produce a structured recommendation.
- Risk and compliance guardrails: enforces hard rules before any LLM call, such as sanctions hits, missing KYC, residency restrictions, and minimum score thresholds.
- Audit logging layer: stores the input payload, retrieved evidence, model output, and final decision for regulators and internal review.
- Human override path: routes borderline cases to an underwriter with the full evidence trail.
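To make the layers concrete, here is a minimal sketch of the record a decision could accumulate as it moves through the pipeline. The type and field names (DecisionRecord, retrievedEvidence, and so on) are illustrative assumptions, not part of any LlamaIndex API:

type DecisionOutcome = "APPROVE" | "REVIEW" | "REJECT";

// Illustrative audit record tying the layers together (assumed shape).
type DecisionRecord = {
  applicationId: string;          // from the intake layer
  inputPayloadHash: string;       // hash of borrower data, not raw PII
  retrievedEvidence: string[];    // chunk IDs from the retrieval layer
  modelOutput: string;            // raw decision-engine response
  finalDecision: DecisionOutcome; // after guardrails and any override
  reviewedBy?: string;            // set when the human override path fires
  decidedAt: string;              // ISO timestamp for the audit log
};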
Implementation
1) Install the TypeScript packages
Use the llamaindex package plus an embedding model provider; dotenv loads credentials, and zod is used later to validate model output. For production payments systems, keep the LLM separate from deterministic checks.
npm install llamaindex zod dotenv
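The code below loads credentials with dotenv, so put your OpenAI key in a local .env file; OPENAI_API_KEY is the variable the llamaindex OpenAI classes read by default:

# .env (never commit this file)
OPENAI_API_KEY=sk-your-key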
2) Load policy documents into a vector index
This example uses Document, VectorStoreIndex, and Settings. The pattern is simple: ingest policies once, query them many times.
import "dotenv/config";
import {
Document,
VectorStoreIndex,
Settings,
OpenAIEmbedding,
} from "llamaindex";
Settings.embedModel = new OpenAIEmbedding({
model: "text-embedding-3-small",
});
async function buildPolicyIndex() {
const docs = [
new Document({
text: `
Loan approval policy:
- Reject if KYC is incomplete.
- Reject if borrower is on sanctions list.
- Approve only if repayment ratio >= 0.35 and chargeback rate < 0.05.
- Escalate to manual review if monthly payment exceeds 20% of average monthly inflow.
`,
metadata: { source: "loan-policy-v1" },
}),
new Document({
text: `
Payments compliance policy:
- Store audit logs for all decisions.
- Keep customer data within approved regions.
- Do not use prohibited attributes in decisioning.
`,
metadata: { source: "payments-compliance-v2" },
}),
];
return await VectorStoreIndex.fromDocuments(docs);
}
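To actually ingest once and query many times, persist the index instead of rebuilding it per request. A minimal sketch, assuming llamaindex's storageContextFromDefaults helper and a local ./storage directory:

import { storageContextFromDefaults } from "llamaindex";

// Variant of buildPolicyIndex that persists vectors under ./storage,
// so later runs reload the index instead of re-embedding policies.
async function buildPolicyIndexPersistent(docs: Document[]) {
  const storageContext = await storageContextFromDefaults({
    persistDir: "./storage",
  });
  return await VectorStoreIndex.fromDocuments(docs, { storageContext });
}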
3) Add deterministic pre-checks before retrieval
This is where payments-specific risk belongs. Do not ask the model to decide sanctions or KYC status; make those hard gates in code.
type LoanApplication = {
  customerId: string;
  kycComplete: boolean;
  sanctionsHit: boolean;
  monthlyInflow: number;
  requestedMonthlyPayment: number;
  repaymentRatio: number;
  chargebackRate: number;
};

// Hard gates run before any retrieval or LLM call; each returns a
// deterministic, auditable reason.
function preCheck(app: LoanApplication) {
  if (!app.kycComplete) {
    return { decision: "REJECT", reason: "KYC incomplete" };
  }
  if (app.sanctionsHit) {
    return { decision: "REJECT", reason: "Sanctions match" };
  }
  if (app.repaymentRatio < 0.35) {
    return { decision: "REJECT", reason: "Repayment ratio below threshold" };
  }
  if (app.chargebackRate >= 0.05) {
    return { decision: "REJECT", reason: "Chargeback rate too high" };
  }
  if (app.requestedMonthlyPayment > app.monthlyInflow * 0.2) {
    return { decision: "REVIEW", reason: "Payment burden too high" };
  }
  return { decision: "PASS", reason: "Eligible for LLM review" };
}
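For example, a sanctions match short-circuits before any index or model is touched:

const blocked = preCheck({
  customerId: "cust_999",
  kycComplete: true,
  sanctionsHit: true,
  monthlyInflow: 9000,
  requestedMonthlyPayment: 900,
  repaymentRatio: 0.5,
  chargebackRate: 0.01,
});
// => { decision: "REJECT", reason: "Sanctions match" }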
4) Query the index and produce an auditable recommendation
Use index.asQueryEngine() and keep the prompt narrow. The model should explain how it mapped evidence to policy, not invent financial logic.
import { OpenAI } from "llamaindex";

async function evaluateLoan(app: LoanApplication) {
  const gate = preCheck(app);
  if (gate.decision !== "PASS") return gate;

  const index = await buildPolicyIndex();

  // Temperature 0 keeps the policy-mapping step as deterministic as the model allows.
  Settings.llm = new OpenAI({
    model: "gpt-4o-mini",
    temperature: 0,
  });

  const queryEngine = index.asQueryEngine({
    similarityTopK: 2,
  });

  const prompt = `
Given this loan application:
${JSON.stringify(app, null, 2)}

Decide whether this should be APPROVE or REVIEW.
Only use the policy context returned by retrieval.
Return:
1. Decision
2. Short rationale
3. Policy citations
`;

  // The query param is named "query" in LlamaIndex.TS.
  const response = await queryEngine.query({ query: prompt });

  return {
    decisionText: response.toString(),
    auditTrail: {
      customerId: app.customerId,
      preCheckResult: gate,
      // Record the chunks the engine actually retrieved, not a hardcoded list.
      policySourcesUsed: response.sourceNodes?.map(
        (n) => n.node.metadata?.source
      ),
    },
  };
}
(async () => {
  const result = await evaluateLoan({
    customerId: "cust_123",
    kycComplete: true,
    sanctionsHit: false,
    monthlyInflow: 12000,
    requestedMonthlyPayment: 1800,
    repaymentRatio: 0.43, // precomputed upstream from repayment history
    chargebackRate: 0.02,
  });
  console.log(result);
})();
Production Considerations
- Keep compliance checks outside the model. Sanctions screening, KYC completeness, residency constraints, and product eligibility should be deterministic code paths. The LLM should only interpret policy text after those gates pass.
- Log every decision with evidence. Persist application payload hashes, retrieved chunk IDs, final output, model version, prompt version, and timestamp (see the sketch after this list). For payments teams this is non-negotiable for auditability and dispute handling.
- Control data residency. If borrower data cannot leave a region, keep embeddings, vector stores, and LLM endpoints inside that boundary. Avoid sending raw payment transaction histories to external APIs unless your legal/compliance team has signed off.
- Add human review thresholds. Borderline affordability cases should route to underwriting rather than auto-decisioning. A good rule is to auto-reject obvious failures, auto-pass only low-risk clean cases, and escalate everything else.
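As a concrete version of the logging point above, here is a minimal sketch using Node's built-in crypto module. The AuditEntry shape and writeAuditLog function are assumptions; in production the sink would be an append-only store, not the console:

import { createHash } from "node:crypto";

// Hash the payload so the log proves what was decided on without storing raw PII.
function hashPayload(payload: unknown): string {
  return createHash("sha256").update(JSON.stringify(payload)).digest("hex");
}

type AuditEntry = {
  payloadHash: string;
  retrievedChunkIds: string[];
  modelVersion: string;
  promptVersion: string;
  output: string;
  decidedAt: string;
};

// Assumed persistence helper; swap in your append-only store.
async function writeAuditLog(entry: AuditEntry): Promise<void> {
  console.log("audit", JSON.stringify(entry)); // placeholder sink
}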
Common Pitfalls
- Using the LLM for hard compliance decisions. Mistake: asking the model whether a borrower is sanctioned or KYC-complete. Fix: do those checks in code before any retrieval or generation step.
- Letting prompts drift into open-ended advice. Mistake: prompting for "helpful recommendations" without strict output shape. Fix: force a small schema like APPROVE | REVIEW | REJECT, plus citations from retrieved policy chunks (a validation sketch follows this list).
- Ignoring audit requirements. Mistake: saving only the final answer. Fix: store input features, retrieval results from VectorStoreIndex, model version, prompt text, and final rationale so operations can reconstruct every decision later.
- Mixing sensitive payment data into uncontrolled contexts. Mistake: dumping raw transaction logs into prompts or external telemetry. Fix: redact PII first, tokenize account identifiers, and keep sensitive datasets inside approved infrastructure with region controls enabled.
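The zod package installed in step 1 is what enforces that output shape. A minimal sketch, assuming the model is instructed to reply with JSON matching this schema:

import { z } from "zod";

// Strict decision shape; anything else fails fast instead of flowing downstream.
const DecisionSchema = z.object({
  decision: z.enum(["APPROVE", "REVIEW", "REJECT"]),
  rationale: z.string(),
  citations: z.array(z.string()),
});

function parseDecision(raw: string) {
  // Throws a descriptive error if the model drifted from the schema.
  return DecisionSchema.parse(JSON.parse(raw));
}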
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.