How to Build a Loan Approval Agent Using LlamaIndex in TypeScript for Fintech

By Cyprian Aarons · Updated 2026-04-21
loan-approval · llamaindex · typescript · fintech

A loan approval agent helps a lender collect applicant data, retrieve policy and product rules, evaluate eligibility, and produce a decision package that a human underwriter or downstream workflow can trust. In fintech, this matters because the agent needs to be fast, consistent, auditable, and constrained by policy — not just “smart.”

Architecture

  • Applicant intake layer

    • Accepts structured inputs like income, employment type, credit score band, existing debt, and requested amount.
    • Normalizes values before they hit the agent.
  • Policy knowledge base

    • Stores underwriting rules, product eligibility criteria, and compliance notes.
    • Indexed with LlamaIndex so the agent can retrieve the exact rule snippets used for a decision.
  • Decision engine

    • Uses a query engine or agent workflow to compare applicant facts against retrieved policy.
    • Produces an approve / refer / decline outcome with reasons.
  • Audit trail store

    • Persists every input, retrieved document chunk, model response, and final decision.
    • Required for model risk management, internal audit, and regulator review.
  • Guardrails layer

    • Enforces redaction of sensitive data, output schema validation, and deterministic decision formatting.
    • Prevents free-form explanations from leaking unsupported reasoning.
  • Human review queue

    • Handles borderline cases: thin-file applicants, policy exceptions, or low-confidence matches.
    • Keeps final authority with underwriters where required.
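The guardrails layer's output-schema validation can be sketched as a plain TypeScript type guard. The `LoanDecision` shape below is illustrative (it mirrors the approve / refer / decline outcome described above), not a fixed LlamaIndex contract:

```typescript
// Illustrative decision contract enforced by the guardrails layer.
type LoanDecision = {
  decision: "approve" | "refer" | "decline";
  reasons: string[];
  policy_refs: string[];
};

// Narrowing type guard: reject any model output that does not match the contract,
// so free-form or unsupported decisions never reach downstream systems.
function isLoanDecision(value: unknown): value is LoanDecision {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    (v.decision === "approve" ||
      v.decision === "refer" ||
      v.decision === "decline") &&
    Array.isArray(v.reasons) &&
    v.reasons.every((r) => typeof r === "string") &&
    Array.isArray(v.policy_refs) &&
    v.policy_refs.every((p) => typeof p === "string")
  );
}
```

Anything that fails the guard can be routed to the human review queue instead of being persisted as a decision.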

Implementation

1) Install dependencies and set up your policy corpus

Use LlamaIndex in TypeScript with a small policy corpus first: underwriting rules, KYC requirements, and product constraints. In production you would load these from versioned documents in object storage or a CMS.

```shell
npm install llamaindex zod
```

```typescript
import { Document } from "llamaindex";

export const policyDocs = [
  new Document({
    text: `
      Personal Loan Policy v3:
      - Minimum monthly income: $3,000
      - Maximum debt-to-income ratio: 45%
      - Minimum employment tenure: 6 months
      - Decline if applicant is on sanctions list
      - Refer to manual review if credit score is below 620 but above 580
    `,
    metadata: { docType: "policy", version: "v3", jurisdiction: "US" },
  }),
];
```

2) Build a vector index over underwriting rules

The agent should retrieve only relevant policy chunks before making any decision. That gives you traceability: you can show exactly which rule was used.

```typescript
import {
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";
import { policyDocs } from "./policyDocs";

// Builds the policy index. In production, persist the index and reuse it
// across requests instead of re-embedding the corpus on every call.
export async function buildIndex() {
  const storageContext = await storageContextFromDefaults();
  return VectorStoreIndex.fromDocuments(policyDocs, { storageContext });
}
```

3) Create a deterministic decision function around retrieval

Do not let the model “invent” lending logic. Use retrieval for grounding, then apply explicit business rules in code. This keeps the final decision explainable and easier to validate in testing.

```typescript
import { z } from "zod";
import { buildIndex } from "./buildIndex";

const ApplicantSchema = z.object({
  monthlyIncome: z.number(),
  debtToIncomeRatio: z.number(),
  employmentTenureMonths: z.number(),
  creditScore: z.number(),
});

type Applicant = z.infer<typeof ApplicantSchema>;

export async function evaluateLoanApplicant(applicantInput: unknown) {
  // Validate the input shape before anything reaches retrieval or the model.
  const applicant: Applicant = ApplicantSchema.parse(applicantInput);

  const index = await buildIndex();
  const queryEngine = index.asQueryEngine();

  const prompt = `
Applicant facts:
- Monthly income: ${applicant.monthlyIncome}
- Debt-to-income ratio: ${applicant.debtToIncomeRatio}
- Employment tenure months: ${applicant.employmentTenureMonths}
- Credit score: ${applicant.creditScore}

Using only the retrieved underwriting policy rules, return:
1) decision: approve | refer | decline
2) reasons: string[]
3) policy_refs: string[]
`;

  // The query engine retrieves the relevant policy chunks and grounds the
  // answer in them, which is what makes the decision traceable.
  const response = await queryEngine.query({ query: prompt });
  return response.toString();
}
```

That pattern is useful when you want LlamaIndex to handle retrieval while your application keeps control of the actual lending logic. If you want stricter control, replace the free-text response with a JSON schema and validate it before writing anything downstream.
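Keeping the lending logic in code might look like the sketch below, which encodes the Personal Loan Policy v3 thresholds from the sample corpus directly. The field names and the handling of the 580/620 boundary are assumptions for illustration:

```typescript
type Decision = "approve" | "refer" | "decline";

interface ApplicantFacts {
  monthlyIncome: number;
  debtToIncomeRatio: number; // expressed as a percentage, e.g. 42 for 42%
  employmentTenureMonths: number;
  creditScore: number;
  onSanctionsList: boolean;
}

// Explicit business rules from Personal Loan Policy v3. The LLM only
// retrieves and explains policy text; it never chooses the outcome.
function applyPolicyV3(a: ApplicantFacts): { decision: Decision; reasons: string[] } {
  if (a.onSanctionsList) {
    return { decision: "decline", reasons: ["Applicant is on sanctions list"] };
  }

  const reasons: string[] = [];
  if (a.monthlyIncome < 3000) reasons.push("Monthly income below $3,000 minimum");
  if (a.debtToIncomeRatio > 45) reasons.push("Debt-to-income ratio above 45% maximum");
  if (a.employmentTenureMonths < 6) reasons.push("Employment tenure below 6 months");
  if (a.creditScore < 580) reasons.push("Credit score below 580 floor");
  if (reasons.length > 0) return { decision: "decline", reasons };

  if (a.creditScore < 620) {
    return { decision: "refer", reasons: ["Credit score in 580-619 manual review band"] };
  }
  return { decision: "approve", reasons: ["All Policy v3 thresholds met"] };
}
```

Because the thresholds live in code, every branch can be unit-tested and validated independently of the model.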

4) Add an auditable wrapper for compliance

For fintech, every decision must be reproducible. Store inputs, retrieved context IDs, model output, timestamp, and the code version that made the call.

```typescript
import fs from "node:fs/promises";
import crypto from "node:crypto";
import { evaluateLoanApplicant } from "./evaluateLoanApplicant";

export async function decideWithAudit(applicantInput: unknown) {
  const result = await evaluateLoanApplicant(applicantInput);

  const auditRecord = {
    requestId: crypto.randomUUID(),
    timestamp: new Date().toISOString(),
    applicantInput,
    result,
    serviceVersion: process.env.SERVICE_VERSION ?? "dev",
    region: process.env.DATA_REGION ?? "us-east-1",
  };

  // Append-only JSONL: one decision record per line for replay and audit.
  await fs.appendFile("./audit-log.jsonl", JSON.stringify(auditRecord) + "\n");

  return result;
}
```
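An append-only file can still be silently edited, so audit records are often made tamper-evident by chaining them with a hash. This is a sketch using Node's `node:crypto`, not a full ledger; the record shape is illustrative:

```typescript
import crypto from "node:crypto";

interface ChainedRecord {
  payload: object;
  previousHash: string;
  hash: string;
}

// Each entry's hash covers its own payload plus the previous hash, so
// modifying or deleting any earlier record breaks every later link.
function chainAuditRecord(payload: object, previousHash: string): ChainedRecord {
  const hash = crypto
    .createHash("sha256")
    .update(previousHash + JSON.stringify(payload))
    .digest("hex");
  return { payload, previousHash, hash };
}

// Recompute every link from the genesis value; false means tampering.
function verifyChain(records: ChainedRecord[]): boolean {
  let prev = "genesis";
  for (const r of records) {
    const expected = crypto
      .createHash("sha256")
      .update(prev + JSON.stringify(r.payload))
      .digest("hex");
    if (r.previousHash !== prev || r.hash !== expected) return false;
    prev = r.hash;
  }
  return true;
}
```

At review time, an auditor re-runs `verifyChain` over the stored log before trusting any individual record.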

Production Considerations

  • Data residency

    • Keep applicant PII and audit logs in-region.
    • If your bank requires EU-only or US-only processing, pin both storage and model endpoints to that jurisdiction.
  • Compliance and explainability

    • Version your policy documents and persist the exact chunk IDs returned by retrieval.
    • Underwriters should be able to reconstruct why an application was referred or declined.
  • Monitoring

    • Track approval rate drift by segment: income band, geography, product type.
    • Alert on spikes in manual review referrals or sudden changes in retrieved policy coverage.
  • Guardrails

    • Redact SSNs, account numbers, and government IDs before sending text into any LLM call.
    • Validate outputs against a strict schema so the agent cannot emit unsupported decisions or free-form advice.
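A minimal redaction pass for US SSNs and long digit runs might look like the following. The patterns are illustrative only; a real deployment needs broader, locale-aware coverage and should be tested against production data shapes:

```typescript
// Replace SSN-shaped values and long digit sequences before any text
// (prompts, retrieved chunks, logs) reaches an LLM call.
const SSN_PATTERN = /\b\d{3}-\d{2}-\d{4}\b/g;
const LONG_DIGIT_RUN = /\b\d{9,}\b/g; // account numbers, raw SSNs, government IDs

function redactPII(text: string): string {
  return text
    .replace(SSN_PATTERN, "[REDACTED-SSN]")
    .replace(LONG_DIGIT_RUN, "[REDACTED-ID]");
}
```

Short numeric values such as incomes and scores pass through untouched, so the decision-relevant facts survive redaction.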

Common Pitfalls

  • Using the model as the source of truth

    Don’t ask the LLM to invent lending thresholds. Put thresholds in code or policy docs indexed by LlamaIndex. The model should retrieve and summarize; your service should decide.

  • Skipping audit metadata

    If you do not store document versions, timestamps, request IDs, and service versions, you cannot defend decisions later. In regulated lending flows that is a serious operational gap.

  • Mixing PII with broad retrieval scopes

    Do not dump raw customer records into general-purpose indexes. Separate policy knowledge from customer data stores, minimize fields sent to retrieval, and keep sensitive data out of prompts unless it is strictly required for the decision.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
