How to Build a Loan Approval Agent Using LangChain in TypeScript for Banking

By Cyprian Aarons. Updated 2026-04-21.
Tags: loan-approval, langchain, typescript, banking

A loan approval agent automates the first pass of a credit application: it extracts applicant data, checks policy rules, scores risk signals, and produces a decision recommendation with an audit trail. For banking, this matters because you want faster turnaround without losing control over compliance, explainability, and human override.

Architecture

  • Application intake

    • Receives structured loan requests from your channel layer: web, CRM, branch system, or middleware.
    • Normalizes fields like income, employment status, debt obligations, requested amount, and jurisdiction.
  • Policy retrieval

    • Pulls bank-specific lending policy from a controlled source.
    • Uses retrieval so the agent can cite current underwriting rules instead of hardcoding them.
  • Risk evaluation chain

    • Converts application data into a structured decision prompt.
    • Produces a recommendation such as approve, reject, or manual_review with reasons.
  • Guardrails and validation

    • Enforces schema validation on inputs and outputs.
    • Blocks unsupported actions like making final credit decisions without human approval where required.
  • Audit logging

    • Stores prompts, retrieved policy snippets, model output, timestamps, and decision metadata.
    • Supports internal review, regulator requests, and model governance.
  • Human-in-the-loop escalation

    • Routes borderline cases to an underwriter.
    • Keeps the agent as a decision support layer, not an uncontrolled approval engine.
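The audit logging and escalation pieces above can be sketched as a plain record type. This is an illustrative shape, not a prescribed format; field names like `correlationId` and `policyVersion` are assumptions you would align with your own governance requirements.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative audit record for one agent decision. Field names are
// assumptions; align them with your bank's governance requirements.
interface AuditRecord {
  correlationId: string;      // ties the record to the originating request
  applicantId: string;
  policyVersion: string;      // which underwriting policy text was retrieved
  promptVersion: string;
  modelOutput: string;        // raw model response, stored verbatim
  decision: "approve" | "reject" | "manual_review";
  escalated: boolean;         // true when routed to an underwriter
  timestamp: string;          // ISO 8601
}

function buildAuditRecord(
  applicantId: string,
  decision: AuditRecord["decision"],
  modelOutput: string,
): AuditRecord {
  return {
    correlationId: randomUUID(),
    applicantId,
    policyVersion: "v12",
    promptVersion: "1",
    modelOutput,
    decision,
    // Borderline outcomes go to a human reviewer, per the escalation layer.
    escalated: decision === "manual_review",
    timestamp: new Date().toISOString(),
  };
}
```

Writing one of these records per decision, keyed by correlation ID, is what makes the later "reconstruct any recommendation" requirement achievable.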

Implementation

1) Install dependencies and define the application schema

Use LangChain’s TypeScript packages plus Zod for strict validation. In banking, this is non-negotiable because bad input shape becomes bad decisions fast.

npm install langchain @langchain/openai @langchain/core zod

Define the request and response contracts first:

import { z } from "zod";

export const LoanApplicationSchema = z.object({
  applicantId: z.string().min(1),
  country: z.string().min(2),
  annualIncome: z.number().positive(),
  monthlyDebtPayments: z.number().nonnegative(),
  requestedAmount: z.number().positive(),
  employmentStatus: z.enum(["employed", "self_employed", "unemployed", "retired"]),
  creditScore: z.number().int().min(300).max(850),
});

export const LoanDecisionSchema = z.object({
  decision: z.enum(["approve", "reject", "manual_review"]),
  reason: z.string(),
  riskFlags: z.array(z.string()),
});

2) Load policy context with a retriever

For production banking systems, keep underwriting policy in a controlled document store and retrieve only the relevant sections. That keeps your prompts small and your policy versioned.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a banking loan assessment assistant.
Return only valid JSON matching this schema:
{ "decision": "approve" | "reject" | "manual_review", "reason": string, "riskFlags": string[] }

Use bank policy strictly. Do not invent facts. If information is missing or ambiguous, choose manual_review.`,
  ],
  [
    "human",
    `Applicant:
{application}

Policy:
{policy}`,
  ],
]);

const chain = RunnableSequence.from([
  prompt,
  llm,
  new StringOutputParser(),
]);

If you already have a retriever backed by vector search or keyword search, plug it in before the chain. The pattern stays the same: fetch policy context first, then ask the model to assess against that context.

3) Execute the decision flow with strict parsing

You want structured output you can validate before anything reaches downstream systems. Parse the response into your Zod schema and reject malformed outputs immediately.

import { LoanApplicationSchema, LoanDecisionSchema } from "./schemas";

async function assessLoan(applicationInput: unknown) {
  const application = LoanApplicationSchema.parse(applicationInput);

  const policyText = `
    Policy v12:
    - Maximum debt-to-income ratio is 40%.
    - Minimum credit score for auto-approval is 680.
    - Applications above $50,000 require manual review.
    - Self-employed applicants require two years of income history.
    - Any missing KYC or residency mismatch requires manual review.
    `;

  const dti =
    application.monthlyDebtPayments / (application.annualIncome / 12);

  const resultText = await chain.invoke({
    application: JSON.stringify({ ...application, debtToIncomeRatio: dti }),
    policy: policyText,
  });

  const parsed = JSON.parse(resultText);
  const decision = LoanDecisionSchema.parse(parsed);

  return {
    applicantId: application.applicantId,
    debtToIncomeRatio: dti,
    ...decision,
    policyVersion: "v12",
    reviewedAt: new Date().toISOString(),
  };
}

This gives you three useful controls:

  • Input validation before inference
  • Deterministic business metrics like DTI computed in code
  • Output validation before any workflow action
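The "deterministic metrics in code" point deserves emphasis. A small helper like this keeps the arithmetic out of the model entirely. The thresholds mirror the illustrative "Policy v12" text above, not real underwriting policy:

```typescript
// Deterministic pre-checks computed in code, never by the model.
// Thresholds mirror the illustrative "Policy v12" text above.
const MAX_DTI = 0.40;
const AUTO_APPROVE_MIN_SCORE = 680;
const MANUAL_REVIEW_AMOUNT = 50_000;

function debtToIncomeRatio(annualIncome: number, monthlyDebtPayments: number): number {
  return monthlyDebtPayments / (annualIncome / 12);
}

function hardRuleFlags(app: {
  annualIncome: number;
  monthlyDebtPayments: number;
  requestedAmount: number;
  creditScore: number;
}): string[] {
  const flags: string[] = [];
  if (debtToIncomeRatio(app.annualIncome, app.monthlyDebtPayments) > MAX_DTI) {
    flags.push("dti_above_max");
  }
  if (app.creditScore < AUTO_APPROVE_MIN_SCORE) {
    flags.push("score_below_auto_approve");
  }
  if (app.requestedAmount > MANUAL_REVIEW_AMOUNT) {
    flags.push("amount_requires_manual_review");
  }
  return flags;
}
```

Pass these flags into the prompt alongside the application so the model reasons over trusted values, and re-check them after the model responds so a hallucinated "approve" can never bypass a hard rule.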

4) Add human review routing for borderline cases

Do not let the model be the final authority on regulated lending decisions unless your legal and compliance teams have explicitly approved that operating model. Use it to recommend and explain.

async function routeDecision(applicationInput: unknown) {
  const assessment = await assessLoan(applicationInput);

  if (
    assessment.decision === "manual_review" ||
    assessment.riskFlags.length > 0
  ) {
    return {
      ...assessment,
      queue: "underwriter-review",
      actionRequired: true,
    };
  }

  return {
    ...assessment,
    queue: "straight-through-processing",
    actionRequired: false,
  };
}

That pattern keeps automation where it belongs:

  • Straight-through processing for clean cases
  • Manual review for exceptions
  • Full traceability for every outcome

Production Considerations

  • Deployment boundaries

    • Keep customer data inside approved regions to satisfy data residency requirements.
    • Use private networking between your app layer, vector store, and model endpoint where possible.
  • Monitoring

    • Log prompt version, policy version, model version, latency, output schema validity, and final disposition.
    • Track approval rates by segment so you can catch drift or unintended bias early.
  • Guardrails

    • Enforce allowlisted actions only; the agent should recommend decisions, not move funds or book loans directly.
    • Add rule-based checks for hard constraints like minimum age policies, missing KYC status, sanctions hits, or DTI thresholds.
  • Auditability

    • Store raw inputs and outputs immutably with timestamps and correlation IDs.
    • Make sure every recommendation can be reconstructed later for internal audit or regulator review.
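The allowlist guardrail is straightforward to enforce in code. A minimal sketch, assuming your agent's downstream actions are represented as string commands (the action names here are illustrative):

```typescript
// Only decision-support actions are allowed; anything that would move
// funds or book a loan is stopped before reaching downstream systems.
const ALLOWED_ACTIONS = new Set([
  "recommend_decision",
  "request_documents",
  "escalate_to_underwriter",
]);

function enforceAllowlist(action: string): string {
  if (!ALLOWED_ACTIONS.has(action)) {
    // Fail closed: unknown or privileged actions become an escalation,
    // which also leaves a trace in the review queue.
    return "escalate_to_underwriter";
  }
  return action;
}
```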

Common Pitfalls

  1. Letting the LLM calculate core financial metrics

    • Don’t ask the model to compute DTI or affordability from scratch.
    • Compute those values in TypeScript and pass them into the prompt as trusted inputs.
  2. Using free-form text outputs in production

    • If you accept plain English responses, you will eventually parse garbage.
    • Always validate model output with Zod or equivalent schema enforcement before downstream use.
  3. Hardcoding underwriting logic inside prompts

    • Prompt text is not a control system.
    • Keep policy in versioned documents or rules engines so compliance can update thresholds without redeploying code.
  4. Skipping human review on edge cases

    • Borderline applications are where regulators look first.
    • Route low-confidence or exception cases to an underwriter with the full audit trail attached.
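Pitfalls 2 and 4 combine into one defensive pattern: if the model's output fails parsing or validation, never guess; route to manual review. A minimal fail-closed sketch without any framework dependencies:

```typescript
type Decision = "approve" | "reject" | "manual_review";

interface ParsedDecision {
  decision: Decision;
  reason: string;
  riskFlags: string[];
}

// Fail closed: any malformed model output becomes a manual_review,
// with a flag the audit trail can pick up.
function parseModelOutput(raw: string): ParsedDecision {
  try {
    const value = JSON.parse(raw);
    const validDecisions: Decision[] = ["approve", "reject", "manual_review"];
    if (
      validDecisions.includes(value.decision) &&
      typeof value.reason === "string" &&
      Array.isArray(value.riskFlags)
    ) {
      return value as ParsedDecision;
    }
  } catch {
    // fall through to the safe default below
  }
  return {
    decision: "manual_review",
    reason: "Model output failed validation; routed to underwriter.",
    riskFlags: ["invalid_model_output"],
  };
}
```

In the main flow you would use the Zod schema for this check; the hand-rolled guard here just makes the fail-closed shape visible on its own.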

By Cyprian Aarons, AI Consultant at Topiax.