How to Build a Loan Approval Agent Using LangChain in TypeScript for Investment Banking

By Cyprian Aarons · Updated 2026-04-21
loan-approval · langchain · typescript · investment-banking

A loan approval agent automates the first-pass underwriting workflow: it ingests borrower data, checks policy constraints, summarizes risk signals, and produces a decision recommendation with an audit trail. For investment banking, this matters because credit decisions need to be faster without losing control over compliance, explainability, and human approval gates.

Architecture

  • Input normalization layer

    • Converts application payloads, financial statements, KYC results, and covenant data into a consistent schema.
    • Rejects incomplete requests before they reach the model.
  • Policy retrieval layer

    • Pulls internal credit policy, sector limits, exposure thresholds, and jurisdiction-specific rules.
    • Keeps the agent aligned with current bank policy instead of hardcoding rules in prompts.
  • Risk analysis chain

    • Uses an LLM to summarize borrower risk, highlight exceptions, and map facts to policy clauses.
    • Produces structured output, not free-form prose.
  • Decision engine

    • Applies deterministic rules for pass/fail conditions like leverage caps, missing docs, sanctions hits, or concentration limits.
    • Keeps final approval logic auditable.
  • Human review handoff

    • Escalates borderline cases to a credit officer.
    • Captures rationale for override or rejection.
  • Audit and logging layer

    • Stores prompts, retrieved policy snippets, model outputs, and final decisions.
    • Supports model governance and regulatory review.
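Before wiring anything to LangChain, it helps to pin down the shapes that flow between these layers. The interfaces below are a minimal sketch; every name here is illustrative, not part of LangChain or any bank system.

```typescript
// Illustrative shapes for the pipeline layers. All names are assumptions
// made for this sketch, not a fixed API.
interface NormalizedApplication {
  applicantId: string;
  payload: Record<string, unknown>; // fields merged from KYC, financials, covenants
}

interface PolicyContext {
  policyVersion: string; // versioned so every decision is reproducible
  clauses: string[];     // retrieved policy snippets fed to the model
}

interface RiskAnalysis {
  recommendation: "approve" | "reject" | "escalate";
  keyRisks: string[];
}

interface AuditRecord {
  applicantId: string;
  policyVersion: string;
  modelOutput: RiskAnalysis;
  finalDecision: "approve" | "reject" | "escalate";
  timestamp: string;
}
```

Keeping these boundaries explicit makes each layer independently testable and keeps the audit layer a straightforward consumer of the others.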

Implementation

1) Define the loan application schema

Start with a strict schema. In banking systems, loose JSON is how you end up with bad decisions and broken audit trails.

import { z } from "zod";

export const LoanApplicationSchema = z.object({
  applicantId: z.string(),
  legalEntityName: z.string(),
  country: z.string(),
  requestedAmount: z.number().positive(),
  annualRevenue: z.number().nonnegative(),
  ebitda: z.number(),
  totalDebt: z.number().nonnegative(),
  kycStatus: z.enum(["passed", "pending", "failed"]),
  sanctionsHit: z.boolean(),
  industry: z.string(),
});

export type LoanApplication = z.infer<typeof LoanApplicationSchema>;

export function calculateLeverage(app: LoanApplication): number {
  if (app.ebitda <= 0) return Number.POSITIVE_INFINITY;
  return app.totalDebt / app.ebitda;
}
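The EBITDA guard is worth a quick sanity check. The snippet below restates the leverage helper with a plain type (so it runs without zod); the sample figures are invented.

```typescript
// Standalone restatement of the leverage helper above; sample numbers invented.
type LoanFinancials = { totalDebt: number; ebitda: number };

function calculateLeverage(app: LoanFinancials): number {
  // Debt/EBITDA is meaningless at zero or negative EBITDA; returning
  // Infinity makes every downstream leverage check fail closed.
  if (app.ebitda <= 0) return Number.POSITIVE_INFINITY;
  return app.totalDebt / app.ebitda;
}

console.log(calculateLeverage({ totalDebt: 15_000_000, ebitda: 5_000_000 })); // 3
console.log(calculateLeverage({ totalDebt: 15_000_000, ebitda: 0 })); // Infinity
```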

2) Build the LangChain decision chain

Use ChatOpenAI, PromptTemplate, RunnableSequence, and StructuredOutputParser. The model should recommend a decision; your code should enforce policy.

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { z } from "zod";
import { LoanApplicationSchema, calculateLeverage } from "./schema";

const DecisionSchema = z.object({
  recommendation: z.enum(["approve", "reject", "escalate"]),
  rationale: z.string(),
  keyRisks: z.array(z.string()),
});

const parser = StructuredOutputParser.fromZodSchema(DecisionSchema);

const prompt = PromptTemplate.fromTemplate(`
You are a credit analyst for investment banking.
Use only the provided facts. Do not invent data.

Loan application:
{application}

Policy context:
- Sanctions hit => reject
- KYC pending => escalate
- Leverage above 4.5x => escalate
- Negative EBITDA => reject
- Requested amount above $25M requires senior review

Return valid JSON only.

{format_instructions}
`);

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

export async function reviewLoan(applicationInput: unknown) {
  const application = LoanApplicationSchema.parse(applicationInput);
  const leverage = calculateLeverage(application);

  const chain = RunnableSequence.from([
    async () => ({
      application: JSON.stringify({ ...application, leverage }, null, 2),
      format_instructions: parser.getFormatInstructions(),
    }),
    prompt,
    llm,
    parser,
  ]);

  const modelDecision = await chain.invoke({});

  return {
    ...modelDecision,
    leverage,
    deterministicChecks: {
      sanctionsHit: application.sanctionsHit,
      kycStatus: application.kycStatus,
      leverage,
      requestedAmount: application.requestedAmount,
    },
  };
}

3) Add hard guardrails before any approval

The LLM should never be the final authority on obvious policy violations. Deterministic checks win every time.

import { calculateLeverage, type LoanApplication } from "./schema";

type FinalDecision = {
  decision: "approve" | "reject" | "escalate";
};

export function applyBankPolicy(app: LoanApplication): FinalDecision {
  const leverage = calculateLeverage(app);

  // Hard rejections: non-negotiable policy violations.
  if (app.sanctionsHit) return { decision: "reject" };
  if (app.kycStatus === "failed") return { decision: "reject" };
  if (app.ebitda <= 0) return { decision: "reject" };

  // Escalations: a credit officer must review.
  if (app.kycStatus === "pending") return { decision: "escalate" };
  if (leverage > 4.5) return { decision: "escalate" };
  if (app.requestedAmount > 25_000_000) return { decision: "escalate" };

  return { decision: "approve" };
}

4) Combine model output with policy output

This is the production pattern. The model explains; policy decides.

// Adjust the import paths to wherever reviewLoan and applyBankPolicy live.
import { LoanApplicationSchema } from "./schema";
import { reviewLoan } from "./review";
import { applyBankPolicy } from "./policy";

export async function processLoanApplication(input: unknown) {
  const app = LoanApplicationSchema.parse(input);
  const aiReview = await reviewLoan(app);
  const policyDecision = applyBankPolicy(app);

  // Policy can always block; the model can only tighten an approval.
  const finalDecision =
    policyDecision.decision === "approve"
      ? aiReview.recommendation
      : policyDecision.decision;

  return {
    applicantId: app.applicantId,
    aiReview,
    policyDecision,
    finalDecision,
  };
}
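The merge rule at the heart of this function can be isolated as a pure helper, which makes the precedence trivially testable. This is a sketch; the function name is an assumption, not part of the code above.

```typescript
// Pure sketch of the decision-merge rule: policy can always block, and the
// model can only tighten (never loosen) a policy approval.
type Decision = "approve" | "reject" | "escalate";

function mergeDecisions(policy: Decision, ai: Decision): Decision {
  if (policy !== "approve") return policy; // policy veto always wins
  return ai;                               // policy approved: model may still tighten
}

console.log(mergeDecisions("approve", "approve"));  // "approve"
console.log(mergeDecisions("approve", "escalate")); // "escalate" (model flags a risk)
console.log(mergeDecisions("reject", "approve"));   // "reject" (policy veto)
```

The asymmetry is deliberate: there is no path where a model "approve" overrides a policy "reject" or "escalate".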

Production Considerations

  • Deploy in-region

    • Keep inference and logs in the same jurisdiction as the borrower data.
    • For cross-border lending desks, enforce data residency by region and segregate storage per booking center.
  • Log everything needed for audit

    • Store input payload hashes, retrieved policy versions, prompt templates, model version, timestamps, and final decisions.
    • Regulators care about reproducibility more than clever prompts.
  • Add human-in-the-loop controls

    • Route all escalations and high-value loans to a credit officer.
  • Monitor drift and exception rates
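One concrete pattern for the audit point above: hash the raw payload rather than duplicating it in the audit store, and pin policy and model versions on every record. The sketch below uses Node's built-in crypto module; the version strings and field names are illustrative assumptions.

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record: payload is stored as a SHA-256 hash so the
// record proves which input produced the decision without duplicating PII.
type AuditRecord = {
  applicantId: string;
  payloadSha256: string;
  policyVersion: string; // assumption: a versioned policy store exists
  modelVersion: string;
  finalDecision: "approve" | "reject" | "escalate";
  decidedAt: string;
};

function buildAuditRecord(
  applicantId: string,
  rawPayload: unknown,
  finalDecision: AuditRecord["finalDecision"],
): AuditRecord {
  const payloadSha256 = createHash("sha256")
    .update(JSON.stringify(rawPayload))
    .digest("hex");
  return {
    applicantId,
    payloadSha256,
    policyVersion: "credit-policy-2026.04", // illustrative version label
    modelVersion: "gpt-4o-mini",
    finalDecision,
    decidedAt: new Date().toISOString(),
  };
}
```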

Common Pitfalls

  • Letting the LLM make final credit decisions

How to avoid it: Use the model for summarization and exception analysis only. Deterministic policy code must own approve/reject logic.

  • Using unstructured prompts without schema validation

How to avoid it: Parse inputs with zod and force structured outputs with StructuredOutputParser. If it cannot be parsed, fail closed.
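"Fail closed" means unparseable or off-schema model output can never become an approval. The sketch below hand-rolls the validation so it runs standalone; in the pipeline above this role is played by zod and StructuredOutputParser.

```typescript
// Fail-closed parsing sketch: anything that does not validate becomes an
// "escalate", never a silent approve. Validator is hand-rolled for illustration.
type Decision = "approve" | "reject" | "escalate";

function parseModelDecision(raw: string): { decision: Decision; reason: string } {
  try {
    const parsed = JSON.parse(raw);
    const valid =
      parsed &&
      ["approve", "reject", "escalate"].includes(parsed.recommendation) &&
      typeof parsed.rationale === "string";
    if (!valid) return { decision: "escalate", reason: "schema mismatch" };
    return { decision: parsed.recommendation, reason: parsed.rationale };
  } catch {
    // Unparseable output is routed to a human, not approved.
    return { decision: "escalate", reason: "unparseable model output" };
  }
}

console.log(parseModelDecision('{"recommendation":"approve","rationale":"ok"}').decision); // "approve"
console.log(parseModelDecision("not json at all").decision); // "escalate"
```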

  • Ignoring compliance metadata

How to avoid it: Version your policies, store prompt traces, capture overrides from reviewers, and keep all records searchable for audit and model risk teams.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

