How to Build a Loan Approval Agent Using LangChain in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21

Tags: loan-approval, langchain, typescript, pension-funds

A loan approval agent for pension funds takes a member’s application, checks policy constraints, pulls the right financial and compliance signals, and produces a recommendation with a full audit trail. For pension funds, this matters because lending decisions are not just about credit risk; they also need to respect fiduciary duty, regulatory constraints, data residency rules, and internal approval policies.

Architecture

  • Application intake layer

    • Accepts structured inputs like borrower profile, requested amount, purpose, tenure, and collateral.
    • Normalizes data before it reaches the agent.
  • Policy retrieval layer

    • Pulls pension-fund-specific lending rules from a vector store or document store.
    • Includes eligibility rules, concentration limits, prohibited sectors, and jurisdiction-specific requirements.
  • Risk evaluation tools

    • Exposes deterministic checks for debt-to-income ratio, LTV, affordability, exposure limits, and KYC/AML flags.
    • Keep these as tools instead of asking the model to “reason” them out.
  • LangChain decision agent

    • Uses ChatOpenAI, createToolCallingAgent, and AgentExecutor.
    • Produces an approval recommendation plus rationale grounded in retrieved policy and tool outputs.
  • Audit logging layer

    • Persists prompts, tool calls, retrieved policy snippets, and final decisions.
    • Required for model governance and post-decision review.
  • Compliance gate

    • Blocks final approval if mandatory checks fail.
    • Ensures human review for edge cases or policy exceptions.
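The compliance gate at the end of this pipeline can be pure code, independent of the LLM. A minimal sketch (hypothetical names and shapes; a real deployment would sit inside a workflow engine):

```typescript
// Hypothetical compliance gate: a recommendation can never become an
// approval while any mandatory check is failing.
type Decision = "APPROVE" | "REJECT" | "ESCALATE";

interface GateResult {
  decision: Decision;
  reasons: string[];
}

function complianceGate(
  recommendation: Decision,
  mandatoryChecks: Record<string, boolean>
): GateResult {
  const failed = Object.entries(mandatoryChecks)
    .filter(([, passed]) => !passed)
    .map(([name]) => name);

  if (failed.length > 0) {
    // Escalate rather than silently reject: a human reviews the exception.
    return { decision: "ESCALATE", reasons: failed.map((f) => `failed: ${f}`) };
  }
  return { decision: recommendation, reasons: ["all mandatory checks passed"] };
}
```

The gate runs after the agent, so even a confidently worded model recommendation cannot bypass a failed check.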

Implementation

1) Define the domain inputs and deterministic checks

Do not start with the LLM. Start with the rules that must never be probabilistic. Pension funds usually need fixed thresholds for affordability, exposure caps, and restricted categories.

import { z } from "zod";

export const LoanApplicationSchema = z.object({
  applicantId: z.string(),
  annualIncome: z.number().positive(),
  monthlyDebtPayments: z.number().nonnegative(),
  requestedAmount: z.number().positive(),
  collateralValue: z.number().positive(),
  purpose: z.string(),
  jurisdiction: z.string(),
});

export type LoanApplication = z.infer<typeof LoanApplicationSchema>;

export function calculateDti(app: LoanApplication) {
  return app.monthlyDebtPayments / (app.annualIncome / 12);
}

export function calculateLtv(app: LoanApplication) {
  return app.requestedAmount / app.collateralValue;
}

export function hardPolicyChecks(app: LoanApplication) {
  const dti = calculateDti(app);
  const ltv = calculateLtv(app);

  return {
    dti,
    ltv,
    passesDti: dti <= 0.35,
    passesLtv: ltv <= 0.75,
    passesPurpose:
      !["crypto", "gambling", "speculative trading"].includes(
        app.purpose.toLowerCase()
      ),
    passesJurisdiction: ["KE", "UG", "TZ", "ZA"].includes(app.jurisdiction),
  };
}
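As a quick sanity check on the arithmetic above, here are the same DTI and LTV formulas with sample numbers (thresholds as in `hardPolicyChecks`; no schema validation needed for this illustration):

```typescript
// Standalone illustration of the DTI / LTV math above.
const annualIncome = 60000;
const monthlyDebtPayments = 1500;
const requestedAmount = 30000;
const collateralValue = 50000;

const dti = monthlyDebtPayments / (annualIncome / 12); // 1500 / 5000 = 0.3
const ltv = requestedAmount / collateralValue; // 30000 / 50000 = 0.6

console.log(dti <= 0.35, ltv <= 0.75); // both pass the sample thresholds
```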

2) Build LangChain tools for policy lookup and scoring

Use tools for anything that can be made deterministic or backed by retrieval. In production, your policy tool should query a controlled repository of pension fund credit policies.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const policyLookupTool = tool(
  async ({ query }: { query: string }) => {
    // Replace with vector search or document retrieval against approved policy docs.
    if (query.toLowerCase().includes("max exposure")) {
      return "Maximum unsecured exposure per member is $25,000. Exceptions require committee approval.";
    }
    if (query.toLowerCase().includes("restricted")) {
      return "Restricted purposes include crypto trading, gambling, and unverified offshore lending.";
    }
    return "No specific rule found. Escalate to compliance.";
  },
  {
    name: "policy_lookup",
    description: "Retrieve pension fund lending policy text.",
    schema: z.object({
      query: z.string(),
    }),
  }
);

export const affordabilityTool = tool(
  async ({ annualIncome, monthlyDebtPayments }: { annualIncome: number; monthlyDebtPayments: number }) => {
    const dti = monthlyDebtPayments / (annualIncome / 12);
    return JSON.stringify({
      dti,
      status: dti <= 0.35 ? "pass" : "fail",
    });
  },
  {
    name: "affordability_check",
    description: "Calculate debt-to-income ratio.",
    schema: z.object({
      annualIncome: z.number(),
      monthlyDebtPayments: z.number(),
    }),
  }
);
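Because the policy stub above is plain keyword routing, you can unit-test that routing without LangChain in the loop. A standalone mirror of the stub's behavior (same strings as the tool, extracted into a bare function for testing):

```typescript
// Standalone mirror of the policy_lookup stub so the routing logic
// can be tested independently of the agent runtime.
function lookupPolicy(query: string): string {
  const q = query.toLowerCase();
  if (q.includes("max exposure")) {
    return "Maximum unsecured exposure per member is $25,000. Exceptions require committee approval.";
  }
  if (q.includes("restricted")) {
    return "Restricted purposes include crypto trading, gambling, and unverified offshore lending.";
  }
  return "No specific rule found. Escalate to compliance.";
}
```

When you swap the stub for vector search, keep the same fallback: an empty retrieval result should escalate, never default to "no restriction applies."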

3) Create the agent with ChatOpenAI, createToolCallingAgent, and AgentExecutor

This is the actual pattern you want in TypeScript. The model handles explanation and synthesis; tools handle facts and rules.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { LoanApplicationSchema } from "./domain";
import { policyLookupTool, affordabilityTool } from "./tools";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a loan approval assistant for a pension fund.
Follow these rules:
- Never approve if hard policy checks fail.
- Use tools for policy lookup and affordability checks.
- Return a concise recommendation with reasons.
- Flag any case needing compliance or committee review.
- Mention audit-relevant facts explicitly.`,
  ],
  ["human", "{input}"],
]);

const tools = [policyLookupTool, affordabilityTool];

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools,
});

export async function assessLoan(rawInput: unknown) {
  const app = LoanApplicationSchema.parse(rawInput);

  const result = await executor.invoke({
    input: `
Assess this loan application for a pension fund:
Applicant ID: ${app.applicantId}
Annual income: ${app.annualIncome}
Monthly debt payments: ${app.monthlyDebtPayments}
Requested amount: ${app.requestedAmount}
Collateral value: ${app.collateralValue}
Purpose: ${app.purpose}
Jurisdiction: ${app.jurisdiction}

Use affordability_check for DTI.
Use policy_lookup for any relevant lending restrictions.
Provide one of:
APPROVE | REJECT | ESCALATE
`,
  });

  return result.output;
}

Step-by-step flow

  1. Parse the application with LoanApplicationSchema.
  2. Run deterministic checks first in your service layer.
  3. If the case passes basic gates, invoke the LangChain agent.
  4. Persist the final output together with tool traces and input hashes.
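The four steps can be sketched as a thin service-layer function with the checks, agent, and audit sink injected as dependencies (hypothetical signatures, stubbed here so the control flow stands on its own):

```typescript
// Hypothetical service-layer wiring for the four steps above.
// runChecks stands in for hardPolicyChecks, runAgent for assessLoan.
type Checks = {
  passesDti: boolean;
  passesLtv: boolean;
  passesPurpose: boolean;
  passesJurisdiction: boolean;
};

async function decideLoan(
  rawInput: unknown,
  runChecks: (input: unknown) => Checks,
  runAgent: (input: unknown) => Promise<string>,
  persist: (record: object) => void
): Promise<string> {
  // Step 2: deterministic gates run before any model call.
  const checks = runChecks(rawInput);
  if (!checks.passesPurpose || !checks.passesJurisdiction) {
    // Categorical failures never reach the LLM.
    persist({ rawInput, checks, decision: "REJECT" });
    return "REJECT";
  }

  // Step 3: only plausible cases invoke the agent.
  const recommendation = await runAgent(rawInput);

  // Step 4: persist the audit record alongside the recommendation.
  persist({ rawInput, checks, recommendation });
  return recommendation;
}
```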

Production Considerations

  • Deploy in-region

    Pension funds often have strict data residency requirements. Keep PII and decision logs in approved regions only; do not send raw member data to external endpoints unless your legal basis and vendor contracts explicitly allow it.

  • Separate recommendation from decision

    Let the agent recommend APPROVE, REJECT, or ESCALATE, but keep final approval in a workflow engine or human committee when thresholds are near limits or policy exceptions are involved.

  • Log everything needed for audit

    Store:

    • input payload hash
    • retrieved policy snippets
    • tool outputs
    • model version
    • final recommendation
    • reviewer override if any
  • Add guardrails at the API boundary

    Block unsupported purposes before calling the model. For pension funds, that means screening speculative use cases, related-party lending issues, concentration breaches, and jurisdiction mismatches early.
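The audit fields listed above can be captured in a typed record. Hashing the input payload (e.g. SHA-256 via `node:crypto`) lets a log entry reference the exact submission without carrying raw PII. A minimal sketch, with illustrative field names:

```typescript
import { createHash } from "node:crypto";

// Sketch of an audit record covering the fields listed above.
interface AuditRecord {
  inputHash: string;
  policySnippets: string[];
  toolOutputs: string[];
  modelVersion: string;
  recommendation: string;
  reviewerOverride?: string;
}

// Hash the serialized payload so the log line never needs raw member data.
function hashPayload(payload: unknown): string {
  return createHash("sha256").update(JSON.stringify(payload)).digest("hex");
}
```

The same hash computed at review time proves the stored decision refers to the application actually submitted.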

Common Pitfalls

  • Letting the model infer hard rules

    Don’t ask the LLM to decide DTI thresholds or exposure caps from memory. Put those into code or governed policy retrieval so they remain stable under audit.

  • Skipping explainability

    A bare “approve/reject” is not enough for pension-fund operations. Always capture which rule passed or failed and which policy text influenced the recommendation.

  • Using unbounded prompts with raw member data

    Dumping full application records into the prompt increases privacy risk and makes audits messy. Pass only what the agent needs, redact sensitive fields where possible, and keep PII handling outside the LLM path when you can.
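One way to keep prompts bounded is an allow-list projection applied before anything reaches the prompt builder. Field names here are illustrative:

```typescript
// Hypothetical allow-list redaction: only explicitly approved fields
// survive into the prompt payload; everything else stays server-side.
function redactForPrompt(
  app: Record<string, unknown>,
  allowed: string[]
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const key of allowed) {
    if (key in app) out[key] = app[key];
  }
  return out;
}
```

An allow-list is safer than a deny-list here: a newly added sensitive field is excluded by default instead of leaking until someone remembers to block it.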


By Cyprian Aarons, AI Consultant at Topiax.