How to Build a Loan Approval Agent Using LangChain in TypeScript for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
loan-approval · langchain · typescript · wealth-management

A loan approval agent for wealth management takes a client’s application, pulls the right financial context, checks policy and risk rules, and produces a decision recommendation with an audit trail. It matters because wealth clients expect faster turnaround, but the firm still needs hard controls around suitability, compliance, data residency, and explainability.

Architecture

  • Input normalization layer

    • Converts raw application data into a typed internal schema.
    • Handles client identity, requested amount, collateral, income, liabilities, and jurisdiction.
  • Document retrieval layer

    • Pulls policy docs, lending guidelines, KYC notes, and product eligibility rules.
    • Uses VectorStoreRetriever so the agent grounds decisions in current firm policy.
  • Decision orchestration layer

    • Uses LangChain RunnableSequence or RunnableLambda to combine rules, retrieval, and model reasoning.
    • Keeps deterministic checks separate from LLM judgment.
  • Compliance guardrail layer

    • Blocks disallowed outputs like unsupported approval language or missing rationale.
    • Enforces audit-ready structured responses.
  • Audit logging layer

    • Stores inputs, retrieved evidence, model output, and final decision.
    • Needed for model risk management and regulatory review.
  • Human review handoff

    • Routes borderline cases to an underwriter or relationship manager.
    • Critical for high-net-worth clients where exceptions are common but must be approved explicitly.
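
Before wiring in LangChain specifics, the layers above can be sketched as a typed pipeline. The type and function names here are illustrative, not firm APIs; the point is that each layer is an independently testable function and the orchestrator only composes them.

```typescript
// Illustrative sketch of the layered flow; names are assumptions, not a fixed API.
type NormalizedApplication = {
  clientId: string;
  jurisdiction: string;
  requestedAmount: number;
};

type PolicyEvidence = { source: string; excerpt: string };

type DecisionRecord = {
  decision: "approve" | "reject" | "review";
  rationale: string;
  evidence: PolicyEvidence[];
};

// Each layer is a function; keeping them separate means deterministic checks,
// retrieval, model reasoning, and guardrails can each be unit tested alone.
type Layer<I, O> = (input: I) => Promise<O>;

interface LoanAgentPipeline {
  normalize: Layer<unknown, NormalizedApplication>;
  retrievePolicy: Layer<NormalizedApplication, PolicyEvidence[]>;
  decide: Layer<{ app: NormalizedApplication; evidence: PolicyEvidence[] }, DecisionRecord>;
  guardrails: Layer<DecisionRecord, DecisionRecord>;
  audit: Layer<DecisionRecord, void>;
}

// Run the layers in order; the final record is what downstream systems consume.
async function runPipeline(p: LoanAgentPipeline, raw: unknown): Promise<DecisionRecord> {
  const app = await p.normalize(raw);
  const evidence = await p.retrievePolicy(app);
  const decision = await p.decide({ app, evidence });
  const final = await p.guardrails(decision);
  await p.audit(final);
  return final;
}
```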

Implementation

1) Define the application schema and decision output

Use zod to keep the agent’s inputs and outputs strict. In wealth management, loose JSON is how you end up with unreviewable decisions.

import { z } from "zod";

export const LoanApplicationSchema = z.object({
  clientId: z.string(),
  jurisdiction: z.string(),
  requestedAmount: z.number().positive(),
  annualIncome: z.number().nonnegative(),
  liquidAssets: z.number().nonnegative(),
  liabilities: z.number().nonnegative(),
  creditScore: z.number().int().min(300).max(850),
  purpose: z.string(),
});

export const LoanDecisionSchema = z.object({
  decision: z.enum(["approve", "reject", "review"]),
  confidence: z.number().min(0).max(1),
  rationale: z.string(),
  policyReferences: z.array(z.string()),
});

export type LoanApplication = z.infer<typeof LoanApplicationSchema>;
export type LoanDecision = z.infer<typeof LoanDecisionSchema>;

2) Load policy documents into a retriever

This example uses a vector store retriever to ground the agent in lending policy. You can swap the embedding provider and vector DB for your environment; the pattern stays the same.

import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";

const policyDocs = [
  new Document({
    pageContent:
      "For unsecured loans above $250k, require manual underwriting review.",
    metadata: { source: "lending-policy-2025", section: "4.2" },
  }),
  new Document({
    pageContent:
      "Debt-to-income ratio must remain below 40% unless approved by credit committee.",
    metadata: { source: "lending-policy-2025", section: "3.1" },
  }),
];

const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
});

const vectorStore = await MemoryVectorStore.fromDocuments(policyDocs, embeddings);
const retriever = vectorStore.asRetriever(3);
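
For unit tests that should not call an embedding provider, a crude keyword scorer can stand in for the retriever. This is illustrative only, for exercising the chain locally, and is not a substitute for real embeddings:

```typescript
type PolicyDoc = { pageContent: string; metadata: { source: string; section: string } };

// Naive relevance: count how many query terms appear in the document text.
// Good enough to exercise the decision chain in tests; swap in the real
// vector store retriever for anything beyond that.
function keywordRetrieve(docs: PolicyDoc[], query: string, k = 3): PolicyDoc[] {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map((doc) => {
      const text = doc.pageContent.toLowerCase();
      const score = terms.filter((t) => text.includes(t)).length;
      return { doc, score };
    })
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.doc);
}
```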

3) Build the decision chain with LangChain runnables

This is the core pattern. Keep deterministic checks outside the model, retrieve policy context first, then ask the model for a structured recommendation using withStructuredOutput.

import { ChatOpenAI } from "@langchain/openai";
import {
  RunnableLambda,
  RunnableSequence,
} from "@langchain/core/runnables";
import { PromptTemplate } from "@langchain/core/prompts";
import { LoanApplicationSchema, LoanDecisionSchema } from "./schemas";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = PromptTemplate.fromTemplate(`
You are a loan approval assistant for a wealth management firm.

Client application:
{application}

Policy context:
{policyContext}

Return a decision that follows firm policy exactly.
If policy requires manual review, do not approve directly.
`);

const deterministicChecks = RunnableLambda.from(async (app: unknown) => {
  const application = LoanApplicationSchema.parse(app);

  const dti =
    application.liabilities / Math.max(application.annualIncome, 1);

  if (application.requestedAmount > application.liquidAssets * 2) {
    return {
      application,
      forceReview: true,
      reason: "Requested amount exceeds liquidity threshold",
      dti,
    };
  }

  return {
    application,
    forceReview: dti > 0.4,
    reason: dti > 0.4 ? "DTI above policy threshold" : null,
    dti,
  };
});

export async function decideLoan(appInput: unknown) {
  const precheck = await deterministicChecks.invoke(appInput);
  const docs = await retriever.invoke(JSON.stringify(precheck.application));

  const policyContext = docs
    .map((d) => `${d.pageContent} [${d.metadata.source}:${d.metadata.section}]`)
    .join("\n");

  const chain = RunnableSequence.from([
    prompt,
    llm.withStructuredOutput(LoanDecisionSchema),
  ]);

  const result = await chain.invoke({
    application: JSON.stringify(precheck.application),
    policyContext,
  });

  if (precheck.forceReview) {
    return {
      decision: "review",
      confidence: result.confidence,
      rationale: `${precheck.reason}. ${result.rationale}`,
      policyReferences: result.policyReferences,
    };
  }

  return result;
}
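
The deterministic precheck is worth unit testing in isolation, since it gates everything downstream. A pure function mirroring the same thresholds as the chain above (the 2x liquidity cap and the 0.40 DTI limit) makes that easy:

```typescript
type Precheck = { forceReview: boolean; reason: string | null; dti: number };

// Pure mirror of deterministicChecks: liquidity cap first, then the DTI limit.
// Thresholds match the values used in the decision chain above.
function precheckLoan(input: {
  requestedAmount: number;
  annualIncome: number;
  liquidAssets: number;
  liabilities: number;
}): Precheck {
  const dti = input.liabilities / Math.max(input.annualIncome, 1);
  if (input.requestedAmount > input.liquidAssets * 2) {
    return { forceReview: true, reason: "Requested amount exceeds liquidity threshold", dti };
  }
  if (dti > 0.4) {
    return { forceReview: true, reason: "DTI above policy threshold", dti };
  }
  return { forceReview: false, reason: null, dti };
}
```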

4) Add audit logging before returning a final recommendation

Wealth management teams need traceability. Log what was asked, what evidence was retrieved, what the model returned, and why the final action changed if it did.

type AuditRecord = {
  timestamp: string;
  clientId?: string;
  input: unknown;
  retrievedSources: string[];
  modelOutput: unknown;
  finalDecision: unknown;
};

export async function decideLoanWithAudit(appInput: unknown) {
  // Re-run the precheck and retrieval so the audit record captures the same
  // evidence the decision chain saw. In production, return these from
  // decideLoan instead of recomputing them here.
  const precheck = await deterministicChecks.invoke(appInput);
  const docs = await retriever.invoke(JSON.stringify(precheck.application));

  const result = await decideLoan(appInput);

  const record: AuditRecord = {
    timestamp: new Date().toISOString(),
    clientId:
      typeof appInput === "object" && appInput !== null
        ? (appInput as { clientId?: string }).clientId
        : undefined,
    input: appInput,
    retrievedSources: docs.map((d) => String(d.metadata.source)),
    modelOutput: result,
    finalDecision:
      // Defensive guard: decideLoan already downgrades forced-review cases,
      // but record an explicit override flag if an approval ever slips through.
      precheck.forceReview && result.decision === "approve"
        ? { ...result, decisionOverrideToReview: true }
        : result,
  };

  // Replace console.log with your firm's append-only audit sink.
  console.log(JSON.stringify(record));
  return record.finalDecision;
}
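
Before audit records leave the service, you will likely want to redact raw client financials. A minimal sketch; which fields count as sensitive is a firm policy decision, and this list is illustrative:

```typescript
// Redact sensitive fields before the audit record is written to a shared sink.
// The field list below is an illustrative placeholder, not firm policy.
const SENSITIVE_FIELDS = new Set(["annualIncome", "liquidAssets", "liabilities", "creditScore"]);

function redactForLogging(input: unknown): unknown {
  if (typeof input !== "object" || input === null) return input;
  return Object.fromEntries(
    Object.entries(input as Record<string, unknown>).map(([key, value]) =>
      SENSITIVE_FIELDS.has(key) ? [key, "[REDACTED]"] : [key, value],
    ),
  );
}
```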

Production Considerations

  • Keep client data in-region

    • Wealth firms often have residency constraints by jurisdiction.
    • Deploy embeddings storage and inference endpoints in approved regions only.
  • Separate hard rules from LLM reasoning

    • Credit thresholds, DTI limits, AML flags, and sanction hits should be deterministic checks.
    • The LLM should explain and classify; it should not invent policy.
  • Log every decision path

    • Store input payloads, retrieved documents, prompts, outputs, and overrides.
    • This is mandatory for auditability and model governance reviews.
  • Add human approval for exceptions

    • Any case above exposure thresholds or outside standard products should route to an underwriter.
    • Wealth clients generate edge cases; don’t let the agent auto-finalize them.
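
The exception-routing rule in the last bullet can be made explicit in code. The $1M exposure threshold and product list here are placeholders for firm-specific policy:

```typescript
// Route to a human when exposure or product type falls outside standard bounds.
// Threshold and product names are illustrative placeholders.
const STANDARD_PRODUCTS = new Set(["margin-loan", "mortgage", "lombard-loan"]);
const EXPOSURE_THRESHOLD = 1_000_000;

function requiresHumanApproval(requestedAmount: number, product: string): boolean {
  return requestedAmount > EXPOSURE_THRESHOLD || !STANDARD_PRODUCTS.has(product);
}
```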

Common Pitfalls

  • Using the LLM as the source of truth

    If you ask the model to “decide” without retrieval and rule checks, it will hallucinate policy. Fix this by grounding with retriever.invoke() and enforcing prechecks before any recommendation is accepted.

  • Returning free-form text instead of structured output

    Free-form responses are hard to audit and impossible to validate reliably. Use withStructuredOutput() plus a Zod schema so downstream systems can consume stable fields like decision, confidence, and policyReferences.

  • Ignoring jurisdiction-specific controls

    A loan workflow that works in one market may violate rules in another. Include jurisdiction in your schema early and route policy retrieval by region so compliance logic matches local requirements.
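
One way to route policy retrieval by region is a jurisdiction-to-retriever map that fails closed on unknown regions. A sketch, with the retriever type simplified; how each regional retriever is constructed depends on your deployment:

```typescript
// Map each jurisdiction to its own policy retriever so, e.g., CH clients
// are never evaluated against US-only lending rules.
type Retriever = { invoke: (query: string) => Promise<string[]> };

const retrieversByJurisdiction = new Map<string, Retriever>();

function retrieverFor(jurisdiction: string): Retriever {
  const retriever = retrieversByJurisdiction.get(jurisdiction);
  if (!retriever) {
    // Fail closed: an unknown jurisdiction is an error routed to human review,
    // never a silent fallback to a default policy set.
    throw new Error(`No policy retriever configured for jurisdiction: ${jurisdiction}`);
  }
  return retriever;
}
```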


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

