# How to Build a Loan Approval Agent Using LangChain in TypeScript for Fintech
A loan approval agent helps a fintech decide whether an application should move forward, be declined, or be escalated for human review. It matters because lending decisions need to be fast, consistent, auditable, and compliant with policy — especially when you’re dealing with KYC data, income verification, credit policy rules, and regulator scrutiny.
## Architecture

- **Application intake layer**
  - Receives applicant data from your API or workflow engine.
  - Normalizes fields like income, debt, employment status, and requested amount.
- **Policy retrieval layer**
  - Pulls the latest lending policy, risk thresholds, and product rules.
  - Usually backed by a vector store or document store so policy changes don’t require code changes.
- **Decisioning agent**
  - Uses LangChain to reason over the application plus policy context.
  - Produces a structured output such as `approve`, `decline`, or `manual_review`.
- **Guardrail and validation layer**
  - Enforces schema validation, forbidden attributes, and deterministic business rules.
  - Prevents the model from making unsupported decisions outside policy bounds.
- **Audit logging layer**
  - Stores the input snapshot, retrieved policy references, model output, and final decision.
  - Needed for compliance reviews, dispute handling, and model governance.
- **Human review handoff**
  - Routes borderline cases to underwriters.
  - Keeps high-risk or ambiguous cases out of fully automated approval.
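The intake layer above can be sketched as a small normalization step. This is a dependency-free illustration, not part of LangChain: the `RawApplication` shape and the `normalizeApplication` helper are assumptions about how your API might deliver data, chosen to match the `LoanApplication` interface used later in this post.

```typescript
// Raw payload as it might arrive from an API: numbers may come in as strings.
interface RawApplication {
  applicantId: string;
  annualIncome: string | number;
  monthlyDebt: string | number;
  requestedAmount: string | number;
  employmentStatus: string;
  creditScore?: string | number;
}

interface NormalizedApplication {
  applicantId: string;
  annualIncome: number;
  monthlyDebt: number;
  requestedAmount: number;
  employmentStatus: "employed" | "self_employed" | "unemployed";
  creditScore?: number;
}

// Coerce numeric strings and reject anything that is not a finite number.
function toNumber(value: string | number, field: string): number {
  const n = typeof value === "number" ? value : Number(value);
  if (!Number.isFinite(n)) throw new Error(`Invalid numeric field: ${field}`);
  return n;
}

function normalizeApplication(raw: RawApplication): NormalizedApplication {
  const statuses = ["employed", "self_employed", "unemployed"] as const;
  const status = statuses.find((s) => s === raw.employmentStatus);
  if (!status) throw new Error(`Unknown employmentStatus: ${raw.employmentStatus}`);
  return {
    applicantId: raw.applicantId,
    annualIncome: toNumber(raw.annualIncome, "annualIncome"),
    monthlyDebt: toNumber(raw.monthlyDebt, "monthlyDebt"),
    requestedAmount: toNumber(raw.requestedAmount, "requestedAmount"),
    employmentStatus: status,
    creditScore:
      raw.creditScore === undefined
        ? undefined
        : toNumber(raw.creditScore, "creditScore"),
  };
}
```

Failing fast here means the downstream layers only ever see clean, typed data.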
## Implementation

### 1. Define the decision schema

For fintech, don’t let the model return free-form text. Use `zod` with LangChain’s structured output so every decision is machine-checkable.
```typescript
import { z } from "zod";

export const LoanDecisionSchema = z.object({
  decision: z.enum(["approve", "decline", "manual_review"]),
  confidence: z.number().min(0).max(1),
  reasons: z.array(z.string()).min(1),
  policyReferences: z.array(z.string()).default([]),
});

export type LoanDecision = z.infer<typeof LoanDecisionSchema>;

export interface LoanApplication {
  applicantId: string;
  annualIncome: number;
  monthlyDebt: number;
  requestedAmount: number;
  employmentStatus: "employed" | "self_employed" | "unemployed";
  creditScore?: number;
}
```
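If you want a sanity check at system boundaries without pulling in zod, the same constraints can be expressed as a plain TypeScript guard. This is a dependency-free sketch mirroring `LoanDecisionSchema`; in the actual pipeline you would call `LoanDecisionSchema.safeParse` instead.

```typescript
type Decision = "approve" | "decline" | "manual_review";

interface LoanDecisionShape {
  decision: Decision;
  confidence: number;
  reasons: string[];
  policyReferences: string[];
}

// Mirrors LoanDecisionSchema: enum membership, confidence in [0, 1],
// at least one reason, and policyReferences as an array when present.
function isValidLoanDecision(value: unknown): value is LoanDecisionShape {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    ["approve", "decline", "manual_review"].includes(v.decision as string) &&
    typeof v.confidence === "number" &&
    v.confidence >= 0 &&
    v.confidence <= 1 &&
    Array.isArray(v.reasons) &&
    v.reasons.length >= 1 &&
    v.reasons.every((r) => typeof r === "string") &&
    Array.isArray(v.policyReferences ?? [])
  );
}
```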
### 2. Build a prompt that forces policy-based reasoning

Use `ChatPromptTemplate` and keep the instructions narrow. In lending workflows, the model should explain decisions using only approved factors.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

export const loanPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a loan decisioning assistant for a fintech lender.
Use only the provided application data and lending policy context.
Do not mention protected attributes.
If information is missing or ambiguous, return manual_review.`,
  ],
  [
    "human",
    `Application:
{application}

Policy context:
{policyContext}

Return a structured decision.`,
  ],
]);
```
### 3. Wire the LangChain model with structured output

This is the core pattern. Use a chat model that supports tool/structured output behavior through `withStructuredOutput`. The result is typed and easier to validate before writing to your audit log.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { RunnableSequence } from "@langchain/core/runnables";
import { LoanDecisionSchema } from "./schema";
import { loanPrompt } from "./prompt";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Bind the zod schema so the model returns a typed LoanDecision.
const decisionModel = model.withStructuredOutput(LoanDecisionSchema);

export const loanApprovalChain = RunnableSequence.from([
  // Pass the prompt variables through; applicantId rides along for audit logging.
  async (input: {
    application: string;
    policyContext: string;
    applicantId: string;
  }) => ({
    application: input.application,
    policyContext: input.policyContext,
    applicantId: input.applicantId,
  }),
  loanPrompt,
  decisionModel,
]);
```
### 4. Run deterministic pre-checks before invoking the LLM

In production lending systems, some rules should never depend on an LLM. Hard-fail obvious cases first, then send only borderline applications into LangChain.
```typescript
import { loanApprovalChain } from "./chain";
import type { LoanApplication } from "./schema";

function deterministicCheck(app: {
  annualIncome: number;
  monthlyDebt: number;
  requestedAmount: number;
}) {
  // Debt-to-income: monthly debt obligations over gross monthly income.
  const dti = app.monthlyDebt / (app.annualIncome / 12);
  if (app.requestedAmount > app.annualIncome * 5) {
    return { decision: "decline" as const, reasons: ["Requested amount exceeds product cap"] };
  }
  if (dti > 0.55) {
    return { decision: "manual_review" as const, reasons: ["Debt-to-income ratio above threshold"] };
  }
  return null;
}

async function decideLoan(application: LoanApplication, policyContext: string) {
  const precheck = deterministicCheck(application);
  if (precheck) {
    return {
      applicantId: application.applicantId,
      ...precheck,
      confidence: 1,
      policyReferences: ["hard-rule"],
    };
  }
  const result = await loanApprovalChain.invoke({
    applicantId: application.applicantId,
    application: JSON.stringify(application),
    policyContext,
  });
  return result;
}

(async () => {
  const decision = await decideLoan(
    {
      applicantId: "app_123",
      annualIncome: 90000,
      monthlyDebt: 3000,
      requestedAmount: 25000,
      employmentStatus: "employed",
      creditScore: 720,
    },
    "Approve if credit score >= 700 and DTI <= 0.45; manual review if DTI <= 0.55."
  );
  console.log(decision);
})();
```
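For the sample applicant, the deterministic pre-check passes: the DTI works out to 3000 / (90000 / 12) = 0.40, below the 0.55 manual-review threshold, and 25,000 is well under the 5× income cap, so the application proceeds to the LLM. The arithmetic as a standalone helper:

```typescript
// Debt-to-income: monthly debt obligations over gross monthly income.
function debtToIncome(annualIncome: number, monthlyDebt: number): number {
  return monthlyDebt / (annualIncome / 12);
}

const dti = debtToIncome(90000, 3000);
console.log(dti.toFixed(2)); // → "0.40"
```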
## Production Considerations

- **Auditability**
  - Persist every request/response pair with timestamps, model version, prompt version, retrieved policy docs, and final outcome.
  - Regulators will care about why a borrower was approved or declined.
- **Data residency**
  - Keep PII in-region if your lending stack operates under local residency requirements.
  - If you use hosted models or vector stores, verify where embeddings and logs are stored.
- **Guardrails**
  - Block protected attributes like race, religion, gender identity, marital status, and proxies where your legal team requires it.
- **Monitoring**
  - Track approval rate drift, manual review rate, false decline rate, latency per request, and prompt failure rate. If those metrics shift after a prompt or policy update, roll back fast.
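One way to shape the audit record described above is a single immutable row per decision. The field names here are illustrative assumptions, not a standard; adapt them to your own governance requirements.

```typescript
// One immutable record per decision; write it before returning a response.
interface AuditRecord {
  applicantId: string;
  timestamp: string;          // ISO 8601
  modelId: string;            // e.g. "gpt-4o-mini"
  promptVersion: string;      // version your prompts like code
  policyReferences: string[]; // IDs of the retrieved policy documents
  inputSnapshot: string;      // the exact application JSON sent to the model
  decision: "approve" | "decline" | "manual_review";
  reasons: string[];
  decidedBy: "hard-rule" | "llm" | "underwriter";
}

function buildAuditRecord(
  applicantId: string,
  inputSnapshot: string,
  decision: AuditRecord["decision"],
  reasons: string[],
  meta: Pick<AuditRecord, "modelId" | "promptVersion" | "policyReferences" | "decidedBy">
): AuditRecord {
  return {
    applicantId,
    timestamp: new Date().toISOString(),
    inputSnapshot,
    decision,
    reasons,
    ...meta,
  };
}
```

Because `decidedBy` distinguishes hard rules, the model, and underwriters, you can later segment approval-rate drift by decision path.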
## Common Pitfalls

- **Letting the LLM make hard credit decisions alone.** Use deterministic underwriting rules for non-negotiable thresholds like max exposure or minimum income multiples. The model should assist with classification and explanation, not replace your risk engine.
- **Skipping schema validation.** Free-form JSON-looking text will break downstream systems. Always use zod with `withStructuredOutput`, or validate outputs before persisting them.
- **Ignoring compliance traceability.** If you can’t reconstruct which policy version drove a decline three months later, you have an audit problem. Store prompt versions, retrieved documents, model IDs, and final reasons alongside each decision.
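As a last line of defense against the guardrail pitfalls above, it is cheap to scan model-produced reasons for forbidden terms before persisting a decision. The term list below is a placeholder for illustration; the real list belongs to your legal and compliance teams.

```typescript
// Placeholder list; the authoritative list comes from legal/compliance review.
const FORBIDDEN_TERMS = ["race", "religion", "gender", "marital status", "national origin"];

// Returns the offending terms so a tainted decision can be
// rejected or routed to manual review instead of persisted.
function findForbiddenTerms(reasons: string[]): string[] {
  const text = reasons.join(" ").toLowerCase();
  return FORBIDDEN_TERMS.filter((term) => text.includes(term));
}

findForbiddenTerms(["DTI above threshold"]);            // → []
findForbiddenTerms(["Declined due to marital status"]); // → ["marital status"]
```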
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit