How to Build a Loan Approval Agent Using LangChain in TypeScript for Lending
A loan approval agent helps a lending team triage applications, extract the right facts, check policy rules, and produce a recommendation with an audit trail. It matters because underwriting is expensive, slow, and inconsistent when humans manually read every file; the agent should reduce turnaround time without turning credit decisions into a black box.
Architecture
- Application intake layer
  - Accepts structured fields from your loan origination system plus unstructured documents like bank statements, payslips, and IDs.
- Document parsing and normalization
  - Uses loaders and text splitters to turn PDFs, emails, and notes into clean text chunks for downstream reasoning.
- Policy retrieval layer
  - Pulls lending policy snippets, product rules, affordability thresholds, and regulatory constraints from a vector store or document index.
- Decisioning chain
  - Combines applicant data with retrieved policy context to generate a recommendation: approve, refer, or decline.
- Audit and explainability store
  - Persists the model input, retrieved policy passages, outputs, and final decision rationale for compliance review.
- Human-in-the-loop gate
  - Routes borderline cases to an underwriter instead of auto-deciding when confidence is low or policy exceptions are detected.
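One way to make the layer boundaries above concrete is with a few TypeScript types. Everything below is an illustrative sketch of the data flowing between layers, not part of any LangChain API; all names are assumptions.

```ts
// Illustrative shapes for the architecture layers; names are hypothetical.
type Decision = "approve" | "refer" | "decline";

interface PolicySnippet {
  id: string;
  text: string; // a retrieved policy passage, persisted for audit
}

interface DecisionRecord {
  decision: Decision;
  rationale: string[];
  retrievedPolicy: PolicySnippet[]; // audit and explainability store
  needsHumanReview: boolean;        // set when confidence is low or an exception fires
}

// The human-in-the-loop gate: borderline cases go to an underwriter
// instead of being auto-decided.
function gate(record: DecisionRecord): "auto" | "underwriter" {
  return record.needsHumanReview || record.decision === "refer"
    ? "underwriter"
    : "auto";
}
```

A case management system can then route on the return value of `gate` without ever inspecting model output directly.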
Implementation
1) Install the LangChain packages you actually need
For TypeScript in production, keep the dependency surface small. You need core LangChain primitives plus a model provider and a vector store if you are retrieving policy docs.
```bash
npm install langchain @langchain/core @langchain/openai zod
```
If you are using document retrieval, add your vector store package separately. For this example, we will keep retrieval simple and focus on the decision chain pattern.
2) Define the application schema and decision output
Loan decisions need structured outputs. Do not ask the model for free-form prose and then parse it later if you can avoid it.
```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";
import { z } from "zod";

const LoanApplicationSchema = z.object({
  applicantName: z.string(),
  monthlyIncome: z.number().positive(),
  monthlyDebtPayments: z.number().nonnegative(),
  requestedAmount: z.number().positive(),
  termMonths: z.number().int().positive(),
  employmentStatus: z.enum(["employed", "self_employed", "unemployed"]),
  country: z.string(),
});

const DecisionSchema = z.object({
  decision: z.enum(["approve", "refer", "decline"]),
  riskBand: z.enum(["low", "medium", "high"]),
  reasons: z.array(z.string()).min(1),
  policyFlags: z.array(z.string()),
});

type LoanApplication = z.infer<typeof LoanApplicationSchema>;
type LoanDecision = z.infer<typeof DecisionSchema>;
```
3) Build the underwriting prompt and chain
This pattern uses ChatPromptTemplate plus RunnableLambda so you can enforce pre-checks before the model runs. That matters in lending because some rules should never be delegated to the LLM.
```ts
const underwritingPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a loan underwriting assistant.
Use only the provided application data and policy context.
Return a concise recommendation with compliance-safe reasons.
If required information is missing, set decision to "refer".`,
  ],
  [
    "human",
    `Application:
{application}

Policy context:
{policyContext}

Risk rules:
- Debt-to-income above 45% => refer or decline depending on other risk signals
- Missing income verification => refer
- Country mismatch with supported jurisdictions => decline
- Never invent facts not present in the application`,
  ],
]);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const validateAndScore = new RunnableLambda({
  func: async (input: LoanApplication) => {
    const dti =
      (input.monthlyDebtPayments / Math.max(input.monthlyIncome, 1)) * 100;
    return {
      ...input,
      dti: Number(dti.toFixed(2)),
      precheck:
        input.monthlyIncome <= 0 || input.requestedAmount <= 0
          ? "invalid"
          : dti > 45
          ? "high_dti"
          : "ok",
    };
  },
});

const formatDecisionInput = new RunnableLambda({
  func: async (input: LoanApplication & { dti: number; precheck: string }) => ({
    application: JSON.stringify(input, null, 2),
    policyContext:
      "- Maximum unsecured personal loan amount: $50,000\n" +
      "- Minimum verified income required\n" +
      "- Supported jurisdictions: US, UK, CA\n" +
      "- Manual review required for self-employed applicants over $25,000",
    precheck: input.precheck,
    dti: input.dti,
  }),
});

const chain = validateAndScore
  .pipe(formatDecisionInput)
  .pipe(underwritingPrompt)
  .pipe(llm);
```
4) Enforce structured output and run the agent
Use withStructuredOutput() so the model returns a typed object matching your schema. That gives you predictable downstream behavior for case management systems and audit logging.
```ts
const structuredModel = llm.withStructuredOutput(DecisionSchema);

async function approveLoan(applicationInput: unknown): Promise<LoanDecision> {
  const application = LoanApplicationSchema.parse(applicationInput);
  const prepared = await validateAndScore.invoke(application);

  if (prepared.precheck === "invalid") {
    return {
      decision: "decline",
      riskBand: "high",
      reasons: ["Invalid or missing core financial fields"],
      policyFlags: ["data_quality_failure"],
    };
  }

  if (prepared.precheck === "high_dti") {
    return {
      decision: "refer",
      riskBand: "medium",
      reasons: ["Debt-to-income ratio exceeds threshold"],
      policyFlags: ["dti_over_45"],
    };
  }

  const promptInput = await formatDecisionInput.invoke(prepared);
  const messages = await underwritingPrompt.formatMessages(promptInput);
  // withStructuredOutput(DecisionSchema) already types the result, so no cast is needed.
  return structuredModel.invoke(messages);
}
```
```ts
const result = await approveLoan({
  applicantName: "Maya Chen",
  monthlyIncome: 8000,
  monthlyDebtPayments: 2200,
  requestedAmount: 15000,
  termMonths: 36,
  employmentStatus: "employed",
  country: "US",
});

console.log(result);
```
The important part is not just “call an LLM.” It is to place deterministic controls before generation and force typed output after generation. In lending workflows that means fewer hallucinated approvals and cleaner handoff into LOS/CRM systems.
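Even with withStructuredOutput(), a cheap defensive layer at the service boundary is a runtime guard, so malformed output can never reach the case management system. The sketch below mirrors the fields of DecisionSchema from earlier; the guard itself is an assumption, not a LangChain feature.

```ts
// Hypothetical runtime guard mirroring the DecisionSchema fields.
const DECISIONS = ["approve", "refer", "decline"] as const;
const RISK_BANDS = ["low", "medium", "high"] as const;

function isLoanDecision(x: unknown): boolean {
  if (typeof x !== "object" || x === null) return false;
  const d = x as Record<string, unknown>;
  return (
    DECISIONS.includes(d.decision as (typeof DECISIONS)[number]) &&
    RISK_BANDS.includes(d.riskBand as (typeof RISK_BANDS)[number]) &&
    Array.isArray(d.reasons) &&
    d.reasons.length >= 1 &&               // at least one compliance-safe reason
    d.reasons.every((r) => typeof r === "string") &&
    Array.isArray(d.policyFlags)
  );
}
```

If the guard fails, route the case to "refer" rather than retrying blindly; a structurally invalid decision is itself a signal worth auditing.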
Production Considerations
- Deploy regionally
  - Keep prompts, embeddings, logs, and vector stores in approved regions. If your lending policy requires data residency in the EU or UK, do not ship customer PII to a cross-border hosted service without legal review.
- Log every decision path
  - Store application inputs, DTI calculation results, retrieved policy text, model version, prompt version, and final output. Auditors will ask why a case was approved or referred six months later.
- Add guardrails for regulated decisions
  - Block sensitive attributes from prompts unless explicitly allowed by your compliance team. Do not pass protected class data into the model just because it exists upstream.
- Monitor drift
  - Track approval rate by product type, geography, income band, and manual referral rate. If one segment suddenly changes behavior after a prompt tweak or model update, treat it like a production incident.
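The drift check can be as simple as a segment-level rate comparison against a baseline. The thresholds, segment names, and Outcome shape below are illustrative assumptions, not a production monitoring design.

```ts
// Hypothetical drift check: flag a segment whose approval rate moves
// away from its baseline by more than a tolerance.
interface Outcome {
  segment: string;   // e.g. product type, geography, or income band
  approved: boolean;
}

function approvalRate(outcomes: Outcome[], segment: string): number {
  const rows = outcomes.filter((o) => o.segment === segment);
  if (rows.length === 0) return 0;
  return rows.filter((o) => o.approved).length / rows.length;
}

// Tolerance is in absolute rate points, e.g. 0.1 = 10 percentage points.
function driftFlag(current: number, baseline: number, tolerance = 0.1): boolean {
  return Math.abs(current - baseline) > tolerance;
}
```

Run this per segment after every prompt or model change; a tripped flag should open an incident, not just a dashboard annotation.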
Common Pitfalls
- Letting the LLM make deterministic credit-policy calls
  - Fix by encoding hard rules in code first. Use the model for synthesis and explanation, not for threshold math or jurisdiction checks.
- Skipping structured outputs
  - Free-form text breaks downstream automation fast. Use Zod schemas with withStructuredOutput() so your service can reliably route approve/refer/decline outcomes.
- Ignoring audit requirements
  - If you cannot reconstruct why a decision happened, you do not have a lending system you can defend. Persist prompt versions, retrieved documents, model name, timestamped inputs/outputs, and any human override.
- Mixing PII into prompts without controls
  - Redact what you do not need. For most underwriting flows you need financial facts and identity verification status; you do not need full raw documents in every prompt call.
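For the PII pitfall, a minimal allowlist redactor is often enough to keep stray fields out of prompts. The field names below are illustrative; your compliance team owns the real allowlist.

```ts
// Hypothetical allowlist: only these fields may reach a prompt.
const PROMPT_ALLOWLIST = new Set([
  "monthlyIncome",
  "monthlyDebtPayments",
  "requestedAmount",
  "termMonths",
  "employmentStatus",
  "country",
]);

// Drop everything not explicitly allowed, e.g. names, addresses, raw documents.
function redactForPrompt(
  application: Record<string, unknown>
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(application).filter(([key]) => PROMPT_ALLOWLIST.has(key))
  );
}
```

An allowlist fails closed: a new upstream field stays out of prompts until someone deliberately adds it, which is the right default in a regulated flow.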
A loan approval agent built this way is useful because it sits inside existing underwriting controls instead of trying to replace them. That is how you get faster decisions without breaking compliance or creating unexplainable credit outcomes.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.