How to Build a Loan Approval Agent Using LangChain in TypeScript for Payments
A loan approval agent for payments is the orchestration layer that takes an application, checks policy, scores risk, and returns a decision with an audit trail. It matters because payment flows need fast yes/no decisions, but they also need explainability, compliance, and a clean handoff to downstream systems like core banking, KYC, and ledger services.
Architecture
- Input adapter
  - Normalizes application payloads from checkout, merchant onboarding, or internal ops tools.
  - Validates required fields before any LLM call.
- Policy retrieval layer
  - Pulls underwriting rules, product eligibility, and jurisdiction-specific constraints.
  - Keeps policy outside the prompt so it can be versioned and audited.
- Decision chain
  - Uses LangChain to classify the application into `approve`, `review`, or `reject`.
  - Produces structured output instead of free-form text.
- Risk and compliance tools
  - Calls KYC/AML checks, fraud scores, exposure limits, and blacklist services.
  - Ensures the agent does not make decisions from model memory alone.
- Audit logger
  - Persists inputs, policy versions, model version, tool outputs, and final decision.
  - Required for disputes, internal review, and regulator requests.
- Action router
  - Sends approved cases to payment origination or underwriting queues.
  - Sends borderline cases to human review with a reason code.
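The risk-and-compliance step above can be fronted by a deterministic gate that runs before the decision chain ever sees the application. This is a minimal sketch: the 0.7 fraud threshold matches the policy used later, while the country allowlist and reason codes are illustrative assumptions.

```typescript
// Deterministic guardrails that run BEFORE any LLM call. The model only
// sees applications that survive these hard rules.
type PreCheckInput = {
  kycStatus: "pass" | "fail" | "pending";
  fraudScore: number; // 0..1, from the fraud scoring service
  requestedAmount: number;
  country: string; // ISO 3166-1 alpha-2
};

type PreCheckResult =
  | { route: "llm" } // safe to ask the model
  | { route: "reject" | "review"; reasonCode: string };

// Hypothetical jurisdiction allowlist; load from versioned config in practice.
const SUPPORTED_COUNTRIES = new Set(["KE", "NG", "GH"]);

export function preCheck(app: PreCheckInput): PreCheckResult {
  if (app.kycStatus === "fail") {
    return { route: "reject", reasonCode: "KYC_FAILED" };
  }
  if (!SUPPORTED_COUNTRIES.has(app.country)) {
    return { route: "reject", reasonCode: "UNSUPPORTED_COUNTRY" };
  }
  if (app.requestedAmount <= 0) {
    return { route: "reject", reasonCode: "INVALID_AMOUNT" };
  }
  if (app.kycStatus === "pending") {
    return { route: "review", reasonCode: "KYC_PENDING" };
  }
  if (app.fraudScore >= 0.7) {
    return { route: "review", reasonCode: "HIGH_FRAUD_SCORE" };
  }
  return { route: "llm" };
}
```

Only applications that come back with `route: "llm"` are forwarded to the LangChain decision chain; everything else is decided in code with an auditable reason code.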
Implementation
1) Install the LangChain packages you actually need
For a TypeScript service, keep the dependency surface small. You want the core chain primitives plus a chat model provider and structured output support.
```bash
npm install langchain @langchain/core @langchain/openai zod
```
Set your model key in environment variables:
```bash
export OPENAI_API_KEY="your-key"
```
2) Define a strict decision schema
Payments systems should not parse loose prose. Use zod so the agent returns a typed object with a decision, reason codes, and audit metadata.
```typescript
import { z } from "zod";

export const LoanDecisionSchema = z.object({
  decision: z.enum(["approve", "review", "reject"]),
  amountApproved: z.number().nonnegative(),
  reasonCodes: z.array(z.string()).min(1),
  confidence: z.number().min(0).max(1),
  policyVersion: z.string(),
});

export type LoanDecision = z.infer<typeof LoanDecisionSchema>;
```
3) Build the agent chain with LangChain
This pattern uses ChatOpenAI plus withStructuredOutput() so the model must conform to your schema. The prompt includes only the minimum policy context needed for the request.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { LoanDecisionSchema } from "./schema";

type LoanApplication = {
  applicantId: string;
  country: string;
  requestedAmount: number;
  monthlyIncome: number;
  existingDebt: number;
  kycStatus: "pass" | "fail" | "pending";
  fraudScore: number;
};

const prompt = PromptTemplate.fromTemplate(`
You are a loan approval assistant for payments.
Return only structured output that matches the schema.

Policy:
- Reject if KYC failed.
- Review if KYC pending or fraudScore >= 0.7.
- Approve only if debt-to-income ratio is <= 0.35 and requestedAmount <= income * 10.
- Never approve above product limit of {productLimit}.
- If uncertain, choose review.

Application:
{application}
`);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).withStructuredOutput(LoanDecisionSchema);

export const loanApprovalChain = RunnableSequence.from([
  // Serialize inputs into the plain strings the prompt template expects.
  async (input: { application: LoanApplication; productLimit: number }) => ({
    application: JSON.stringify(input.application),
    productLimit: input.productLimit.toString(),
  }),
  prompt,
  llm,
]);
```
4) Execute the chain and persist an audit record
In production you need the decision plus enough context to reconstruct why it happened. Keep the audit record immutable and include policy/model versions.
```typescript
import { loanApprovalChain } from "./agent";

async function main() {
  const application = {
    applicantId: "app_123",
    country: "KE",
    requestedAmount: 50000,
    monthlyIncome: 220000,
    existingDebt: 40000,
    kycStatus: "pass",
    fraudScore: 0.18,
  } as const;

  const decision = await loanApprovalChain.invoke({
    application,
    productLimit: 100000,
  });

  const auditRecord = {
    applicantId: application.applicantId,
    requestedAt: new Date().toISOString(),
    model: "gpt-4o-mini",
    policyVersion: decision.policyVersion,
    inputSnapshot: application,
    decision,
    channel: "payments-loan-origination",
  };

  console.log(JSON.stringify(auditRecord, null, 2));
}

main().catch(console.error);
```
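Downstream of the audit write, the action router from the architecture can stay an exhaustive switch on the structured decision. A minimal sketch; the queue names are assumptions, so substitute your own topics or queues.

```typescript
// Routes a structured decision to a downstream destination.
type Decision = "approve" | "review" | "reject";

type Route = { queue: string; reasonCode?: string };

export function routeDecision(decision: Decision, reasonCodes: string[]): Route {
  switch (decision) {
    case "approve":
      return { queue: "payments.loan-origination" };
    case "review":
      // Human review always carries a reason code for the ops tooling.
      return { queue: "ops.manual-review", reasonCode: reasonCodes[0] ?? "UNSPECIFIED" };
    case "reject":
      return { queue: "notifications.decline", reasonCode: reasonCodes[0] ?? "UNSPECIFIED" };
  }
}
```

Because the switch is exhaustive over the `Decision` union, adding a fourth decision value becomes a compile-time error instead of a silently dropped case.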
Production Considerations
- Put hard rules outside the model. Compliance checks like KYC fail-fast logic, country restrictions, and exposure limits should run before the LLM. The model should recommend within policy boundaries, not invent them.
- Log every decision with versioned artifacts. Store prompt version, policy version, model name, tool outputs, and the final structured response. For payments teams this is not optional; it is how you answer disputes and internal audits.
- Keep data residency explicit. If customer data must stay in-region, route inference through a provider region that matches your residency requirement, or use self-hosted models in your VPC. Do not send raw PII across regions just because the default endpoint is convenient.
- Add human review for borderline cases. Anything with pending KYC, a high fraud score, missing income proof, or low confidence should go to ops review. The agent should route cases; it should not override risk controls.
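The versioned-logging point can be enforced in code rather than by convention. This is a minimal sketch of an immutable audit-record builder; the field names extend the audit record shown earlier, and `storageRegion` is an assumed field for residency-aware storage.

```typescript
// Versioned, immutable audit record for a single decision.
type AuditRecord = Readonly<{
  applicantId: string;
  requestedAt: string; // ISO-8601 timestamp set at write time
  model: string; // e.g. "gpt-4o-mini"
  promptVersion: string;
  policyVersion: string;
  inputSnapshot: unknown; // the request exactly as received
  decision: unknown; // the structured model output
  storageRegion: string; // residency-aware storage target, e.g. "eu-west-1"
}>;

export function buildAuditRecord(
  fields: Omit<AuditRecord, "requestedAt">
): AuditRecord {
  // Object.freeze gives runtime immutability on top of the readonly types,
  // so later code cannot mutate the record before it is persisted.
  return Object.freeze({ ...fields, requestedAt: new Date().toISOString() });
}
```

The builder is the single place where audit records are created, which keeps the prompt, policy, and model versions from being forgotten at individual call sites.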
Common Pitfalls
- Letting the LLM decide without deterministic guardrails
  - Mistake: asking the model to "approve or reject" with no rule engine around it.
  - Fix: enforce hard checks in code first, then let LangChain handle the structured recommendation inside those constraints.
- Using free-form text instead of structured output
  - Mistake: parsing natural-language decisions from `.invoke()` output.
  - Fix: use `withStructuredOutput()` with a `zod` schema so your downstream systems get typed fields they can trust.
- Ignoring auditability and regional compliance
  - Mistake: logging only the final answer while dropping inputs and policy versions.
  - Fix: persist immutable audit records with the request snapshot, model version, policy version, reason codes, and a residency-aware storage location.
- Overexposing sensitive payment data to prompts
  - Mistake: stuffing full bank statements or raw identifiers into every call.
  - Fix: redact unnecessary PII before inference and pass only the fields needed for underwriting decisions.
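One way to implement the redaction fix is a field allowlist, so the prompt only ever sees underwriting-relevant fields. The allowlist below mirrors the application shape used earlier and is an assumption; adjust it to your own underwriting inputs.

```typescript
// Allowlist of fields the underwriting prompt actually needs. Everything
// else (names, account numbers, raw statements) never reaches the model.
const UNDERWRITING_FIELDS = [
  "country",
  "requestedAmount",
  "monthlyIncome",
  "existingDebt",
  "kycStatus",
  "fraudScore",
] as const;

export function redactForInference(
  application: Record<string, unknown>
): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const field of UNDERWRITING_FIELDS) {
    if (field in application) {
      safe[field] = application[field];
    }
  }
  return safe;
}
```

Call `redactForInference(application)` before `JSON.stringify` in the chain's input-mapping step so identifiers like `applicantId` stay in the audit log but out of the prompt.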
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit