How to Build a Loan Approval Agent Using LangGraph in TypeScript for Wealth Management
A loan approval agent in wealth management is not a chatbot that says “yes” or “no.” It’s a workflow engine that gathers client data, checks policy and risk rules, routes edge cases to humans, and leaves behind an audit trail you can defend to compliance. That matters because lending decisions in wealth management sit at the intersection of suitability, regulatory oversight, and client trust.
Architecture
For this agent, keep the graph small and explicit.
- State model: holds applicant data, KYC status, portfolio exposure, credit signals, decision outcome, and audit metadata.
- Document intake node: normalizes inputs from CRM, PDF statements, tax docs, and internal portfolio systems.
- Policy/risk evaluation node: applies deterministic rules for debt-to-income, collateral coverage, concentration limits, and jurisdiction-specific constraints.
- LLM reasoning node: summarizes edge cases and drafts a recommendation, but never makes the final decision alone.
- Human review node: handles exceptions such as missing documents, policy conflicts, high-value requests, or adverse findings.
- Audit/logging layer: stores every state transition with timestamps, reason codes, and data source references for compliance review.
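Before writing any LangGraph code, it can help to write the topology down as plain data you can review with compliance. The sketch below is illustrative (node names mirror the sections that follow; `"end"` stands in for the terminal state):

```typescript
// Illustrative: the agent's topology as plain data, reviewable outside any framework.
const nodes = ["documentIntake", "policyCheck", "llmReview", "humanReview", "end"] as const;
type Node = (typeof nodes)[number];

// Each node lists the nodes it may hand off to; conditional routes appear as multiple targets.
const edges: Record<Node, Node[]> = {
  documentIntake: ["policyCheck"],
  policyCheck: ["llmReview", "end"], // clean approvals/rejections skip the LLM
  llmReview: ["humanReview", "end"], // only "review" decisions reach a human
  humanReview: ["end"],
  end: [],
};

// Sanity check: every edge target is a declared node.
const allTargetsKnown = Object.values(edges)
  .flat()
  .every((t) => (nodes as readonly string[]).includes(t));
```

A table like this is also a convenient artifact to attach to a model-risk or credit-policy review, since it shows every path a decision can take.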
Implementation
1) Define the graph state and decision types
Use a typed state so every node knows exactly what it can read and write. In wealth management systems, this prevents accidental leakage of unapproved fields into downstream steps.
```typescript
import { Annotation } from "@langchain/langgraph";

export type LoanDecision = "approve" | "reject" | "review";

export const LoanState = Annotation.Root({
  applicantId: Annotation<string>(),
  requestedAmount: Annotation<number>(),
  annualIncome: Annotation<number>(),
  existingDebt: Annotation<number>(),
  collateralValue: Annotation<number>(),
  kycPassed: Annotation<boolean>(),
  portfolioAum: Annotation<number>(),
  jurisdiction: Annotation<string>(),
  riskScore: Annotation<number | null>(),
  decision: Annotation<LoanDecision | null>(),
  reasonCodes: Annotation<string[]>(),
  auditTrail: Annotation<string[]>(),
});
```
2) Add deterministic policy checks first
Wealth management lending should not start with the model. Start with hard controls like KYC status, leverage thresholds, and concentration limits.
```typescript
import { StateGraph, START, END } from "@langchain/langgraph";

const policyCheck = async (state: typeof LoanState.State) => {
  const reasonCodes = [...state.reasonCodes];
  let decision: LoanDecision | null = null;

  if (!state.kycPassed) {
    reasonCodes.push("KYC_FAILED");
    decision = "reject";
  }

  const dti = state.existingDebt / Math.max(state.annualIncome, 1);
  if (dti > 0.45) {
    reasonCodes.push("DTI_TOO_HIGH");
    decision = decision ?? "review";
  }

  // Lend against a haircut collateral value (80%), then cap at 75% LTV.
  const ltv = state.requestedAmount / Math.max(state.collateralValue * 0.8, 1);
  if (ltv > 0.75) {
    reasonCodes.push("LTV_TOO_HIGH");
    decision = decision ?? "review";
  }

  return {
    riskScore: Math.round((dti + ltv) * 50),
    // If no hard rule fired, the application passes the deterministic controls.
    decision: decision ?? "approve",
    reasonCodes,
    auditTrail: [...state.auditTrail, `policyCheck:dti=${dti.toFixed(2)} ltv=${ltv.toFixed(2)}`],
  };
};
```
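The thresholds above are easy to sanity-check in isolation. Here is a standalone sketch of the same arithmetic, duplicated without the LangGraph types so the numbers can be verified directly against the sample applicant used later in this article:

```typescript
// Standalone version of the policy arithmetic above, for quick verification.
function policyMath(input: {
  annualIncome: number;
  existingDebt: number;
  requestedAmount: number;
  collateralValue: number;
}) {
  const dti = input.existingDebt / Math.max(input.annualIncome, 1);
  // Same 80% collateral haircut and 75% LTV cap as policyCheck.
  const ltv = input.requestedAmount / Math.max(input.collateralValue * 0.8, 1);
  const codes: string[] = [];
  if (dti > 0.45) codes.push("DTI_TOO_HIGH");
  if (ltv > 0.75) codes.push("LTV_TOO_HIGH");
  return { dti, ltv, riskScore: Math.round((dti + ltv) * 50), codes };
}

// The sample applicant from the execution section: 60k debt on 180k income,
// 300k requested against 500k collateral.
const sample = policyMath({
  annualIncome: 180_000,
  existingDebt: 60_000,
  requestedAmount: 300_000,
  collateralValue: 500_000,
});
// dti ≈ 0.33 and ltv = 300k / (500k * 0.8) = 0.75 sit just inside both
// thresholds, so no reason codes fire for this applicant.
```

Unit tests at this level are what make the deterministic layer defensible: you can show an auditor exactly which inputs trip which rule.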
3) Add an LLM node only for explanation and edge-case synthesis
The model should produce a concise recommendation based on already computed facts. Keep it constrained; do not let it invent policy.
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const llmReview = async (state: typeof LoanState.State) => {
  const prompt = `
You are assisting a wealth management credit analyst.
Given these facts:
- applicantId: ${state.applicantId}
- requestedAmount: ${state.requestedAmount}
- annualIncome: ${state.annualIncome}
- existingDebt: ${state.existingDebt}
- collateralValue: ${state.collateralValue}
- kycPassed: ${state.kycPassed}
- portfolioAum: ${state.portfolioAum}
- jurisdiction: ${state.jurisdiction}
- riskScore: ${state.riskScore}
- currentDecision: ${state.decision}
Return one short recommendation sentence with a reason-code-style summary.
Do not override policy outcomes.
`;
  const response = await llm.invoke(prompt);
  return {
    auditTrail: [...state.auditTrail, `llmReview:${response.content}`],
    reasonCodes:
      state.decision === "review"
        ? [...state.reasonCodes, "LLM_EDGE_CASE_SUMMARY"]
        : state.reasonCodes,
  };
};
```
4) Wire the graph and route exceptions to human review
Use StateGraph to make the flow explicit. In production this is where you keep approval logic readable for auditors.
```typescript
const needsHumanReview = (state: typeof LoanState.State) => state.decision === "review";

// Placeholder for a real review queue: in production this node would pause the
// graph and wait for a credit analyst's disposition rather than deciding inline.
const humanReview = async (state: typeof LoanState.State) => ({
  decision:
    state.reasonCodes.includes("KYC_FAILED") ? ("reject" as const) : ("approve" as const),
  reasonCodes: [...state.reasonCodes, "HUMAN_REVIEW_COMPLETE"],
  auditTrail: [...state.auditTrail, `humanReview:${state.applicantId}`],
});

const graph = new StateGraph(LoanState)
  .addNode("policyCheck", policyCheck)
  .addNode("llmReview", llmReview)
  .addNode("humanReview", humanReview)
  .addEdge(START, "policyCheck")
  // addConditionalEdges takes a routing function evaluated against the state at
  // runtime; passing a bare ternary would freeze the route at wiring time.
  .addConditionalEdges("policyCheck", (state) => (needsHumanReview(state) ? "llmReview" : END))
  .addConditionalEdges("llmReview", (state) => (needsHumanReview(state) ? "humanReview" : END))
  .addEdge("humanReview", END);

export const loanApprovalApp = graph.compile();
```
Step-by-step execution pattern
Run the compiled graph with only the fields needed for the decision. Persist the final state plus all intermediate audit entries in your own datastore.
```typescript
const result = await loanApprovalApp.invoke({
  applicantId: "app_123",
  requestedAmount: 300000,
  annualIncome: 180000,
  existingDebt: 60000,
  collateralValue: 500000,
  kycPassed: true,
  portfolioAum: 2500000,
  jurisdiction: "US-NY",
  riskScore: null,
  decision: null,
  reasonCodes: [],
  auditTrail: [],
});

console.log(result.decision);
console.log(result.reasonCodes);
console.log(result.auditTrail);
```
Production Considerations
- Deployment
  - Keep the graph service inside your regulated boundary if you handle PII or portfolio data.
  - For cross-border clients, enforce data residency by region before invoking any model endpoint.
- Monitoring
  - Log every node transition with applicant ID, rule hits, model version, and final reviewer identity.
  - Track rejection rates by jurisdiction and product type so compliance can spot drift early.
- Guardrails
  - Make hard rules deterministic and testable; never outsource KYC or suitability checks to the model.
  - Redact account numbers, tax IDs, and statement details before sending prompts to any external LLM.
- Auditability
  - Store immutable evidence for each decision path: input snapshot hash, rule outputs, human override reason, and timestamped final disposition.
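The redaction and snapshot-hash points can be sketched with nothing but the Node standard library. The patterns below are illustrative, not a complete PII catalogue; a real deployment would use a vetted redaction library and jurisdiction-specific rules:

```typescript
import { createHash } from "node:crypto";

// Illustrative redaction patterns only; real systems need far broader coverage.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[TAX_ID]"],   // US SSN-style tax IDs
  [/\b\d{10,16}\b/g, "[ACCOUNT_NUMBER]"],   // long digit runs, e.g. account numbers
];

function redactForPrompt(text: string): string {
  return REDACTIONS.reduce((acc, [pattern, label]) => acc.replace(pattern, label), text);
}

// Immutable evidence: hash the exact input snapshot so a later dispute can
// prove which facts the decision was made on.
function snapshotHash(input: unknown): string {
  return createHash("sha256").update(JSON.stringify(input)).digest("hex");
}

const prompt = redactForPrompt(
  "Client 123-45-6789, account 4500123499881111, requests 300000."
);
const evidence = snapshotHash({ applicantId: "app_123", requestedAmount: 300000 });
```

Run the redaction step before the `llmReview` node builds its prompt, and write the snapshot hash into the audit trail alongside the rule outputs.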
Common Pitfalls
- Letting the LLM make the final call. Make policy nodes authoritative and use the model only for summarization or exception handling.
- Skipping jurisdiction-specific rules. A loan acceptable in one region may violate lending or suitability constraints in another; encode jurisdiction as first-class state and branch rules accordingly.
- Not persisting intermediate state. If you only store the final result, compliance cannot reconstruct why a loan was rejected or approved; persist each node output with a trace ID tied to your case management system.
- Mixing portfolio advisory data with lending logic. Wealth management clients often have concentrated positions or managed accounts that affect risk decisions; separate advisory signals from credit policy inputs so you don’t create hidden conflicts of interest.
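The persistence pitfall can be sketched as a tiny append-only audit store keyed by trace ID. The in-memory store below is a hypothetical stand-in for whatever case-management or ledger system you actually integrate with:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical append-only audit store; a real system would write to immutable
// storage (WORM bucket, ledger table), not an in-memory array.
type AuditEntry = {
  traceId: string;  // one per loan application, shared across all nodes
  node: string;     // which graph node produced this output
  at: string;       // ISO-8601 timestamp
  output: unknown;  // the node's partial state update, snapshotted
};

const auditLog: AuditEntry[] = [];

function recordNodeOutput(traceId: string, node: string, output: unknown): void {
  auditLog.push({ traceId, node, at: new Date().toISOString(), output });
}

// One trace ID per application, attached to every node output.
const traceId = randomUUID();
recordNodeOutput(traceId, "policyCheck", { decision: "review", reasonCodes: ["LTV_TOO_HIGH"] });
recordNodeOutput(traceId, "llmReview", { reasonCodes: ["LTV_TOO_HIGH", "LLM_EDGE_CASE_SUMMARY"] });
recordNodeOutput(traceId, "humanReview", { decision: "approve" });

// Reconstructing a decision becomes a filter, not a forensic exercise.
const history = auditLog.filter((e) => e.traceId === traceId);
```

With this in place, "why was application X approved?" is answered by replaying `history` in order, node by node.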
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.