How to Build a Loan Approval Agent Using LangGraph in TypeScript for Insurance
A loan approval agent for insurance decides whether a customer’s financing request can move forward, needs manual review, or must be rejected based on policy rules, risk signals, and document evidence. In insurance, this matters because lending decisions often sit next to regulated workflows: you need traceability, consistent underwriting logic, and a clean audit trail for every recommendation the agent makes.
Architecture
- State model: Holds applicant data, policy metadata, extracted documents, risk scores, the decision outcome, and audit notes.
- Classifier node: Normalizes the request into one of three paths: approve, review, or reject.
- Policy/rules node: Applies insurance-specific constraints like eligibility thresholds, jurisdiction rules, and product exclusions.
- Document verification node: Checks submitted income proof, identity data, and policy documents before any approval path.
- Human review handoff: Routes ambiguous cases to an underwriter or ops queue with the evidence attached.
- Audit logger: Persists every state transition and decision reason for compliance and post-incident review.
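Together these pieces form one directed graph. As a quick orientation, this is the topology the Implementation section below builds:

```ts
// START -> verifyDocuments -> applyPolicyRules -> scoreRisk
//   scoreRisk -> humanReview  (missing docs, rule flags, or high risk) -> END
//   scoreRisk -> finalApprove (clean file, risk below threshold)       -> END
```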
Implementation
1) Define the state and build the graph
Use a typed state object so every node reads and writes the same contract. For insurance workflows, keep the decision trace in state from the start.
```ts
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

type Decision = "approve" | "review" | "reject";

// Every channel uses a last-value reducer: each node copies the array it
// reads and returns the full updated value, keeping the trace explicit.
const LoanState = Annotation.Root({
  applicant: Annotation<any>(),
  policy: Annotation<any>(),
  documents: Annotation<string[]>({ reducer: (_p, n) => n, default: () => [] }),
  riskScore: Annotation<number>({ reducer: (_p, n) => n, default: () => 0 }),
  decision: Annotation<Decision | null>({ reducer: (_p, n) => n, default: () => null }),
  reasons: Annotation<string[]>({ reducer: (_p, n) => n, default: () => [] }),
  auditTrail: Annotation<string[]>({ reducer: (_p, n) => n, default: () => [] }),
});

const graph = new StateGraph(LoanState);
```
2) Add nodes for document checks and policy evaluation
Keep these nodes deterministic where possible. In regulated insurance flows, deterministic rules are easier to test and defend than opaque model output.
```ts
const verifyDocuments = async (state: typeof LoanState.State) => {
  const reasons = [...state.reasons];
  const auditTrail = [...state.auditTrail];
  if (!state.documents.includes("id_proof")) {
    reasons.push("Missing identity proof");
    auditTrail.push("verifyDocuments: missing id_proof");
  }
  if (!state.documents.includes("income_statement")) {
    reasons.push("Missing income statement");
    auditTrail.push("verifyDocuments: missing income_statement");
  }
  // Any missing document forces the file into manual review.
  return {
    reasons,
    auditTrail,
    decision: reasons.length > 0 ? ("review" as const) : state.decision,
  };
};
```
```ts
const applyPolicyRules = async (state: typeof LoanState.State) => {
  const reasons = [...state.reasons];
  const auditTrail = [...state.auditTrail];
  // Hard stop: ineligible jurisdictions are rejected outright.
  if (state.applicant.jurisdiction === "restricted_region") {
    reasons.push("Jurisdiction not eligible under current policy");
    auditTrail.push("applyPolicyRules: restricted jurisdiction");
    return { decision: "reject" as const, reasons, auditTrail };
  }
  // Soft stop: breaching the LTV limit routes to manual review.
  if (state.applicant.ltv > state.policy.maxLtv) {
    reasons.push(`LTV above limit (${state.applicant.ltv} > ${state.policy.maxLtv})`);
    auditTrail.push("applyPolicyRules: ltv exceeded");
    return { decision: "review" as const, reasons, auditTrail };
  }
  return { reasons, auditTrail };
};
```
3) Classify risk and route with addConditionalEdges
Use a small scoring step to keep the graph explainable. The point is not to replace underwriting; it is to standardize triage before a human sees the case.
```ts
const scoreRisk = async (state: typeof LoanState.State) => {
  let score = state.riskScore;
  const auditTrail = [...state.auditTrail];
  // Additive scoring: weak credit raises risk, strong credit lowers it,
  // and a heavy claim history adds a fixed penalty.
  score +=
    state.applicant.creditScore < 600 ? 40
    : state.applicant.creditScore < 700 ? 20
    : -10;
  score += state.applicant.claimHistoryCount > 2 ? 25 : 0;
  auditTrail.push(`scoreRisk: computed score=${score}`);
  return { riskScore: Math.max(0, score), auditTrail };
};

const decideRoute = (state: typeof LoanState.State) => {
  if (state.decision === "reject") return END;
  if (state.reasons.length > 0) return "humanReview";
  if (state.riskScore >= thresholdForReview(state.policy)) return "humanReview";
  return "finalApprove";
};

function thresholdForReview(policy: any) {
  return policy.reviewRiskThreshold ?? 50;
}
```
Now wire the graph together.
graph.addNode("verifyDocuments", verifyDocuments);
graph.addNode("applyPolicyRules", applyPolicyRules);
graph.addNode("scoreRisk", scoreRisk);
graph.addNode("humanReview", async (state) => ({
decision: "review" as const,
auditTrail: [...state.auditTrail, "humanReview: queued for underwriter"],
}));
graph.addNode("finalApprove", async (state) => ({
decision: "approve" as const,
auditTrail: [...state.auditTrail, "finalApprove: auto-approved"],
}));
graph.addEdge(START, "verifyDocuments");
graph.addEdge("verifyDocuments", "applyPolicyRules");
graph.addEdge("applyPolicyRules", "scoreRisk");
graph.addConditionalEdges("scoreRisk", decideRoute);
graph.addEdge("humanReview", END);
graph.addEdge("finalApprove", END);
const app = graph.compile();
4) Run the agent with an auditable input
The compiled graph returns the full final state. Persist that result in your case management system so compliance teams can reconstruct why a loan was approved or escalated.
```ts
const result = await app.invoke({
  applicant: {
    creditScore: 682,
    jurisdiction: "us_ny",
    ltv: 0.62,
    claimHistoryCount: 1,
  },
  policy: {
    maxLtv: 0.7,
    reviewRiskThreshold: 55,
  },
  documents: ["id_proof", "income_statement"],
});

console.log(result.decision);   // approve | review | reject
console.log(result.auditTrail); // full trace for compliance
console.log(result.reasons);    // explanation payload
```
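One way to do that persistence, sketched with a hypothetical `saveDecisionRecord` helper (the function, the `caseId` scheme, and the record fields are illustrative, not part of LangGraph):

```ts
// Hypothetical helper: adapt the write to your case management system.
async function saveDecisionRecord(
  caseId: string,
  finalState: typeof LoanState.State
) {
  const record = {
    caseId,
    decision: finalState.decision,
    reasons: finalState.reasons,
    auditTrail: finalState.auditTrail,
    riskScore: finalState.riskScore,
    recordedAt: new Date().toISOString(),
  };
  // Swap this for an append-only insert keyed by caseId.
  console.log("persisting decision record:", JSON.stringify(record, null, 2));
}

await saveDecisionRecord("case-2024-0001", result);
```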
Production Considerations
- Deploy in-region: If applicant data includes PII or claims history, keep execution in approved regions that match your data residency obligations.
- Store immutable traces: Audit trails should be append-only and linked to a case ID. Regulators will care about who changed what and when.
- Add human-in-the-loop thresholds: Anything involving borderline creditworthiness, adverse action triggers, or jurisdiction-specific restrictions should route to manual review (see the sketch after this list).
- Instrument every node: Emit metrics for node latency, rejection rate, review rate, and missing-document frequency. Sudden drift often shows up first in these counters.
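For the human-in-the-loop item, LangGraph can pause a run before a node when the graph is compiled with a checkpointer and `interruptBefore`. A minimal sketch using the in-memory `MemorySaver` (a durable checkpointer would replace it in production, and the thread ID scheme here is illustrative):

```ts
import { MemorySaver } from "@langchain/langgraph";

// Compile the same builder, but pause execution before humanReview.
const reviewableApp = builder.compile({
  checkpointer: new MemorySaver(),
  interruptBefore: ["humanReview"],
});

// One thread per application, so the paused run can be resumed later.
const config = { configurable: { thread_id: "case-2024-0002" } };
await reviewableApp.invoke(
  {
    applicant: { creditScore: 590, jurisdiction: "us_ny", ltv: 0.8, claimHistoryCount: 3 },
    policy: { maxLtv: 0.7, reviewRiskThreshold: 55 },
    documents: ["id_proof"], // missing income_statement routes to review
  },
  config
);

// After the underwriter signs off, resume from the saved checkpoint.
const resumed = await reviewableApp.invoke(null, config);
```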
Common Pitfalls
- Letting an LLM make final approval decisions: Use the model for extraction or summarization only. Final decisions should come from explicit rules plus human review thresholds.
- Not versioning policy logic: Insurance policies change often. Version your thresholds and rule sets so you can reproduce historical decisions during audits (see the sketch after this list).
- Dropping evidence between nodes: If a node mutates state without preserving reasons and document references, you lose explainability. Keep reasons and auditTrail in every branch.
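For the versioning pitfall, one lightweight pattern is to pin every run to an immutable policy version and record that version alongside the decision. A minimal sketch (the registry shape and version keys are illustrative):

```ts
// Freeze thresholds per version so historical decisions can be replayed
// against exactly the rules that produced them.
interface VersionedPolicy {
  version: string; // e.g. "2024-06-01"
  maxLtv: number;
  reviewRiskThreshold: number;
}

const policyRegistry: Record<string, VersionedPolicy> = {
  "2024-06-01": { version: "2024-06-01", maxLtv: 0.7, reviewRiskThreshold: 55 },
};

// Pin the version at intake and carry it through the audit trail.
const pinned = policyRegistry["2024-06-01"];
const replayed = await app.invoke({
  applicant: { creditScore: 682, jurisdiction: "us_ny", ltv: 0.62, claimHistoryCount: 1 },
  policy: pinned,
  documents: ["id_proof", "income_statement"],
});
console.log(replayed.decision, "under policy", pinned.version);
```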
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit