How to Build a Fraud Detection Agent for Lending Using LangGraph in TypeScript
A fraud detection agent for lending triages applications, flags suspicious patterns, and routes risky cases for manual review before money moves. In lending, that matters because bad decisions are expensive in two directions: approving fraudulent borrowers increases losses, while false positives slow down legitimate customers and hurt conversion.
Architecture
- Input normalization layer
  - Takes raw application data from web forms, CRM events, bureau pulls, and device signals.
  - Converts it into a stable internal schema so the graph does not depend on upstream payload shape (a sketch follows this list).
- Rules and risk scoring node
  - Applies deterministic checks first: identity mismatch, velocity checks, duplicate SSN/email/device, suspicious income patterns.
  - Produces a structured risk signal that is easy to audit.
- LLM analysis node
  - Summarizes the case and explains why the application looks suspicious.
  - Only sees the sanitized fields needed for reasoning, not full PII dumps.
- Decision router
  - Routes to approve, reject, or manual review.
  - Uses thresholds and policy rules, not free-form model output.
- Audit logger
  - Captures inputs, outputs, and decision reasons for compliance review.
  - Needed for lending audits, adverse action workflows, and model governance.
- Human review handoff
  - Packages the case for underwriters or fraud analysts with evidence and explanation.
  - Keeps humans in the loop for borderline or high-risk applications.
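To make the normalization layer concrete, here is a minimal sketch for the web-form channel. The `RawWebApplication` shape and `normalizeApplication` helper are illustrative assumptions, not a real payload spec; each upstream source would get its own normalizer targeting the same `Application` type defined in step 1 below.

```ts
// Hypothetical raw payload from the web form channel. Bureau pulls, CRM
// events, and device signals would each get their own normalizer that
// targets the same internal Application type.
type RawWebApplication = {
  applicant_id: string;
  ssn: string;
  contact_email: string;
  device_fingerprint: string;
  stated_monthly_income: string; // form fields often arrive as strings
  amount_requested: string;
  us_state: string;
};

function normalizeApplication(raw: RawWebApplication): Application {
  return {
    applicantId: raw.applicant_id,
    ssnLast4: raw.ssn.slice(-4), // only the last four digits go downstream
    email: raw.contact_email.trim().toLowerCase(),
    deviceId: raw.device_fingerprint,
    incomeMonthly: Number(raw.stated_monthly_income),
    requestedAmount: Number(raw.amount_requested),
    state: raw.us_state.toUpperCase(),
  };
}
```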
Implementation
1) Define the state and graph nodes
Use a typed state so every node knows exactly what it can read and write. For lending workflows, keep raw PII out of the LLM path and store only what each node needs.
```ts
import { Annotation, END, StateGraph } from "@langchain/langgraph";

// Internal application schema. Only the last four SSN digits are stored;
// full PII stays out of the graph state entirely.
type Application = {
  applicantId: string;
  ssnLast4: string;
  email: string;
  deviceId: string;
  incomeMonthly: number;
  requestedAmount: number;
  state: string;
};

const GraphState = Annotation.Root({
  application: Annotation<Application>(),
  riskScore: Annotation<number>({
    reducer: (_prev, next) => next, // later nodes overwrite the score
    default: () => 0,
  }),
  flags: Annotation<string[]>({
    reducer: (prev, next) => prev.concat(next), // nodes append new flags
    default: () => [],
  }),
  llmSummary: Annotation<string | undefined>(),
  decision: Annotation<"approve" | "reject" | "manual_review" | undefined>(),
});

// Derive the state type from the annotation instead of maintaining a
// duplicate FraudState definition by hand.
type FraudState = typeof GraphState.State;
```
2) Add deterministic fraud checks before any model call
This is the part most teams skip. Don’t start with the LLM; start with rules that are explainable and cheap to run.
```ts
const rulesNode = async (state: FraudState) => {
  const { application } = state;
  const flags: string[] = [];
  let riskScore = state.riskScore;

  // Requested amount exceeds a full year of stated income.
  if (application.requestedAmount > application.incomeMonthly * 12) {
    flags.push("high_loan_to_income");
    riskScore += 30;
  }

  // Disposable email domains correlate with throwaway identities.
  if (application.email.endsWith("@mailinator.com")) {
    flags.push("disposable_email");
    riskScore += 20;
  }

  // Device fingerprint indicates an emulator rather than a real device.
  if (application.deviceId.startsWith("emulator-")) {
    flags.push("suspicious_device");
    riskScore += 25;
  }

  return { riskScore, flags };
};
```
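The architecture section also lists velocity and duplicate checks. Those need a lookup store, which is out of scope for this post, so the sketch below assumes a hypothetical `FraudLookupStore` interface just to show where such checks slot in; the method names and thresholds are illustrative.

```ts
// Hypothetical interface; in production this would be backed by Redis,
// a feature store, or a dedicated fraud data service.
interface FraudLookupStore {
  countRecentApplications(deviceId: string, windowHours: number): Promise<number>;
  emailSeenWithDifferentSsn(email: string, ssnLast4: string): Promise<boolean>;
}

const makeVelocityNode =
  (store: FraudLookupStore) => async (state: FraudState) => {
    const { application } = state;
    const flags: string[] = [];
    let riskScore = state.riskScore;

    // Many applications from the same device in a short window.
    const recent = await store.countRecentApplications(application.deviceId, 24);
    if (recent >= 3) {
      flags.push("device_velocity");
      riskScore += 25;
    }

    // Same email previously used with a different SSN.
    if (await store.emailSeenWithDifferentSsn(application.email, application.ssnLast4)) {
      flags.push("email_ssn_mismatch");
      riskScore += 35;
    }

    return { riskScore, flags };
  };
```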
3) Add an LLM analysis node for explanation, not final authority
The model should produce a concise explanation that helps investigators. Keep it bounded by policy so it cannot override hard rules like sanctions hits or identity mismatches.
```ts
import { ChatOpenAI } from "@langchain/openai";

// Temperature 0 keeps explanations consistent across reruns of the same case.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const analyzeNode = async (state: FraudState) => {
  // Only sanitized, decision-relevant fields reach the model: no names,
  // no full SSN, no contact details.
  const prompt = `
You are a lending fraud analyst.
Summarize why this application may be fraudulent using only these signals:
${JSON.stringify({
    riskScore: state.riskScore,
    flags: state.flags,
    state: state.application.state,
    requestedAmount: state.application.requestedAmount,
    incomeMonthly: state.application.incomeMonthly,
  })}
Return a short explanation suitable for an internal review queue.
`;
  const response = await llm.invoke(prompt);
  return { llmSummary: response.content.toString() };
};
```
4) Route to approve, reject, or manual review
Use StateGraph plus addConditionalEdges so routing stays explicit. That gives you a clean audit trail and avoids burying policy inside prompt text.
```ts
// Thresholds come from config so risk policy can change without a redeploy.
// The fallback values are illustrative defaults; tune them to your own data.
const thresholdReject = process.env.REJECT_THRESHOLD
  ? Number(process.env.REJECT_THRESHOLD)
  : 70;
const thresholdReview = process.env.REVIEW_THRESHOLD
  ? Number(process.env.REVIEW_THRESHOLD)
  : 30;

const routeDecision = (state: FraudState) => {
  if (state.riskScore >= thresholdReject) return "reject";
  if (state.riskScore >= thresholdReview) return "manual_review";
  return "approve";
};
```
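Finally, here is one way to assemble the graph under the assumptions above. The decision and audit nodes and all node names are illustrative; the audit node stands in for the audit logger from the architecture section and would write to an append-only store in production rather than the console.

```ts
import { START } from "@langchain/langgraph";

// Small decision nodes record the outcome in state instead of trusting
// free-form model output.
const approveNode = async () => ({ decision: "approve" as const });
const rejectNode = async () => ({ decision: "reject" as const });
const manualReviewNode = async () => ({ decision: "manual_review" as const });

// Minimal audit hook: capture inputs, signals, and the decision reason
// for compliance review and adverse action workflows.
const auditNode = async (state: FraudState) => {
  console.log(
    JSON.stringify({
      applicantId: state.application.applicantId,
      riskScore: state.riskScore,
      flags: state.flags,
      decision: state.decision,
      llmSummary: state.llmSummary,
    })
  );
  return {};
};

const app = new StateGraph(GraphState)
  .addNode("rules", rulesNode)
  .addNode("analyze", analyzeNode)
  .addNode("approve", approveNode)
  .addNode("reject", rejectNode)
  .addNode("manual_review", manualReviewNode)
  .addNode("audit", auditNode)
  .addEdge(START, "rules")
  .addEdge("rules", "analyze")
  // Routing is driven by thresholds in code, never by the model.
  .addConditionalEdges("analyze", routeDecision, {
    approve: "approve",
    reject: "reject",
    manual_review: "manual_review",
  })
  .addEdge("approve", "audit")
  .addEdge("reject", "audit")
  .addEdge("manual_review", "audit")
  .addEdge("audit", END)
  .compile();

// Example invocation with a clearly suspicious application.
const result = await app.invoke({
  application: {
    applicantId: "app-123",
    ssnLast4: "1234",
    email: "test@mailinator.com",
    deviceId: "emulator-77",
    incomeMonthly: 2000,
    requestedAmount: 60000,
    state: "CA",
  },
});
console.log(result.decision, result.flags, result.llmSummary);
```

Routing through explicit decision nodes, rather than ending at the router, means the final decision is recorded in state and every case passes through the audit step before the graph terminates.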