How to Build a Fraud Detection Agent Using LangChain in TypeScript for Lending
A fraud detection agent for lending takes a loan application, checks it against policy and risk signals, and returns a decision-ready assessment with reasons. It matters because lending fraud is usually not a single obvious signal; it’s a mix of identity mismatch, synthetic identity patterns, device/IP anomalies, document inconsistencies, and policy violations that need to be evaluated consistently and logged for audit.
Architecture
- Input normalizer
  - Converts raw application payloads into a stable internal schema.
  - Validates required fields like identity data, income, employment, device metadata, and consent flags.
- Fraud policy retriever
  - Pulls the latest lending fraud rules from a controlled knowledge base.
  - Keeps model behavior aligned with compliance updates without hardcoding policy text into prompts.
- LLM reasoning layer
  - Uses LangChain to classify risk patterns and explain why an application looks suspicious.
  - Produces structured output instead of free-form prose.
- Tool layer
  - Queries internal systems: bureau checks, KYC/AML status, device reputation, velocity checks, address verification.
  - Lets the agent ground its decision in real evidence.
- Decision engine
  - Combines model output with deterministic rules.
  - Returns `approve`, `review`, or `decline` with reason codes.
- Audit logger
  - Stores inputs, retrieved policy versions, tool results, and final decisions.
  - Supports compliance review, model governance, and dispute handling.
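To make the flow concrete, the components can be sketched as typed stages. This is a minimal, illustrative sketch; the names (`Stage`, `Evidence`, `decide`) and the rule logic are assumptions for illustration, not production code:

```typescript
// Illustrative sketch of the architecture as typed stages.
interface Evidence {
  kycStatus: string;   // from the tool layer
  deviceRisk: string;  // from device reputation lookup
}

interface Decision {
  riskLevel: "low" | "medium" | "high";
  action: "approve" | "review" | "decline";
  reasonCodes: string[];
}

// Each stage is a pure function, so the pipeline can be tested without LLM calls.
type Stage<I, O> = (input: I) => O;

// Deterministic decision-engine stub combining evidence with hard rules.
const decide: Stage<Evidence, Decision> = (ev) =>
  ev.kycStatus !== "verified" || ev.deviceRisk === "high"
    ? { riskLevel: "high", action: "review", reasonCodes: ["EVIDENCE_FLAG"] }
    : { riskLevel: "low", action: "approve", reasonCodes: ["CLEAN_PROFILE"] };

console.log(decide({ kycStatus: "verified", deviceRisk: "low" }).action); // "approve"
```

Keeping each stage a pure function means the decision engine and audit logger can be unit-tested independently of any model provider.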
Implementation
1) Define the application schema and decision contract
Start by forcing the agent to speak in structured JSON. In lending, you want stable outputs that downstream systems can consume and audit.
```typescript
import { z } from "zod";

export const LoanApplicationSchema = z.object({
  applicationId: z.string(),
  fullName: z.string(),
  nationalId: z.string(),
  dateOfBirth: z.string(),
  email: z.string().email(),
  phone: z.string(),
  address: z.string(),
  employerName: z.string().optional(),
  monthlyIncome: z.number().positive(),
  requestedAmount: z.number().positive(),
  deviceId: z.string().optional(),
  ipAddress: z.string().optional(),
});

export const FraudDecisionSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  action: z.enum(["approve", "review", "decline"]),
  reasonCodes: z.array(z.string()).min(1),
  summary: z.string(),
});
```
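Downstream services that cannot take a zod dependency can still share this contract. Here is a sketch of an equivalent plain-TypeScript guard; `isFraudDecision` is an illustrative helper I am assuming, not a LangChain or zod API:

```typescript
// Plain-TypeScript mirror of the decision contract, for services that
// receive the decision as untyped JSON.
type RiskLevel = "low" | "medium" | "high";
type Action = "approve" | "review" | "decline";

interface FraudDecision {
  riskLevel: RiskLevel;
  action: Action;
  reasonCodes: string[];
  summary: string;
}

// Runtime guard: narrows unknown JSON to the FraudDecision shape.
function isFraudDecision(x: any): x is FraudDecision {
  return (
    ["low", "medium", "high"].includes(x?.riskLevel) &&
    ["approve", "review", "decline"].includes(x?.action) &&
    Array.isArray(x?.reasonCodes) &&
    x.reasonCodes.length >= 1 &&
    typeof x?.summary === "string"
  );
}

console.log(
  isFraudDecision({ riskLevel: "high", action: "review", reasonCodes: ["X1"], summary: "ok" })
); // true
```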
2) Build the LangChain chain with structured output
Use `ChatOpenAI` plus `withStructuredOutput()` so the model returns a typed object. This is the right pattern when your result feeds underwriting or case management.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { LoanApplicationSchema, FraudDecisionSchema } from "./schemas";

// Derive the TypeScript type directly from the zod schema.
type LoanApplication = z.infer<typeof LoanApplicationSchema>;

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = PromptTemplate.fromTemplate(`
You are a lending fraud analyst.
Assess this application for fraud risk using only the provided data and policy context.

Application:
{application}

Policy Context:
{policyContext}

Return a structured decision with riskLevel, action, reasonCodes, summary.
`);

const fraudAgent = prompt.pipe(
  llm.withStructuredOutput(FraudDecisionSchema)
);

export async function assessFraud(applicationInput: unknown) {
  const application = LoanApplicationSchema.parse(applicationInput);

  const policyContext = `
- Reject applications with identity inconsistency across name/DOB/ID.
- Escalate if device or IP has prior fraud flags.
- Review if income is high relative to profile but employment cannot be verified.
- Decline if synthetic identity indicators are strong.
`;

  return fraudAgent.invoke({
    application: JSON.stringify(application),
    policyContext,
  });
}
```
3) Add tools for real evidence
In production lending workflows, the LLM should not invent facts. Use tools for bureau or KYC lookups and pass those results into the chain. Here's a simple pattern using `DynamicTool`.
```typescript
import { DynamicTool } from "@langchain/core/tools";

const kycLookupTool = new DynamicTool({
  name: "kyc_lookup",
  description: "Fetch KYC verification status for an applicant by national ID",
  func: async (nationalId: string) => {
    // Replace with your internal service call
    const record = await fetch(`https://internal-api.example.com/kyc/${nationalId}`);
    if (!record.ok) throw new Error("KYC lookup failed");
    return JSON.stringify(await record.json());
  },
});

const deviceRiskTool = new DynamicTool({
  name: "device_risk_lookup",
  description: "Fetch device reputation data by device ID or IP address",
  func: async (input: string) => {
    const res = await fetch(`https://internal-api.example.com/device-risk?query=${encodeURIComponent(input)}`);
    if (!res.ok) throw new Error("Device risk lookup failed");
    return JSON.stringify(await res.json());
  },
});
```
Then enrich the prompt input before calling the model:
```typescript
const policyContext = `
- Reject applications with identity inconsistency across name/DOB/ID.
- Escalate if device or IP has prior fraud flags.
- Review if income is high relative to profile but employment cannot be verified.
- Decline if synthetic identity indicators are strong.
`;

export async function assessFraudWithSignals(applicationInput: unknown) {
  const application = LoanApplicationSchema.parse(applicationInput);

  const [kycResult, deviceResult] = await Promise.all([
    kycLookupTool.invoke(application.nationalId),
    application.deviceId || application.ipAddress
      ? deviceRiskTool.invoke(application.deviceId ?? application.ipAddress!)
      : Promise.resolve(JSON.stringify({ status: "not_provided" })),
  ]);

  // Note: the prompt template must declare {kycResult} and {deviceResult}
  // placeholders alongside {application} and {policyContext}, or these
  // inputs will never reach the model.
  return fraudAgent.invoke({
    application: JSON.stringify(application),
    policyContext,
    kycResult,
    deviceResult,
  });
}
```
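The lookups above go over the network and can hang. A generic timeout wrapper (`withTimeout` is my own helper here, an assumption rather than a LangChain API) fails slow tool calls fast so one stuck dependency cannot stall the whole assessment:

```typescript
// Race the lookup against a timer; whichever settles first wins.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms),
    ),
  ]);
}

// Usage sketch: wrap each tool invocation before Promise.all, e.g.
// withTimeout(kycLookupTool.invoke(application.nationalId), 2000, "kyc_lookup")
withTimeout(Promise.resolve("ok"), 50, "demo").then((v) => console.log(v)); // prints "ok"
```

On timeout the rejection surfaces like any other tool failure, so the caller can route the application to manual review instead of blocking.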
4) Add deterministic guardrails before final decision
Do not let the LLM override hard business rules. In lending, some conditions must force manual review or decline regardless of narrative quality.
```typescript
export function applyHardRules(
  decision: { riskLevel: string; action: string; reasonCodes: string[]; summary: string },
): typeof decision {
  const forcedReviewReasons = ["KYC_UNVERIFIED", "DEVICE_HIGH_RISK", "ID_MISMATCH"];

  if (decision.reasonCodes.some((code) => forcedReviewReasons.includes(code))) {
    return {
      ...decision,
      riskLevel: "high",
      action: "review",
      reasonCodes: Array.from(new Set([...decision.reasonCodes, "HARD_RULE_TRIGGERED"])),
      summary:
        decision.summary + " Final action overridden to review by deterministic lending policy.",
    };
  }
  return decision;
}
```
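To make the override behavior concrete, here is a standalone usage sketch (the gate is reproduced so the snippet runs on its own): even when the model recommends approve, a hard-rule reason code forces manual review.

```typescript
type Decision = { riskLevel: string; action: string; reasonCodes: string[]; summary: string };

// Mirrors the applyHardRules gate above.
function applyHardRules(decision: Decision): Decision {
  const forcedReviewReasons = ["KYC_UNVERIFIED", "DEVICE_HIGH_RISK", "ID_MISMATCH"];
  if (decision.reasonCodes.some((code) => forcedReviewReasons.includes(code))) {
    return {
      ...decision,
      riskLevel: "high",
      action: "review",
      reasonCodes: Array.from(new Set([...decision.reasonCodes, "HARD_RULE_TRIGGERED"])),
      summary: decision.summary + " Overridden to review by deterministic lending policy.",
    };
  }
  return decision;
}

// The model said approve, but the device flag wins.
const gated = applyHardRules({
  riskLevel: "low",
  action: "approve",
  reasonCodes: ["DEVICE_HIGH_RISK"],
  summary: "Model saw low risk.",
});
console.log(gated.action); // "review"
```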
Production Considerations
- Keep data residency explicit
  - Route PII to approved regional infrastructure only.
  - If your lender operates across jurisdictions, pin model endpoints and vector stores to region-specific deployments.
- Log for auditability
  - Store input hashes, retrieved policy version IDs, tool outputs, prompt templates, and final decisions.
  - Regulators will ask why an applicant was flagged; you need traceable evidence chains.
- Separate recommendation from adverse action
  - The agent should recommend `review` or `decline`, but final adverse action may require additional rule checks and human approval depending on jurisdiction.
  - Keep explanation text compliant and tied to actual reason codes.
- Monitor drift on fraud patterns
  - Track false positives by segment, channel, geography, and product type.
  - Fraud adapts quickly; what works on personal loans may fail on BNPL or SME lending.
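As a sketch of the audit-logging advice above, here is one possible record shape. The field names and version strings are assumptions to adapt to your own governance stack; hashing the raw input lets you prove what was assessed without retaining PII in the log itself.

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record for a single assessment.
interface AuditRecord {
  applicationHash: string;             // sha256 of raw input, not the PII itself
  policyVersionId: string;             // which policy snapshot the agent saw
  modelVersion: string;                // which model produced the recommendation
  toolOutputs: Record<string, string>; // raw evidence the decision relied on
  decision: string;
  timestamp: string;
}

function buildAuditRecord(
  rawInput: unknown,
  decision: string,
  toolOutputs: Record<string, string> = {},
): AuditRecord {
  return {
    applicationHash: createHash("sha256").update(JSON.stringify(rawInput)).digest("hex"),
    policyVersionId: "fraud-policy-v12", // assumed versioning scheme
    modelVersion: "gpt-4o-mini",
    toolOutputs,
    decision,
    timestamp: new Date().toISOString(),
  };
}

console.log(buildAuditRecord({ applicationId: "A-1" }, "review").applicationHash.length); // 64
```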
Common Pitfalls
- Letting the LLM make final credit decisions
  - Fix this by making the model produce only a structured recommendation.
  - Put approval thresholds and mandatory declines in deterministic code outside the chain.
- Sending raw PII into prompts without controls
  - Mask unnecessary fields like full national IDs or exact addresses when they are not needed for reasoning.
  - Use field-level redaction before calling `invoke()`, especially when working under strict privacy or residency rules.
- No evidence trail for compliance reviews
  - If you only store the final answer, you cannot defend it later.
  - Persist tool outputs, policy snapshot IDs, timestamps, and model version so audit can reconstruct exactly how the agent reached its recommendation.
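A minimal sketch of the field-level redaction point above. The masking policy here (keep the last four digits) is an assumption; confirm what your privacy and residency rules actually allow before adopting it.

```typescript
// Mask all but the last four characters of a national ID.
function redactNationalId(id: string): string {
  return id.length > 4 ? "*".repeat(id.length - 4) + id.slice(-4) : "****";
}

// Redact before the payload ever reaches a prompt or leaves the region.
function redactForPrompt<T extends { nationalId: string }>(app: T): T {
  return { ...app, nationalId: redactNationalId(app.nationalId) };
}

console.log(redactForPrompt({ nationalId: "123456789", fullName: "Jane Doe" }).nationalId);
// "*****6789"
```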
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.