How to Build a Fraud Detection Agent for Payments Using LangChain in TypeScript
A fraud detection agent for payments takes transaction data, customer context, and policy rules, then returns a risk decision with an explanation. It matters because payment fraud is a latency-sensitive problem: you need to catch bad activity before authorization or fulfillment, while keeping false positives low enough that real customers do not get blocked.
Architecture
- Transaction ingest layer
  - Receives payment events from your checkout, PSP webhook, or internal ledger.
  - Normalizes fields like amount, currency, merchant category, device fingerprint, BIN, country, and velocity counters.
- Risk context retriever
  - Pulls recent account activity, chargeback history, device reputation, and policy snippets.
  - Usually backed by a database or vector store for historical cases and playbooks.
- LLM reasoning layer
  - Uses LangChain to classify the transaction into approve, review, or block.
  - Produces a short rationale that analysts can audit later.
- Policy and guardrail layer
  - Enforces hard rules outside the model: sanctions hits, blacklisted cards, impossible geolocation jumps.
  - Keeps the agent from overriding compliance controls.
- Decision logger
  - Stores inputs, outputs, model version, prompt version, and rule triggers.
  - Needed for auditability in regulated payments environments.
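The layers above can be sketched end to end as one small pipeline. Everything here is illustrative: the function names, the stub risk context, and the heuristic standing in for the LLM call are assumptions for the sketch, not LangChain APIs.

```typescript
// Minimal sketch of the five-layer pipeline. All names and stub
// implementations are illustrative assumptions, not LangChain APIs.
type Decision = { action: "approve" | "review" | "block"; reasons: string[] };

interface RawPaymentEvent {
  amountMinor: number;
  currency: string;
  cardCountry: string;
  ipCountry: string;
}

// 1. Ingest: normalize the raw event into structured fields.
function normalize(e: RawPaymentEvent) {
  return {
    amount: e.amountMinor / 100,
    currency: e.currency.toUpperCase(),
    crossBorder: e.cardCountry !== e.ipCountry,
  };
}

// 2. Retrieve risk context (stubbed; in practice a DB or vector store lookup).
function riskContext(customerId: string) {
  return { recentChargebacks: 0, deviceSeenBefore: true };
}

// 4. Guardrails run before any model call.
function hardRules(tx: ReturnType<typeof normalize>): Decision | null {
  if (tx.amount >= 10_000 && tx.crossBorder) {
    return { action: "block", reasons: ["large cross-border payment"] };
  }
  return null;
}

// 3 + 5. Reason about ambiguous cases (LLM stubbed as a heuristic) and decide.
function decide(e: RawPaymentEvent, customerId: string): Decision {
  const tx = normalize(e);
  const blocked = hardRules(tx);
  if (blocked) return blocked;
  const ctx = riskContext(customerId);
  return ctx.recentChargebacks > 0
    ? { action: "review", reasons: ["prior chargebacks"] }
    : { action: "approve", reasons: ["no elevated signals"] };
}
```

The point of the sketch is the ordering: guardrails fire before the reasoning layer ever runs, which is the same ordering the implementation below enforces.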
Implementation
1. Define the transaction schema and the decision output
Keep the model input structured. Payment systems fail when you pass raw prose into the chain and lose important fields like country mismatch or velocity signals.
```typescript
import { z } from "zod";

export const TransactionSchema = z.object({
  transactionId: z.string(),
  customerId: z.string(),
  amount: z.number().positive(),
  currency: z.string().length(3),
  merchantCategory: z.string(),
  cardCountry: z.string().length(2),
  ipCountry: z.string().length(2),
  deviceId: z.string(),
  velocity24h: z.number().int().nonnegative(),
  chargebackRate90d: z.number().min(0).max(1),
});

export type Transaction = z.infer<typeof TransactionSchema>;

export const FraudDecisionSchema = z.object({
  action: z.enum(["approve", "review", "block"]),
  riskScore: z.number().min(0).max(100),
  reasons: z.array(z.string()).min(1),
});
```
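To see what the schema is actually enforcing, here is a hand-rolled stand-in for the same constraints in plain TypeScript. This is a sketch to make the rules explicit, not a replacement for the zod schema above.

```typescript
// Hand-rolled runtime check mirroring TransactionSchema's key constraints.
// In real code keep the zod schema; this just spells out what it enforces.
function validateTransaction(input: any): string[] {
  const errors: string[] = [];
  if (typeof input.amount !== "number" || input.amount <= 0)
    errors.push("amount must be positive");
  if (typeof input.currency !== "string" || input.currency.length !== 3)
    errors.push("currency must be a 3-letter code");
  if (typeof input.cardCountry !== "string" || input.cardCountry.length !== 2)
    errors.push("cardCountry must be a 2-letter code");
  if (!Number.isInteger(input.velocity24h) || input.velocity24h < 0)
    errors.push("velocity24h must be a non-negative integer");
  if (
    typeof input.chargebackRate90d !== "number" ||
    input.chargebackRate90d < 0 ||
    input.chargebackRate90d > 1
  )
    errors.push("chargebackRate90d must be in [0, 1]");
  return errors;
}
```

Rejecting a malformed event at the boundary is what keeps signals like velocity and country mismatch from silently disappearing downstream.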
2. Build a LangChain chain with a structured output parser
Use ChatOpenAI plus StructuredOutputParser so the agent returns machine-readable JSON. This is the right pattern for payments because downstream systems need a stable, predictable response shape.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TransactionSchema, FraudDecisionSchema } from "./schemas";

const parser = StructuredOutputParser.fromZodSchema(FraudDecisionSchema);

// The format instructions contain literal { } braces, which ChatPromptTemplate
// would try to parse as variables. Pass them as a template variable instead of
// interpolating them into the template string.
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a fraud detection agent for card-not-present payments.
Use only the provided transaction facts and risk context.
Return a decision that is safe for regulated payments environments.
Never recommend approval if there is a sanctions or compliance hit.

{format_instructions}`,
  ],
  [
    "human",
    `Transaction:
{transaction}

Risk context:
{riskContext}`,
  ],
]);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export const fraudChain = prompt.pipe(model).pipe(parser);

export async function scoreTransaction(input: unknown) {
  const tx = TransactionSchema.parse(input);
  const riskContext = JSON.stringify({
    recentFailures: tx.velocity24h > 10,
    highChargebackRisk: tx.chargebackRate90d > 0.08,
    crossBorderMismatch: tx.cardCountry !== tx.ipCountry,
    policyFlags: [],
  });
  return fraudChain.invoke({
    transaction: JSON.stringify(tx),
    riskContext,
    format_instructions: parser.getFormatInstructions(),
  });
}
```
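The riskContext derivation inside scoreTransaction is deterministic feature engineering, and it is worth unit-testing on its own, separate from any model call. A standalone version, using the same thresholds as above:

```typescript
// Derives the risk-context signals passed to the LLM prompt.
// Thresholds (10 attempts, 8% chargeback rate) mirror scoreTransaction above.
interface TxSignals {
  velocity24h: number;
  chargebackRate90d: number;
  cardCountry: string;
  ipCountry: string;
}

function deriveRiskContext(tx: TxSignals) {
  return {
    recentFailures: tx.velocity24h > 10,
    highChargebackRisk: tx.chargebackRate90d > 0.08,
    crossBorderMismatch: tx.cardCountry !== tx.ipCountry,
    policyFlags: [] as string[],
  };
}
```

Keeping this logic in a pure function means threshold changes can be reviewed and tested like any other code change, independent of prompt or model versions.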
3. Add hard rules before the LLM
Do not ask an LLM to decide on sanctions or obvious fraud patterns. Put those checks in code first, then let LangChain handle ambiguous cases that need contextual reasoning.
```typescript
import { TransactionSchema, type Transaction } from "./schemas";

type HardRuleResult =
  | { matched: true; action: "block"; reason: string }
  | { matched: false };

function applyHardRules(tx: Transaction): HardRuleResult {
  if (tx.amount >= 10000 && tx.cardCountry !== tx.ipCountry) {
    return { matched: true, action: "block", reason: "Large cross-border mismatch" };
  }
  if (tx.velocity24h >= 25) {
    return { matched: true, action: "block", reason: "Excessive velocity" };
  }
  return { matched: false };
}
```
Then wire the rules and the chain together in one evaluator. Rule matches short-circuit before any model call; only ambiguous transactions fall through to the LLM:
```typescript
export async function evaluatePayment(input: unknown) {
  const tx = TransactionSchema.parse(input);
  const rule = applyHardRules(tx);
  if (rule.matched) {
    return {
      action: rule.action,
      riskScore: 95,
      reasons: [rule.reason],
      source: "rules",
    };
  }
  const result = await scoreTransaction(tx);
  return {
    ...result,
    source: "llm",
  };
}
```
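One case the evaluator does not yet handle is LLM failure: if the model call throws or hangs, a payment decision still has to come back. A common pattern is to fail closed to manual review under a latency budget. A sketch, where the 2-second default and the wrapper name are assumptions:

```typescript
// Fail-closed wrapper: if the LLM call errors or exceeds its latency
// budget, degrade to a manual-review decision instead of approving blind.
interface ScoredDecision {
  action: "approve" | "review" | "block";
  riskScore: number;
  reasons: string[];
  source: string;
}

async function scoreWithFallback(
  scoreTx: () => Promise<ScoredDecision>,
  timeoutMs = 2000,
): Promise<ScoredDecision> {
  const fallback: ScoredDecision = {
    action: "review",
    riskScore: 50,
    reasons: ["LLM scoring unavailable; routed to manual review"],
    source: "fallback",
  };
  // Resolves with the fallback if the model does not answer in time.
  const timer = new Promise<ScoredDecision>((resolve) =>
    setTimeout(() => resolve(fallback), timeoutMs),
  );
  try {
    return await Promise.race([scoreTx(), timer]);
  } catch {
    return fallback;
  }
}
```

Failing to "review" rather than "approve" keeps an outage from becoming a fraud window; failing to "block" would instead turn an outage into a checkout outage.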
Step-by-step runtime pattern
In production you typically wire this into your payment service as:
- Normalize the payment event.
- Run deterministic policy checks.
- Call the LangChain fraud chain only if the rules have not already decided.
- Persist the decision with full trace metadata.
That gives you lower latency and cleaner audit trails than sending every request through the model.
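The last step, persisting with full trace metadata, is easy to underspecify. A minimal shape for the audit record, as a sketch whose field names are assumptions aligned with the decision-logger layer described earlier:

```typescript
// Builds the audit record from the "Decision logger" layer: inputs,
// outputs, versions, and rule triggers captured in one row.
interface DecisionLogEntry {
  transactionId: string;
  decidedAt: string; // ISO-8601 timestamp
  action: "approve" | "review" | "block";
  riskScore: number;
  reasons: string[];
  source: "rules" | "llm";
  modelName: string | null; // null when rules decided
  promptVersion: string | null;
  ruleTriggers: string[];
}

function buildLogEntry(
  transactionId: string,
  decision: {
    action: "approve" | "review" | "block";
    riskScore: number;
    reasons: string[];
    source: "rules" | "llm";
  },
  meta: { modelName?: string; promptVersion?: string },
): DecisionLogEntry {
  return {
    transactionId,
    decidedAt: new Date().toISOString(),
    action: decision.action,
    riskScore: decision.riskScore,
    reasons: decision.reasons,
    source: decision.source,
    // Model metadata only applies when the LLM produced the decision.
    modelName: decision.source === "llm" ? meta.modelName ?? null : null,
    promptVersion: decision.source === "llm" ? meta.promptVersion ?? null : null,
    ruleTriggers: decision.source === "rules" ? decision.reasons : [],
  };
}
```

Recording the source alongside the versions is what lets you answer, months later, whether a block came from a deterministic rule or a particular prompt/model pair.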
Production Considerations
- Log every decision with trace metadata. Store transactionId, prompt version, model name, returned action, risk score, and hard-rule matches. In disputes or audits, you need to explain why a payment was blocked without reconstructing state from memory.
- Separate compliance controls from model output. Sanctions screening, KYC flags, and residency restrictions should live outside the LLM path. The agent can suggest review, but it should never be able to bypass mandatory controls.
- Control data residency and retention. Payment data often contains PAN-adjacent identifiers and regional personal data. Keep logs in-region where required by PCI DSS scope boundaries and local privacy laws such as GDPR or country-specific banking regulations.
- Monitor false positives by segment. Track approval rate and manual review rate by merchant category, geography, issuer BIN range, and device type. A single global threshold hides bad behavior in one region while over-blocking another.
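Segment-level monitoring can be computed straight from the decision log. A sketch that groups logged decisions by segment and reports the non-approval rate; the record shape is an assumption:

```typescript
// Computes manual-review/block rate per segment so over-blocking in one
// region or merchant category is visible instead of averaged away.
interface LoggedDecision {
  segment: string; // e.g. merchant category or country
  action: "approve" | "review" | "block";
}

function reviewRateBySegment(rows: LoggedDecision[]): Record<string, number> {
  const totals: Record<string, { flagged: number; count: number }> = {};
  for (const row of rows) {
    const bucket = (totals[row.segment] ??= { flagged: 0, count: 0 });
    bucket.count += 1;
    if (row.action !== "approve") bucket.flagged += 1;
  }
  const rates: Record<string, number> = {};
  for (const [segment, { flagged, count }] of Object.entries(totals)) {
    rates[segment] = flagged / count;
  }
  return rates;
}
```

Alerting on per-segment rates, rather than one global number, is what surfaces a threshold that quietly over-blocks a single country or category.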
Common Pitfalls
- Using free-form prompts instead of structured outputs. If you let the model answer in plain text, downstream code will eventually break on parsing edge cases. Use StructuredOutputParser.fromZodSchema() so your response shape stays stable.
- Putting all fraud logic inside the LLM. This is how teams end up approving obvious fraud because the prompt was vague. Keep hard rules in code and use LangChain for contextual judgment only.
- Ignoring auditability. If you cannot show why a transaction was blocked three months later, your ops team will hate this system. Store inputs, outputs, timestamps, policy versions, and model versions together.
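Even with a structured output parser, the raw model text can occasionally fail to parse, and the first pitfall above applies to that failure path too. Treating a parse failure as "review" instead of crashing keeps the failure mode safe. A dependency-free sketch of that guard:

```typescript
// Parses the model's JSON reply defensively: malformed output or an
// unexpected action never crashes the payment path; it escalates instead.
type Action = "approve" | "review" | "block";

function parseDecisionSafely(raw: string): { action: Action; reasons: string[] } {
  try {
    const parsed = JSON.parse(raw);
    const actions: Action[] = ["approve", "review", "block"];
    if (actions.includes(parsed.action) && Array.isArray(parsed.reasons)) {
      return { action: parsed.action, reasons: parsed.reasons };
    }
  } catch {
    // fall through to the safe default below
  }
  return { action: "review", reasons: ["unparseable model output"] };
}
```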
A solid fraud detection agent for payments is not just an LLM wrapped around a prompt. It is a controlled decision system with structured inputs, deterministic guardrails, traceable outputs, and strict separation between policy enforcement and probabilistic reasoning.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.