How to Build a Fraud Detection Agent Using LangChain in TypeScript for Fintech
A fraud detection agent in fintech watches transactions, customer context, and behavioral signals, then decides whether to approve, hold, escalate, or request more evidence. The point is not just catching fraud; it is doing it fast enough to reduce losses without creating a pile of false positives that kills conversion and support capacity.
Architecture
- Event ingestion layer
  - Receives transaction events from Kafka, Kinesis, webhook handlers, or batch jobs.
  - Normalizes raw payloads into a consistent schema before the agent sees them.
- Risk context retriever
  - Pulls customer history, device fingerprints, velocity stats, chargeback history, and KYC status.
  - Keeps the agent grounded in internal data instead of guessing from the transaction alone.
- LangChain reasoning layer
  - Uses ChatOpenAI plus structured output parsing to classify risk and explain why.
  - Produces a machine-readable decision object for downstream systems.
- Policy and compliance guardrails
  - Enforces rules like “never auto-block above a certain amount without human review.”
  - Keeps decisions auditable for model risk management, AML review, and regulator questions.
- Decision router
  - Converts the agent output into actions: approve, step-up auth, manual review, or block.
  - Integrates with your case management system and payment orchestration layer.
- Audit store
  - Persists input features, model output, policy version, prompt version, and final action.
  - Needed for post-incident review and regulatory traceability.
Implementation
1) Define the decision schema
Keep the model output strict. For fintech, free-form text is not enough because you need deterministic downstream behavior and an audit trail.
```typescript
import { z } from "zod";

export const FraudDecisionSchema = z.object({
  riskScore: z.number().min(0).max(100),
  decision: z.enum(["approve", "review", "block"]),
  reasons: z.array(z.string()).min(1),
  requiresHumanReview: z.boolean(),
});

export type FraudDecision = z.infer<typeof FraudDecisionSchema>;
```
This schema becomes your contract between the LLM and your payment workflow. If the model drifts outside it, you fail closed.
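To make the fail-closed behavior concrete, here is a minimal sketch of a validation boundary. It uses a hand-rolled type guard instead of Zod so the snippet is dependency-free; `parseOrFailClosed` and its fallback values are illustrative names, not part of the chain below:

```typescript
// Mirrors the FraudDecisionSchema contract from above.
type Decision = "approve" | "review" | "block";

interface FraudDecision {
  riskScore: number;
  decision: Decision;
  reasons: string[];
  requiresHumanReview: boolean;
}

// Type guard: checks every field of the contract against unknown input.
function isFraudDecision(value: unknown): value is FraudDecision {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.riskScore === "number" &&
    v.riskScore >= 0 &&
    v.riskScore <= 100 &&
    (v.decision === "approve" || v.decision === "review" || v.decision === "block") &&
    Array.isArray(v.reasons) &&
    v.reasons.length >= 1 &&
    v.reasons.every((r) => typeof r === "string") &&
    typeof v.requiresHumanReview === "boolean"
  );
}

// Fail closed: anything outside the contract is routed to manual review,
// never silently approved.
function parseOrFailClosed(raw: unknown): FraudDecision {
  if (isFraudDecision(raw)) return raw;
  return {
    riskScore: 100,
    decision: "review",
    reasons: ["model output failed schema validation"],
    requiresHumanReview: true,
  };
}
```

The key design choice: the fallback is `review`, not `approve` and not a crash, so a malformed model response degrades to a human decision instead of an automated one.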
2) Build the LangChain agent chain
Use ChatOpenAI with structured output so you get a typed result instead of parsing raw prose. This is the cleanest pattern for production fraud triage.
```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { RunnableLambda } from "@langchain/core/runnables";
import { FraudDecisionSchema } from "./fraud-schema";

type TransactionInput = {
  transactionId: string;
  userId: string;
  amount: number;
  currency: string;
  country: string;
  deviceId: string;
  ipAddress: string;
  accountAgeDays: number;
  velocityLastHour: number;
  chargebacksLast90Days: number;
};

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const fraudPrompt = `
You are a fraud analyst for a fintech company.
Classify the transaction using internal risk signals only.
Return a strict JSON object matching:
{ riskScore: number (0-100), decision: "approve" | "review" | "block", reasons: string[], requiresHumanReview: boolean }
Rules:
- Block if clear fraud indicators exist.
- Review if signals are mixed or policy thresholds are hit.
- Approve only if risk is low.
- Never mention unsupported facts.
`;

const fraudAgent = RunnableLambda.from(async (input: TransactionInput) => {
  const result = await model.withStructuredOutput(FraudDecisionSchema).invoke([
    { role: "system", content: fraudPrompt },
    { role: "user", content: JSON.stringify(input) },
  ]);
  return result;
});

async function main() {
  const tx: TransactionInput = {
    transactionId: "tx_123",
    userId: "user_456",
    amount: 2500,
    currency: "USD",
    country: "NG",
    deviceId: "device_abc",
    ipAddress: "203.0.113.10",
    accountAgeDays: 2,
    velocityLastHour: 7,
    chargebacksLast90Days: 1,
  };

  const decision = await fraudAgent.invoke(tx);
  console.log(decision);
}

main().catch(console.error);
```
The important part here is `withStructuredOutput(FraudDecisionSchema)`. That gives you typed output with validation instead of hoping the model follows formatting instructions.
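Validation covers malformed output, but the model call itself can also error out or hang. Here is a hedged sketch of a fail-closed wrapper with a latency budget; `classifyOrFailClosed` and `FAIL_CLOSED` are hypothetical names, `classify` stands in for `fraudAgent.invoke`, and the 2-second default is an assumption you should tune to your own SLO:

```typescript
type Decision = "approve" | "review" | "block";

interface FraudDecision {
  riskScore: number;
  decision: Decision;
  reasons: string[];
  requiresHumanReview: boolean;
}

// The decision we fall back to on any model failure: manual review.
const FAIL_CLOSED: FraudDecision = {
  riskScore: 100,
  decision: "review",
  reasons: ["classifier unavailable or timed out"],
  requiresHumanReview: true,
};

// Race the classifier against a deadline; errors and timeouts both
// resolve to FAIL_CLOSED so the payment flow never blocks on the model.
async function classifyOrFailClosed<T>(
  classify: (input: T) => Promise<FraudDecision>,
  input: T,
  timeoutMs = 2000,
): Promise<FraudDecision> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<FraudDecision>((resolve) => {
    timer = setTimeout(() => resolve(FAIL_CLOSED), timeoutMs);
  });
  try {
    return await Promise.race([classify(input), deadline]);
  } catch {
    return FAIL_CLOSED; // model error -> human review, never silent approve
  } finally {
    if (timer) clearTimeout(timer);
  }
}
```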
3) Add policy checks before execution
The LLM should not be your only control point. Put deterministic rules in front of actioning so compliance can sign off on behavior.
```typescript
type FinalAction = FraudDecision["decision"] | "manual_override";

function applyPolicy(input: TransactionInput, decision: FraudDecision): FinalAction {
  if (input.amount >= 10000) return "manual_override";
  if (input.country !== "US" && input.accountAgeDays < 7 && decision.decision === "approve") {
    return "manual_override";
  }
  if (decision.requiresHumanReview) return "review";
  return decision.decision;
}
```
This is where fintech differs from generic AI apps. You want hard gates for high-value transactions, cross-border flows, sanctions-sensitive geographies, and new accounts.
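The decision router from the architecture section can then be a plain exhaustive switch over `FinalAction`. The queue names here (`fraud-review`, `blocked-transactions`, and so on) are illustrative placeholders for your case management integration, not fixed identifiers:

```typescript
type FinalAction = "approve" | "review" | "block" | "manual_override";

interface RoutedOutcome {
  transactionId: string;
  action: FinalAction;
  queue: string | null; // null means the payment flow simply continues
}

// Map the final action onto a downstream destination. Because the switch
// is exhaustive over the union, adding a new action is a compile error
// until it is routed somewhere.
function routeDecision(transactionId: string, action: FinalAction): RoutedOutcome {
  switch (action) {
    case "approve":
      return { transactionId, action, queue: null }; // continue orchestration
    case "review":
      return { transactionId, action, queue: "fraud-review" }; // analyst case
    case "manual_override":
      return { transactionId, action, queue: "fraud-review-priority" }; // hard gate hit
    case "block":
      return { transactionId, action, queue: "blocked-transactions" }; // decline + record
  }
}
```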
4) Persist an audit record
Every decision needs a traceable record with versioned inputs and outputs. In regulated environments, “the model said so” is not an acceptable explanation.
```typescript
type AuditRecord = {
  transactionId: string;
  promptVersion: string;
  modelName: string;
  input: TransactionInput;
  decision: FraudDecision;
  finalAction: FinalAction;
  createdAt: string;
};

async function saveAudit(record: AuditRecord) {
  // Stub: replace with a write to an append-only store in production.
  console.log("AUDIT", JSON.stringify(record));
}
```
Store this in an immutable log or append-only table. Include policy version and prompt version so you can reproduce historical decisions during investigations.
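One way to make that log tamper-evident is to hash-chain entries, so each record commits to everything before it. This scheme is a sketch under stated assumptions, not a requirement of any specific regulation; `appendAudit` and `verifyChain` are hypothetical helpers that take the JSON-serialized `AuditRecord` as their payload:

```typescript
import { createHash } from "node:crypto";

interface ChainedAudit {
  payload: string;  // JSON-serialized audit record
  prevHash: string; // hash of the previous entry ("" for the first)
  hash: string;     // sha256(prevHash + payload)
}

// Append a record whose hash covers the previous entry's hash, so
// rewriting any historical entry breaks every hash after it.
function appendAudit(chain: ChainedAudit[], payload: string): ChainedAudit[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  return [...chain, { payload, prevHash, hash }];
}

// Recompute every link; any edited payload or broken link fails.
function verifyChain(chain: ChainedAudit[]): boolean {
  return chain.every((entry, i) => {
    const prevHash = i === 0 ? "" : chain[i - 1].hash;
    const expected = createHash("sha256")
      .update(prevHash + entry.payload)
      .digest("hex");
    return entry.prevHash === prevHash && entry.hash === expected;
  });
}
```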
Production Considerations
- Deploy in-region
  - Keep inference and logs inside approved data residency boundaries.
  - For EU customers or bank partners, make sure prompts and outputs do not leave the region unless contracts explicitly allow it.
- Monitor precision drift
  - Track false positives, false negatives, manual review rate, chargeback rate, and approval latency.
  - A fraud agent that blocks too much can cost more revenue than it saves in losses.
- Add guardrails around PII
  - Redact sensitive fields before sending them to the model when possible.
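A minimal redaction pass might mask direct identifiers while keeping coarse signal, for example truncating the IP to its network prefix. The field choices here are assumptions for illustration; adapt them to your own data classification, and `TransactionInput` is trimmed to the relevant fields:

```typescript
// Subset of the transaction payload relevant to redaction.
type TransactionInput = {
  transactionId: string;
  userId: string;
  amount: number;
  ipAddress: string;
  deviceId: string;
};

// Keep the network prefix (useful geolocation/velocity signal) and drop
// the identifying host octet.
function maskIp(ip: string): string {
  const parts = ip.split(".");
  return parts.length === 4 ? `${parts[0]}.${parts[1]}.${parts[2]}.x` : "redacted";
}

// Produce the copy of the transaction that is allowed to reach the model.
// The unredacted original stays inside your own systems and audit store.
function redactForModel(tx: TransactionInput): TransactionInput {
  return {
    ...tx,
    ipAddress: maskIp(tx.ipAddress),
    deviceId: "device_redacted",
  };
}
```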
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.