How to Build a Fraud Detection Agent for Payments Using LangGraph in TypeScript

By Cyprian Aarons · Updated 2026-04-21

Tags: fraud-detection, langgraph, typescript, payments

A fraud detection agent for payments scores incoming transactions, checks them against policy and risk signals, and decides whether to approve, step-up authenticate, hold for review, or decline. In payments, that matters because the wrong decision costs money twice: first in fraud loss, then in false declines that kill conversion and customer trust.

Architecture

  • Transaction intake
    • Accepts a normalized payment event: amount, currency, merchant, card/customer IDs, device metadata, IP, velocity counters.
  • Risk enrichment
    • Pulls external and internal signals: account age, historical chargebacks, BIN country mismatch, device reputation, velocity rules.
  • LangGraph decision flow
    • Uses a StateGraph to route between approve, step-up auth, manual review, and decline paths.
  • Policy engine
    • Applies deterministic payment rules before or after model scoring.
    • Example: sanctions hit, blacklisted card fingerprint, or region-based restrictions.
  • Audit trail
    • Stores every input signal, intermediate score, and final action for compliance and dispute handling.
  • Human review handoff
    • Escalates borderline cases into a queue with enough context for an analyst to make a decision fast.

Implementation

1) Define the state and risk inputs

Keep the state explicit. For payments you want traceability more than clever abstractions.

import { StateGraph, START, END } from "@langchain/langgraph";

type PaymentEvent = {
  transactionId: string;
  amount: number;
  currency: string;
  merchantId: string;
  customerId: string;
  cardCountry?: string;
  ipCountry?: string;
  deviceFingerprint?: string;
};

type RiskSignal = {
  velocityScore: number;
  geoMismatch: boolean;
  chargebackRate: number;
};

type FraudState = {
  event: PaymentEvent;
  signals?: RiskSignal;
  fraudScore?: number;
  decision?: "approve" | "step_up" | "review" | "decline";
  reason?: string[];
};

const initialState: FraudState = {
  event: {
    transactionId: "txn_123",
    amount: 249.99,
    currency: "USD",
    merchantId: "m_001",
    customerId: "c_1001",
    cardCountry: "US",
    ipCountry: "NG",
    deviceFingerprint: "fp_abc",
  },
};

2) Add enrichment and scoring nodes

Use plain async functions as nodes. That keeps the graph easy to test and easy to swap with real services.

const enrichRisk = async (state: FraudState): Promise<Partial<FraudState>> => {
  const { event } = state;

  // Replace with real calls to your risk services / feature store
  const signals: RiskSignal = {
    velocityScore: event.amount > 200 ? 0.7 : 0.2,
    geoMismatch:
      Boolean(event.cardCountry) &&
      Boolean(event.ipCountry) &&
      event.cardCountry !== event.ipCountry,
    chargebackRate: event.merchantId === "m_001" ? 0.03 : 0.01,
  };

  return { signals };
};

const scoreFraud = async (state: FraudState): Promise<Partial<FraudState>> => {
  // enrichRisk always runs before this node, so signals are present here.
  const s = state.signals!;
  let score = s.velocityScore * 40 + s.chargebackRate * 100;

  if (s.geoMismatch) score += 35;

  const reasons: string[] = [];
  if (s.velocityScore > 0.5) reasons.push("high_velocity");
  if (s.geoMismatch) reasons.push("geo_mismatch");
  if (s.chargebackRate > 0.02) reasons.push("merchant_chargeback_risk");

  return {
    fraudScore: Math.min(100, Math.round(score)),
    reason: reasons,
  };
};

3) Route decisions with StateGraph

This is the core pattern. Score first, then route based on thresholds that match your payment policy.

// "decide" is a pass-through node: the actual routing happens in the
// conditional edge attached to it below.
const decide = async (_state: FraudState): Promise<Partial<FraudState>> => {
  return {};
};

const routeByRisk = (state: FraudState) => {
  const score = state.fraudScore ?? 0;

  if (score >= 80) return "decline";
  if (score >= 70) return "review";
  if (score >= 40) return "step_up";
  return "approve";
};

const approve = async (): Promise<Partial<FraudState>> => ({
  decision: "approve",
  reason: ["below_risk_threshold"],
});

const stepUpAuth = async (): Promise<Partial<FraudState>> => ({
  decision: "step_up",
  reason: ["additional_verification_required"],
});

const manualReview = async (): Promise<Partial<FraudState>> => ({
  decision: "review",
  reason: ["borderline_risk_sent_to_analyst"],
});

const decline = async (): Promise<Partial<FraudState>> => ({
  decision: "decline",
  reason: ["high_fraud_risk"],
});

// StateGraph needs a channel spec for the state; passing null gives each
// field a simple last-value-wins channel.
const graph = new StateGraph<FraudState>({
  channels: {
    event: null,
    signals: null,
    fraudScore: null,
    decision: null,
    reason: null,
  },
})
  .addNode("enrichRisk", enrichRisk)
  .addNode("scoreFraud", scoreFraud)
  .addNode("decide", decide)
  .addNode("approve", approve)
  .addNode("stepUpAuth", stepUpAuth)
  .addNode("manualReview", manualReview)
  .addNode("decline", decline)
  .addEdge(START, "enrichRisk")
  .addEdge("enrichRisk", "scoreFraud")
  .addEdge("scoreFraud", "decide")
  .addConditionalEdges("decide", routeByRisk, {
    approve: "approve",
    step_up: "stepUpAuth",
    review: "manualReview",
    decline: "decline",
  })
  .addEdge("approve", END)
  .addEdge("stepUpAuth", END)
  .addEdge("manualReview", END)
  .addEdge("decline", END);

const app = graph.compile();

const result = await app.invoke(initialState);
console.log(result.decision, result.fraudScore, result.reason);

4) Make it auditable

For payments you need to explain every outcome later. Persist the final state plus the exact version of rules used.

  • Store transactionId, fraudScore, decision, reason, timestamps.
  • Store model/rule version separately so analysts can reproduce outcomes.
  • Keep raw PII out of logs; tokenize customer identifiers before persistence.
  • If you operate across regions, keep EU payment data in-region for residency requirements.
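The bullet points above can be sketched as a small record builder. The `AuditRecord` shape and the `tokenize` helper are illustrative assumptions, not a standard schema; in production the tokenizer would call your vault or tokenization service rather than base64-encoding locally.

```typescript
// Illustrative audit record; field names are assumptions, not a standard.
type AuditRecord = {
  transactionId: string;
  fraudScore: number;
  decision: string;
  reasons: string[];
  ruleVersion: string;   // stored separately so analysts can reproduce outcomes
  customerToken: string; // tokenized identifier, never the raw customer ID
  decidedAt: string;
};

// Placeholder tokenizer: swap in your real tokenization/vault service.
const tokenize = (raw: string): string =>
  "tok_" + Buffer.from(raw).toString("base64url");

const buildAuditRecord = (
  state: {
    event: { transactionId: string; customerId: string };
    fraudScore?: number;
    decision?: string;
    reason?: string[];
  },
  ruleVersion: string
): AuditRecord => ({
  transactionId: state.event.transactionId,
  fraudScore: state.fraudScore ?? 0,
  decision: state.decision ?? "unknown",
  reasons: state.reason ?? [],
  ruleVersion,
  customerToken: tokenize(state.event.customerId),
  decidedAt: new Date().toISOString(),
});
```

Persist this record alongside the transaction; because the raw customer ID never enters the record, the audit store can sit outside your strictest PII boundary.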

Production Considerations

  • Deploy the graph as a stateless service

    Keep the decision engine stateless and push state to your database or queue. That makes retries safe when payment processors time out.
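    One way to make retries safe is to key decisions by `transactionId`. This is a minimal sketch, assuming an in-memory `Map` as a stand-in for your database or cache; `decideOnce` is a hypothetical helper, not part of LangGraph.

```typescript
// Idempotency sketch: cache the decision per transactionId so a processor
// retry returns the original outcome instead of re-running the graph.
const decisionStore = new Map<string, string>();

const decideOnce = async (
  transactionId: string,
  run: () => Promise<string> // e.g. () => app.invoke(...).then(r => r.decision!)
): Promise<string> => {
  const cached = decisionStore.get(transactionId);
  if (cached !== undefined) return cached;

  const decision = await run();
  decisionStore.set(transactionId, decision);
  return decision;
};
```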

  • Add deterministic guardrails before any ML/LLM step

    For sanctions hits, blocked cards, or impossible geographies, short-circuit immediately. In payments you do not want a probabilistic model overriding hard compliance rules.
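    A minimal pre-check might look like the following sketch. The deny lists and country codes here are placeholders; real sanctions and block lists come from your compliance systems, and this check should run before any scoring node.

```typescript
type HardBlockResult = { blocked: boolean; reasons: string[] };

// Placeholder deny lists; in production these come from compliance feeds.
const SANCTIONED_COUNTRIES = new Set(["XX"]);
const BLOCKED_FINGERPRINTS = new Set(["fp_known_bad"]);

const hardGuardrails = (event: {
  cardCountry?: string;
  ipCountry?: string;
  deviceFingerprint?: string;
}): HardBlockResult => {
  const reasons: string[] = [];

  if (event.cardCountry && SANCTIONED_COUNTRIES.has(event.cardCountry))
    reasons.push("sanctioned_card_country");
  if (event.ipCountry && SANCTIONED_COUNTRIES.has(event.ipCountry))
    reasons.push("sanctioned_ip_country");
  if (event.deviceFingerprint && BLOCKED_FINGERPRINTS.has(event.deviceFingerprint))
    reasons.push("blocked_device");

  // blocked === true means decline immediately, with no model involvement.
  return { blocked: reasons.length > 0, reasons };
};
```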

  • Monitor false positives by segment

    Track approval rate, fraud capture rate, manual review rate, and false declines by merchant category code, country pair, and payment method.
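    A rough sketch of per-segment counters, assuming an in-memory map in place of a real metrics backend; `record` and `approvalRate` are illustrative helpers, and the segment key (e.g. MCC plus country pair) is up to you.

```typescript
type SegmentStats = { approve: number; step_up: number; review: number; decline: number };

const stats = new Map<string, SegmentStats>();

const record = (
  segment: string,
  decision: "approve" | "step_up" | "review" | "decline"
): void => {
  const s = stats.get(segment) ?? { approve: 0, step_up: 0, review: 0, decline: 0 };
  s[decision] += 1;
  stats.set(segment, s);
};

const approvalRate = (segment: string): number => {
  const s = stats.get(segment);
  if (!s) return 0;
  const total = s.approve + s.step_up + s.review + s.decline;
  return total === 0 ? 0 : s.approve / total;
};
```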

  • Treat audit logs as regulated data

    Encrypt at rest and in transit. Limit access by role because fraud traces often contain PAN-adjacent data, device fingerprints, and behavioral signals.

Common Pitfalls

  1. Using only a single fraud threshold

    • A flat threshold creates bad tradeoffs across merchants and geographies.
    • Fix it with per-segment thresholds based on risk appetite and historical loss rates.
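    Per-segment thresholds can be as simple as a lookup keyed by merchant category code and country, falling back to a default. The values below are placeholders for illustration, not recommended policy.

```typescript
type Thresholds = { decline: number; review: number; stepUp: number };

const DEFAULT_THRESHOLDS: Thresholds = { decline: 80, review: 70, stepUp: 40 };

// Placeholder segments and values; derive real ones from historical loss rates.
const SEGMENT_THRESHOLDS: Record<string, Thresholds> = {
  // Higher-risk merchant category: tighten every band.
  "mcc_7995:US": { decline: 65, review: 50, stepUp: 30 },
  // Trusted domestic segment: loosen step-up to protect conversion.
  "mcc_5411:US": { decline: 85, review: 75, stepUp: 55 },
};

const thresholdsFor = (mcc: string, country: string): Thresholds =>
  SEGMENT_THRESHOLDS[`${mcc}:${country}`] ?? DEFAULT_THRESHOLDS;
```

    The router from step 3 would then read its cutoffs from `thresholdsFor(...)` instead of hard-coded numbers.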
  2. Letting the graph call external services without timeouts

    • Payment authorization has tight latency budgets.
    • Wrap enrichment calls with strict timeouts and fallback behavior so you can still make a decision under partial data.
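    A timeout wrapper built on `Promise.race` is one way to enforce this. `withTimeout` below is a generic sketch, not a LangGraph API: it resolves with a caller-supplied fallback instead of hanging the authorization path.

```typescript
// Races the real call against a timer; on timeout, resolves with a safe
// fallback so the graph can still make a decision under partial data.
const withTimeout = async <T>(
  call: Promise<T>,
  ms: number,
  fallback: T
): Promise<T> => {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  return Promise.race([call, timeout]).finally(() => clearTimeout(timer));
};
```

    At the call site you might wrap an enrichment call with conservative default signals, e.g. `await withTimeout(riskService.lookup(event), 150, conservativeDefaults)` (where `riskService` and `conservativeDefaults` are your own, hypothetical here).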
  3. Skipping explainability fields

    • If you only store the final decision, analysts cannot defend declines or tune policies later.
    • Always persist the score breakdown and routing reason list alongside the transaction record.

A good fraud agent does not just block bad transactions. It gives you fast decisions at auth time, clean auditability for compliance teams, and enough signal quality to keep approval rates high without opening the door to abuse.



By Cyprian Aarons, AI Consultant at Topiax.
