How to Build a Fraud Detection Agent Using LangGraph in TypeScript for Banking

By Cyprian Aarons · Updated 2026-04-21
fraud-detection · langgraph · typescript · banking

A fraud detection agent in banking takes a transaction, customer context, and risk signals, then decides whether to approve, step-up authenticate, hold for review, or block. It matters because fraud decisions need to be fast, explainable, auditable, and consistent under regulatory scrutiny.

Architecture

A production fraud agent built with LangGraph in TypeScript needs these components:

  • Input normalization node
    • Converts raw payment events, card-not-present activity, login events, or wire instructions into a single internal schema.
  • Feature enrichment node
    • Pulls customer history, device fingerprint, geo distance, velocity counts, beneficiary reputation, and account tenure from internal services.
  • Risk scoring node
    • Calls your rules engine or ML model and returns a structured risk object with reason codes.
  • Decision node
    • Applies bank policy: approve, challenge with MFA, place on hold, or escalate to analyst review.
  • Audit logging node
    • Writes every intermediate state and final decision to an immutable audit store.
  • Human review handoff
    • Routes borderline cases to a case management system with enough context for investigators.

Implementation

1) Define the graph state and typed outputs

Keep the state explicit. In banking systems, vague any-typed objects turn into audit problems later.

import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

type FraudDecision = "approve" | "step_up" | "hold" | "block";

type FraudState = {
  transactionId: string;
  customerId: string;
  amount: number;
  currency: string;
  merchantCountry?: string;
  ipCountry?: string;
  deviceId?: string;
  riskScore?: number;
  reasons?: string[];
  decision?: FraudDecision;
  auditTrail?: Array<{
    node: string;
    timestamp: string;
    payload: unknown;
  }>;
};

const FraudStateSchema = Annotation.Root({
  transactionId: Annotation<string>(),
  customerId: Annotation<string>(),
  amount: Annotation<number>(),
  currency: Annotation<string>(),
  merchantCountry: Annotation<string | undefined>(),
  ipCountry: Annotation<string | undefined>(),
  deviceId: Annotation<string | undefined>(),
  riskScore: Annotation<number | undefined>(),
  reasons: Annotation<string[] | undefined>(),
  decision: Annotation<FraudDecision | undefined>(),
  // Nodes return the full updated trail, so a last-value reducer is enough;
  // the default keeps the first node from spreading over undefined.
  auditTrail: Annotation<
    Array<{ node: string; timestamp: string; payload: unknown }>
  >({
    reducer: (_current, update) => update,
    default: () => [],
  }),
});

2) Add enrichment and scoring nodes

Use deterministic service calls first. If you add an LLM later for explanation generation, keep it out of the core decision path.

const enrichNode = async (state: typeof FraudStateSchema.State) => {
  const velocityCount = await getVelocityCount(state.customerId);
  const accountAgeDays = await getAccountAgeDays(state.customerId);

  // Evaluate each signal once so the score and the reason codes cannot drift apart.
  const signals: Array<[boolean, number, string]> = [
    [state.amount > 5000, 35, "high_amount"],
    [
      !!state.ipCountry &&
        !!state.merchantCountry &&
        state.ipCountry !== state.merchantCountry,
      25,
      "geo_mismatch",
    ],
    [velocityCount > 5, 30, "high_velocity"],
    [accountAgeDays < 30, 20, "new_account"],
  ];

  return {
    riskScore: signals.reduce((sum, [hit, points]) => sum + (hit ? points : 0), 0),
    reasons: signals.filter(([hit]) => hit).map(([, , code]) => code),
    auditTrail: [
      ...state.auditTrail,
      {
        node: "enrichNode",
        timestamp: new Date().toISOString(),
        payload: { velocityCount, accountAgeDays },
      },
    ],
  };
};

const decideNode = async (state: typeof FraudStateSchema.State) => {
  const score = state.riskScore ?? 0;

  let decision: FraudDecision = "approve";
  if (score >= 80) decision = "block";
  else if (score >= 60) decision = "hold";
  else if (score >= 40) decision = "step_up";

  return {
    decision,
    auditTrail: [
      ...state.auditTrail,
      {
        node: "decideNode",
        timestamp: new Date().toISOString(),
        payload: { score, decision, reasons: state.reasons ?? [] },
      },
    ],
  };
};

3) Wire the workflow with conditional routing

This is where LangGraph earns its keep. You can branch cleanly without turning the codebase into nested if-statements.

const routeByDecision = (state: typeof FraudStateSchema.State) =>
  state.decision ?? "approve";

const graph = new StateGraph(FraudStateSchema)
  .addNode("enrich", enrichNode)
  .addNode("decide", decideNode)
  .addEdge(START, "enrich")
  .addEdge("enrich", "decide")
  .addConditionalEdges("decide", routeByDecision, {
    approve: END,
    step_up: END,
    hold: END,
    block: END,
  });

export const fraudApp = graph.compile();

Corrected full routing pattern

The snippet above shows the structure. In practice you want explicit handlers for each branch so downstream systems receive the right event.

import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

const threshold = {
  stepUp: 40,
  hold: 60,
  block: 80,
};

type FraudDecision = "approve" | "step_up" | "hold" | "block";

const routeByDecision = (state: typeof FraudStateSchema.State) => {
  const score = state.riskScore ?? 0;

  if (score >= threshold.block) return "block";
  if (score >= threshold.hold) return "hold";
  if (score >= threshold.stepUp) return "step_up";
  return "approve";
};

const graph = new StateGraph(FraudStateSchema)
  .addNode("enrich", enrichNode)
  .addNode("decide", decideNode)
  .addNode("audit", async (state) => ({
    auditTrail: [
      ...state.auditTrail,
      {
        node: "audit",
        timestamp: new Date().toISOString(),
        payload: {
          decision: state.decision,
          riskScore: state.riskScore,
        },
      },
    ],
  }))
  .addEdge(START, "enrich")
  .addEdge("enrich", "decide")
  .addConditionalEdges("decide", routeByDecision, {
    approve: "audit",
    step_up: "audit",
    hold: "audit",
    block: "audit",
  })
  .addEdge("audit", END);

export const fraudApp = graph.compile();

The helper services

async function getVelocityCount(customerId: string): Promise<number> {
  // Stub: stand-in for a real velocity service.
  return customerId.endsWith("9") ? 8 : 2;
}

async function getAccountAgeDays(customerId: string): Promise<number> {
  // Stub: stand-in for a real customer profile service.
  return customerId.startsWith("new") ? 12 : 420;
}

Run it

const result = await fraudApp.invoke({
  transactionId: "txn_123",
  customerId: "new_cust_99",
  amount: 7500,
  currency: "USD",
  merchantCountry: "US",
  ipCountry: "NG",
  deviceId: "device_abc",
});

console.log(result.decision);   // "block" — high amount, geo mismatch, velocity, and new account sum to 110
console.log(result.auditTrail);

Production Considerations

  • Keep PII out of prompts and logs

    If you use an LLM for analyst summaries or explanations, redact account numbers, full PANs, national IDs, and addresses before invocation. Store only masked values in application logs.
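As an illustration, a minimal redaction pass might look like the sketch below. The regex patterns and masking rules are assumptions for the example; a real deployment would use the bank's approved tokenization or DLP service rather than hand-rolled patterns.

```typescript
// Hypothetical redaction helpers — pattern coverage here is illustrative,
// not sufficient for production DLP.
const PAN_RE = /\b\d{13,19}\b/g; // card numbers (PANs)
const IBAN_RE = /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/g; // IBANs

// Keep only the last four digits of a PAN.
function maskPan(pan: string): string {
  return pan.length <= 4 ? pan : "*".repeat(pan.length - 4) + pan.slice(-4);
}

// Run over any text before it reaches an LLM prompt or an application log.
function redactForPrompt(text: string): string {
  return text
    .replace(PAN_RE, (match) => maskPan(match))
    .replace(IBAN_RE, "[IBAN_REDACTED]");
}
```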

  • Design for auditability

    Every branch should emit reason codes and timestamps. Regulators will ask why a transaction was blocked; “the model said so” is not acceptable.
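One way to make the trail tamper-evident is to hash-chain entries, so an auditor can detect any altered or dropped record. This is a sketch under that assumption; the field names follow the auditTrail shape used earlier, with two extra hash fields added for the example.

```typescript
import { createHash } from "node:crypto";

// Each entry carries the hash of the previous one; recomputing the chain
// reveals any record that was altered or removed after the fact.
type ChainedAuditEntry = {
  node: string;
  timestamp: string;
  payload: unknown;
  prevHash: string;
  hash: string;
};

function appendEntry(
  trail: ChainedAuditEntry[],
  node: string,
  payload: unknown
): ChainedAuditEntry[] {
  const prevHash = trail.length ? trail[trail.length - 1].hash : "genesis";
  const timestamp = new Date().toISOString();
  const hash = createHash("sha256")
    .update(JSON.stringify({ node, timestamp, payload, prevHash }))
    .digest("hex");
  return [...trail, { node, timestamp, payload, prevHash, hash }];
}
```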

  • Respect data residency

    Route customer data through region-bound infrastructure. If your bank requires EU-only processing or local jurisdiction storage, do not send enriched features to external endpoints outside that boundary.

  • Add human-in-the-loop thresholds

    Auto-block only at high confidence. For mid-risk cases, create a case record and require analyst approval before final action.
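A sketch of that gate is below; the confidence thresholds and the result shape are illustrative assumptions, not real bank policy.

```typescript
// Only the extremes act automatically; everything in between becomes a case
// that an analyst must approve before final action.
type GateResult =
  | { kind: "auto"; decision: "approve" | "block" }
  | { kind: "case"; decision: "hold"; requiresAnalyst: true };

function gateDecision(score: number): GateResult {
  if (score >= 90) return { kind: "auto", decision: "block" }; // high confidence only
  if (score < 40) return { kind: "auto", decision: "approve" };
  return { kind: "case", decision: "hold", requiresAnalyst: true };
}
```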

Common Pitfalls

  1. Using the LLM as the primary scorer

    Don’t let a generative model decide fraud outcomes directly. Use deterministic rules or a validated risk model for the actual decision path, then optionally use an LLM to summarize why the case was escalated.
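One way to keep the generative model off the decision path is to hand it only the finished decision and reason codes. In this sketch, summarize is a stand-in for whatever LLM client you inject; a failed or strange LLM response falls back to a deterministic template and can never change the outcome.

```typescript
// The decision is already final before this runs; the LLM only produces an
// analyst-facing explanation, and any failure degrades to a fixed template.
async function explainForAnalyst(
  decision: string,
  reasons: string[],
  summarize: (prompt: string) => Promise<string> // injected LLM client (assumed)
): Promise<string> {
  try {
    return await summarize(
      `Summarize for a fraud analyst: decision=${decision}; reasons=${reasons.join(", ")}`
    );
  } catch {
    return `Decision ${decision} based on: ${reasons.join(", ")}`; // deterministic fallback
  }
}
```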

  2. Skipping immutable audit trails

    If you only store the final outcome, you lose traceability on intermediate decisions. Persist node-level inputs and outputs so compliance can reconstruct the full path later.

  3. Mixing policy logic with transport code

    Keep policy thresholds inside dedicated functions or config objects. If routing logic is buried inside API handlers or queue consumers, changing a rule turns into a deployment problem instead of a policy update.

  4. Ignoring latency budgets

    Fraud checks sit on critical payment paths. Put strict timeouts on enrichment calls and fail closed or step-up when upstream services are slow rather than letting checkout hang indefinitely.
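A sketch of a fail-safe timeout wrapper for those enrichment calls; the budget and fallback values are illustrative. Note the fallback is chosen to be conservative, so a slow velocity service pushes the transaction toward step-up rather than silent approval.

```typescript
// Race the real call against a timer; if the budget is missed, resolve with
// a conservative fallback instead of letting the payment path hang.
function withTimeout<T>(promise: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage: a stalled velocity lookup degrades to a high count after 150 ms,
// which the scoring node treats as risky.
// const velocityCount = await withTimeout(getVelocityCount(id), 150, 99);
```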


By Cyprian Aarons, AI Consultant at Topiax.