How to Build a Transaction Monitoring Agent Using LangGraph in TypeScript for Investment Banking

By Cyprian Aarons · Updated 2026-04-21
transaction-monitoring · langgraph · typescript · investment-banking

A transaction monitoring agent ingests trade and payment events, scores them against surveillance rules and behavioral patterns, then routes suspicious activity to the right escalation path. In investment banking, that matters because you need to catch market abuse, AML issues, sanctions exposure, and control breaches fast enough to reduce regulatory risk without drowning compliance teams in false positives.

Architecture

  • Event intake layer

    • Pulls transactions from Kafka, Kinesis, a database CDC stream, or a batch file feed.
    • Normalizes trade, cash movement, counterparty, desk, and timestamp fields into one schema.
  • Rule evaluation node

    • Applies deterministic checks first: threshold breaches, velocity rules, restricted list hits, unusual counterparties.
    • Keeps the first pass explainable for audit and model governance.
  • Risk scoring node

    • Adds contextual scoring from historical behavior: client profile, desk patterns, geography, instrument type.
    • Produces a severity score and rationale.
  • Case decision node

    • Decides whether to clear, enrich, escalate, or hold for review.
    • Uses LangGraph branching so the path is explicit and traceable.
  • Case management sink

    • Writes alerts into a case management system or queue for compliance analysts.
    • Persists the full decision trail for audit.
  • Audit and observability layer

    • Captures every state transition, rule hit, score input, and final action.
    • Required for model risk management, internal audit, and regulator review.
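The intake layer above can be sketched as a pure mapping function from a raw feed payload into the shared schema. This is a minimal sketch; the raw field names (`acct`, `cpty_name`, `notional`, and so on) are illustrative, and the real shape will depend on your trade capture or payments system.

```typescript
// Normalized schema shared by every downstream node.
type Transaction = {
  id: string;
  accountId: string;
  counterparty: string;
  amount: number;
  currency: string;
  country: string;
  desk: string;
  timestamp: string;
};

// Hypothetical raw event shape from an upstream feed; real field
// names depend on your trade capture or payments system.
type RawTradeEvent = {
  trade_id: string;
  acct: string;
  cpty_name: string;
  notional: string; // often a string in wire formats
  ccy: string;
  cpty_country: string;
  desk_code: string;
  booked_at: number; // epoch millis
};

function normalize(raw: RawTradeEvent): Transaction {
  return {
    id: raw.trade_id,
    accountId: raw.acct,
    counterparty: raw.cpty_name,
    amount: Number(raw.notional),
    currency: raw.ccy,
    country: raw.cpty_country,
    desk: raw.desk_code,
    timestamp: new Date(raw.booked_at).toISOString(),
  };
}
```

Keeping normalization as a pure function makes it trivial to unit-test against fixtures from each source system before anything reaches the graph.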

Implementation

1. Define the graph state and transaction schema

Use a typed state object so every node reads and writes predictable fields. For banking workflows, keep both the raw event and the derived decision trail.

import { Annotation } from "@langchain/langgraph";

type Transaction = {
  id: string;
  accountId: string;
  counterparty: string;
  amount: number;
  currency: string;
  country: string;
  desk: string;
  timestamp: string;
};

type Alert = {
  severity: "low" | "medium" | "high";
  reasons: string[];
};

const MonitoringState = Annotation.Root({
  tx: Annotation<Transaction>(),
  alert: Annotation<Alert>(),
  action: Annotation<string>(),
});

2. Add deterministic surveillance checks

Start with rules before any LLM or heuristic layer. This keeps the system defensible for compliance teams and easier to validate during model governance reviews.

import { StateGraph, START, END } from "@langchain/langgraph";

const ruleCheck = async (state: typeof MonitoringState.State) => {
  const reasons: string[] = [];
  let severity: "low" | "medium" | "high" = "low";

  if (state.tx.amount >= 1000000) {
    reasons.push("Amount exceeds $1M threshold");
    severity = "medium";
  }

  if (["IR", "KP", "RU"].includes(state.tx.country)) {
    reasons.push(`Counterparty country flagged: ${state.tx.country}`);
    severity = "high";
  }

  if (state.tx.desk === "equities" && state.tx.amount >= 5000000) {
    reasons.push("Large equities transaction");
    severity = severity === "high" ? "high" : "medium";
  }

  return {
    alert: { severity, reasons },
    action: reasons.length > 0 ? "enrich" : "clear",
  };
};
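Hardcoded country arrays like the one above drift quickly. One option is to load restricted lists from versioned, approved configuration so list changes go through the same governance workflow as code. A minimal sketch (the list contents and version tag are illustrative):

```typescript
// Illustrative versioned restricted list; in practice this would be
// loaded from approved, version-controlled configuration.
const RESTRICTED_COUNTRIES = {
  version: "2026-04-01",
  codes: new Set(["IR", "KP", "RU"]),
};

// Returns a rule-hit reason string, or null if the country is clear.
function checkRestrictedCountry(country: string): string | null {
  return RESTRICTED_COUNTRIES.codes.has(country)
    ? `Counterparty country flagged: ${country} (list ${RESTRICTED_COUNTRIES.version})`
    : null;
}
```

Embedding the list version in the reason string means every alert records which list it was screened against, which helps during audits.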

3. Enrich only when needed and branch explicitly

LangGraph works well here because you can route based on state instead of burying logic inside one large function. That makes the workflow easier to test and easier to explain to auditors.

const enrichTransaction = async (state: typeof MonitoringState.State) => {
  // Replace this with real enrichment from internal risk systems.
  const historicalRiskScore =
    state.tx.counterparty.toLowerCase().includes("shell") ? 85 : 20;

  const elevated =
    historicalRiskScore > 70 || state.alert.severity === "high";

  return {
    alert: {
      ...state.alert,
      reasons: [
        ...state.alert.reasons,
        `Historical risk score: ${historicalRiskScore}`,
      ],
      severity: elevated ? "high" : state.alert.severity,
    },
    action: elevated ? "escalate" : "clear",
  };
};

const route = (state: typeof MonitoringState.State) => {
  return state.action === "enrich" ? "enrichTransaction" : END;
};

4. Compile the graph and run it against a transaction

This is the actual LangGraph pattern you want in production services. The graph stays small enough to test with fixtures while still supporting branching and escalation paths.

const graph = new StateGraph(MonitoringState)
  .addNode("ruleCheck", ruleCheck)
  .addNode("enrichTransaction", enrichTransaction)
  .addEdge(START, "ruleCheck")
  .addConditionalEdges("ruleCheck", route)
  .addEdge("enrichTransaction", END)
  .compile();

async function monitor(tx: Transaction) {
  const result = await graph.invoke({
    tx,
    alert: { severity: "low", reasons: [] },
    action: "",
  });

  return result;
}

const output = await monitor({
  id: "tx_001",
  accountId: "acct_789",
  counterparty: "Example Holdings Ltd",
  amount: 2500000,
  currency: "USD",
  country: "GB",
  desk: "equities",
  timestamp: new Date().toISOString(),
});

console.log(output);
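In this example, escalations terminate at END; in production, the case management sink from the architecture section would sit between enrichment and END. A minimal sketch of such a node, assuming a hypothetical `createCase` call standing in for your real case management API:

```typescript
type Alert = { severity: "low" | "medium" | "high"; reasons: string[] };

type CaseRecord = {
  txId: string;
  severity: Alert["severity"];
  reasons: string[];
  openedAt: string;
};

// Hypothetical case management client; replace with a real call
// into your case system or analyst review queue.
async function createCase(record: CaseRecord): Promise<CaseRecord> {
  return record;
}

// Sink node: opens a case and records the outcome in graph state.
async function escalateCase(state: { tx: { id: string }; alert: Alert }) {
  const record = await createCase({
    txId: state.tx.id,
    severity: state.alert.severity,
    reasons: state.alert.reasons,
    openedAt: new Date().toISOString(),
  });
  return { action: `case_opened:${record.txId}` };
}
```

Wiring it in is one more `addNode("escalateCase", escalateCase)` plus a conditional edge out of `enrichTransaction` keyed on `action === "escalate"`.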

Production Considerations

  • Keep data residency explicit

    • If your booking data must stay in-region, deploy the graph runtime in the same jurisdiction as the source systems.
    • Do not send raw transaction payloads to external APIs unless legal/compliance has approved it.
  • Log every decision path

    • Persist input transaction ID, rule hits, enrichment inputs, final action, and timestamps.
    • This gives you audit evidence for internal review and regulator requests.
  • Separate rules from models

    • Deterministic rules should live in versioned code with approval workflows.
    • If you add an LLM for narrative summaries or analyst assistance, keep it off the primary decision path.
  • Build escalation guardrails

    • Route high-severity cases to human review automatically.
    • Add hard stops for sanctions-related flags so nothing is auto-cleared downstream.
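The logging guidance above can be made concrete as one structured record per decision, with the graph version pinned for replay. A minimal sketch; the field names and version string are suggestions, not a standard:

```typescript
type AuditRecord = {
  txId: string;
  graphVersion: string;
  ruleHits: string[];
  enrichmentInputs: Record<string, unknown>;
  finalAction: string;
  decidedAt: string;
};

// Assemble the audit evidence for one decision. graphVersion should
// be pinned to the deployed build of the monitoring graph.
function buildAuditRecord(
  txId: string,
  ruleHits: string[],
  enrichmentInputs: Record<string, unknown>,
  finalAction: string,
  graphVersion = "1.0.0",
): AuditRecord {
  return {
    txId,
    graphVersion,
    ruleHits,
    enrichmentInputs,
    finalAction,
    decidedAt: new Date().toISOString(),
  };
}
```

Persist one of these per invocation, in the same transaction as the case write where possible, so the decision trail can never diverge from the alert itself.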

Common Pitfalls

  • Putting all logic in one node

    This makes testing painful and hides decision boundaries. Split intake, rules, enrichment, and escalation into separate nodes so each one has a clear contract.

  • Using an LLM as the first filter

    This is a bad fit for regulated monitoring because it weakens explainability. Use deterministic checks first; reserve the model for summarization or triage support.

  • Ignoring replayability

    If you cannot replay a transaction through the same graph version later, your audit story breaks. Version your ruleset, persist graph inputs/outputs, and store enough context to reproduce decisions exactly.
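One way to keep decisions reproducible is to treat each ruleset version as immutable and select it by the version recorded with the original decision. A minimal sketch, with illustrative version tags and rules:

```typescript
type RuleFn = (amount: number) => string[];

// Each ruleset version is frozen once released; changed behavior
// means a new version entry, never an in-place edit.
const RULESETS: Record<string, RuleFn> = {
  v1: (amount) => (amount >= 1_000_000 ? ["amount>=1M"] : []),
  v2: (amount) => (amount >= 500_000 ? ["amount>=500k"] : []),
};

// Replay a stored decision with the exact ruleset version it used.
function replay(storedVersion: string, amount: number): string[] {
  const rules = RULESETS[storedVersion];
  if (!rules) throw new Error(`Unknown ruleset version: ${storedVersion}`);
  return rules(amount);
}
```

A transaction cleared under v1 replays cleanly under v1 even after v2 tightens the threshold, which is exactly the property an auditor will ask you to demonstrate.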


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

