How to Build a Transaction Monitoring Agent Using LangGraph in Python for Fintech

By Cyprian Aarons · Updated 2026-04-21
Tags: transaction-monitoring, langgraph, python, fintech

A transaction monitoring agent watches payment events, scores them against policy and behavioral rules, and decides whether to approve, hold for review, or escalate. In fintech that matters because you need fast decisions without losing auditability, compliance traceability, or control over false positives.

Architecture

  • Event intake layer

    • Receives transactions from Kafka, HTTP webhooks, or a batch queue.
    • Normalizes fields like amount, currency, merchant_country, customer_id, and device_id (a typed sketch of this contract follows the list).
  • Risk feature builder

    • Enriches the transaction with customer history, velocity metrics, geo signals, and account metadata.
    • Keeps enrichment deterministic so the same input produces the same decision trail.
  • Policy and rule node

    • Applies hard rules first: sanctions match, threshold breaches, high-risk corridor checks.
    • This is where compliance logic stays explicit and auditable.
  • LLM analysis node

    • Summarizes why a transaction looks suspicious and suggests next actions.
    • Use it for reasoning support, not as the final authority.
  • Decision router

    • Routes to approve, hold_for_review, or escalate.
    • Produces structured outputs that downstream systems can consume.
  • Audit sink

    • Persists input features, model output, rule hits, and final decision.
    • Required for investigations, model governance, and regulator review.
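
As a concrete anchor for the intake contract, here's a minimal sketch of a normalized event type. NormalizedTx and its exact field types are illustrative, not a fixed schema:

from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class NormalizedTx:
    """Illustrative intake contract; fields follow the list above."""
    transaction_id: str
    amount: float            # normalized to major currency units
    currency: str            # ISO 4217, e.g. "USD"
    merchant_country: str    # ISO 3166-1 alpha-2
    customer_country: str
    customer_id: str
    timestamp: datetime      # timezone-aware UTC
    device_id: str | None = None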

Implementation

1) Define the state and the graph nodes

Use LangGraph’s StateGraph with a typed state object. Keep the state small and explicit; don’t pass raw blobs around unless you want debugging pain later.

from typing import TypedDict, Literal
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI

Decision = Literal["approve", "hold_for_review", "escalate"]

class TxState(TypedDict):
    transaction: dict
    features: dict
    risk_score: float
    rule_hits: list[str]
    llm_reasoning: str
    decision: Decision

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Deterministic feature build: the same transaction always yields the same features.
def build_features(state: TxState) -> TxState:
    tx = state["transaction"]
    features = {
        "is_high_value": tx["amount"] >= 10000,
        "cross_border": tx["merchant_country"] != tx["customer_country"],
        "hour_of_day": tx["timestamp"].hour,
    }
    return {**state, "features": features}

def apply_rules(state: TxState) -> TxState:
    # Hard compliance rules run first; in production, sanctions screening
    # would append "sanctions_match" to this list.
    hits = []
    f = state["features"]
    if f["is_high_value"]:
        hits.append("high_value_threshold")
    if f["cross_border"]:
        hits.append("cross_border_payment")
    # Toy additive score capped at 1.0; swap in your real scoring model.
    risk_score = min(1.0, len(hits) * 0.35)
    return {**state, "rule_hits": hits, "risk_score": risk_score}

def llm_review(state: TxState) -> TxState:
    # The LLM explains; it does not decide. Routing stays deterministic (step 2).
    tx = state["transaction"]
    prompt = f"""
You are a transaction monitoring analyst.
Transaction: {tx}
Rule hits: {state['rule_hits']}
Risk score: {state['risk_score']}

Return a short explanation for a compliance analyst.
"""
    response = llm.invoke(prompt)
    return {**state, "llm_reasoning": response.content}

2) Add routing logic with add_conditional_edges

This is the core pattern. Rules decide first; the LLM only adds context. That keeps the system defensible when compliance asks why a payment was blocked.

def route_decision(state: TxState) -> Decision:
    # Deterministic routing: hard rule hits win, then the score threshold.
    if "sanctions_match" in state["rule_hits"]:
        return "escalate"
    if state["risk_score"] >= 0.7:
        return "hold_for_review"
    return "approve"

def set_approve(state: TxState) -> TxState:
    return {**state, "decision": "approve"}

def set_review(state: TxState) -> TxState:
    return {**state, "decision": "hold_for_review"}

def set_escalate(state: TxState) -> TxState:
    return {**state, "decision": "escalate"}

graph = StateGraph(TxState)
graph.add_node("build_features", build_features)
graph.add_node("apply_rules", apply_rules)
graph.add_node("llm_review", llm_review)
graph.add_node("set_approve", set_approve)
graph.add_node("set_review", set_review)
graph.add_node("set_escalate", set_escalate)

graph.set_entry_point("build_features")
graph.add_edge("build_features", "apply_rules")
graph.add_edge("apply_rules", "llm_review")

graph.add_conditional_edges(
    "llm_review",
    route_decision,
    {
        "approve": "set_approve",
        "hold_for_review": "set_review",
        "escalate": "set_escalate",
    },
)

graph.add_edge("set_approve", END)
graph.add_edge("set_review", END)
graph.add_edge("set_escalate", END)

app = graph.compile()

3) Run the agent on a real transaction payload

Keep your input contract strict. In production I'd validate this at the API boundary before it ever reaches LangGraph; a validation sketch follows the run example below.

from datetime import datetime, timezone

sample_tx = {
    "transaction_id": "tx_123",
    "amount": 12500,
    "currency": "USD",
    "merchant_country": "NG",
    "customer_country": "US",
    "customer_id": "cust_42",
    "timestamp": datetime.utcnow(),
}

result = app.invoke({
    "transaction": sample_tx,
    "features": {},
    "risk_score": 0.0,
    "rule_hits": [],
    # these will be filled by nodes
})
print(result["decision"])
print(result["rule_hits"])
print(result["llm_reasoning"])
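
The boundary validation mentioned above could look like this minimal sketch, assuming Pydantic v2 is available; TxIn is an illustrative model name, not part of the graph:

from datetime import datetime
from pydantic import BaseModel, Field

class TxIn(BaseModel):
    transaction_id: str
    amount: float = Field(gt=0)
    currency: str = Field(min_length=3, max_length=3)
    merchant_country: str = Field(min_length=2, max_length=2)
    customer_country: str = Field(min_length=2, max_length=2)
    customer_id: str
    timestamp: datetime

# Reject malformed payloads before they ever reach the graph.
validated = TxIn.model_validate(sample_tx)
result = app.invoke({
    "transaction": validated.model_dump(),
    "features": {},
    "risk_score": 0.0,
    "rule_hits": [],
})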

4) Make the output audit-friendly

For fintech workflows, persist every decision with enough context to reconstruct it later. Store the original payload hash, rule hits, model version, prompt version, and final decision in an immutable log or case management system.

A practical schema looks like this:

Field          | Why it matters
---------------|----------------------------------
transaction_id | Correlates alerts across systems
payload_hash   | Proves input integrity
rule_hits      | Explains deterministic triggers
risk_score     | Supports threshold tuning
llm_reasoning  | Analyst context
decision       | Final action taken
model_version  | Governance and rollback
prompt_version | Reproducibility
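
Here's a minimal sketch of writing that record. The hashing approach and version pins are illustrative choices, and write_audit_record stands in for whatever append-only store you use:

import hashlib
import json

MODEL_VERSION = "gpt-4o-mini"      # pin the exact model snapshot you deploy
PROMPT_VERSION = "tx-review-v1"    # illustrative prompt version tag

def payload_hash(tx: dict) -> str:
    # Canonical JSON so the same payload always produces the same hash.
    canonical = json.dumps(tx, sort_keys=True, default=str)
    return hashlib.sha256(canonical.encode()).hexdigest()

audit_record = {
    "transaction_id": sample_tx["transaction_id"],
    "payload_hash": payload_hash(sample_tx),
    "rule_hits": result["rule_hits"],
    "risk_score": result["risk_score"],
    "llm_reasoning": result["llm_reasoning"],
    "decision": result["decision"],
    "model_version": MODEL_VERSION,
    "prompt_version": PROMPT_VERSION,
}
# write_audit_record(audit_record)  # your immutable audit store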

Production Considerations

  • Deploy with strict data residency controls

    • Keep customer PII in-region.
    • If your LLM endpoint crosses borders, tokenize or redact sensitive fields before invocation (see the redaction sketch after this list).
  • Log every graph run

    • Persist inputs, intermediate state transitions, and final outputs.
    • Use LangGraph’s execution trace plus your own immutable audit store for regulator-ready evidence.
  • Put hard guardrails before any model call

    • Sanctions screening, velocity checks, amount thresholds, and known fraud patterns should be deterministic.
    • The LLM should explain and summarize; it should not override policy controls.
  • Monitor false positives and analyst overrides

    • Track precision by merchant segment and corridor.
    • If analysts keep clearing one class of alerts, tune rules before touching prompts.
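
The redaction mentioned in the data residency point could be as simple as the sketch below; REDACT_FIELDS is an illustrative allowlist you'd tune to your own governance policy:

REDACT_FIELDS = {"customer_id", "device_id", "account_number"}  # illustrative

def redact(tx: dict) -> dict:
    # Replace sensitive identifiers with placeholders before any LLM call.
    return {k: ("[REDACTED]" if k in REDACT_FIELDS else v) for k, v in tx.items()}

In llm_review, you would build the prompt from redact(tx) rather than the raw transaction. For the logging point, app.stream(inputs, stream_mode="values") yields each intermediate state, which you can persist alongside the final audit record.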

Common Pitfalls

  • Letting the LLM make the final compliance decision

    • Don’t do this.
    • Use rule-based routing for blocking/escalation paths and reserve the LLM for explanation or triage support.
  • Passing unstructured state through the graph

    • This makes audits painful and bugs hard to isolate.
    • Define a typed state schema and keep node inputs/outputs explicit.
  • Ignoring replayability

    • If you can’t reproduce a decision from stored inputs plus versioned logic, your audit trail is weak.
    • Version prompts, ruleset code, model name, and feature extraction logic together (a manifest sketch follows this list).
  • Sending raw PII to external models

    • Redact account numbers outright, redact names where possible, and drop device identifiers when they aren't needed.
    • In regulated environments this is a data governance issue first and an AI issue second.
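
One way to keep those versions pinned together is a single manifest recorded with every decision; the names and version strings here are illustrative:

DECISION_MANIFEST = {
    "ruleset_version": "2026.04.1",        # bump when apply_rules changes
    "prompt_version": "tx-review-v1",      # bump when the prompt text changes
    "model_version": "gpt-4o-mini",        # pin the deployed model snapshot
    "feature_extractor_version": "1.3.0",  # bump when build_features changes
}
# Store the manifest with each audit record so any decision can be replayed.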
