How to Build a Transaction Monitoring Agent for Payments Using LangGraph in Python
A transaction monitoring agent watches payment events, scores them for risk, and decides whether to approve, hold, or escalate them for review. For payments teams, this matters because you need fast decisions without losing control over fraud, AML, sanctions, and audit requirements.
Architecture
- **Transaction ingest layer**
  - Receives payment events from Kafka, HTTP webhooks, or batch jobs.
  - Normalizes fields like `amount`, `currency`, `country`, `merchant_id`, and `customer_id`.
- **Risk enrichment node**
  - Pulls customer profile, device history, velocity counters, sanctions hits, and merchant risk.
  - Produces a single structured context for downstream decisions.
- **Policy evaluation node**
  - Applies deterministic rules first: thresholds, country blocks, amount limits, repeated failures.
  - Keeps the system explainable and compliant.
- **LLM reasoning node**
  - Handles ambiguous cases where rules are not enough.
  - Generates a short explanation and recommended action, not a free-form decision.
- **Decision router**
  - Routes to `approve`, `hold`, or `escalate`.
  - Ensures high-risk cases go to human review or a case management system.
- **Audit/logging sink**
  - Stores every input, intermediate state, and final decision.
  - Required for investigations, model governance, and regulator requests.
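Before any of the downstream nodes run, the ingest layer has to map whatever shape your PSP or Kafka topic emits into the canonical fields listed above. A minimal sketch of that normalization step — the raw key names (`amt`, `ccy`, `merchant`, and so on) are assumptions for illustration, not a real payload schema:

```python
from typing import Any

def normalize_event(raw: dict[str, Any]) -> dict[str, Any]:
    """Map a raw payment event into the canonical fields the graph expects.

    The raw key names here ("amt", "ccy", ...) are illustrative; real
    payloads depend on your PSP or message schema.
    """
    return {
        "transaction_id": str(raw["id"]),
        "amount": float(raw.get("amt", 0.0)),
        "currency": str(raw.get("ccy", "USD")).upper(),
        "country": str(raw.get("country", "")).upper(),
        "merchant_id": str(raw.get("merchant", "")),
        "customer_id": str(raw.get("customer", "")),
    }
```

Doing this in one place means every later node can trust field names and types instead of re-validating the payload.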
Implementation
1) Define the state and decision schema
Use a typed state so every node in the graph knows what it can read and write. For payments, keep the state compact and auditable.
```python
from typing import TypedDict, Literal

from langgraph.graph import StateGraph, START, END
from langchain_core.runnables import RunnableLambda

Decision = Literal["approve", "hold", "escalate"]

class TxnState(TypedDict):
    transaction_id: str
    amount: float
    currency: str
    country: str
    merchant_id: str
    customer_id: str
    risk_score: int
    policy_flag: bool
    llm_reasoning: str
    decision: Decision
```
2) Add deterministic payment controls first
This is where you encode obvious compliance and fraud checks. Do not let the LLM override hard rules for sanctions or blocked corridors.
```python
def enrich_transaction(state: TxnState) -> dict:
    # Replace with real lookups from your risk service / feature store.
    high_risk_countries = {"NG", "IR", "KP"}
    policy_flag = (
        state["amount"] > 5000
        or state["country"] in high_risk_countries
        or state["merchant_id"].startswith("mcc_7995")
    )
    if policy_flag:
        risk_score = 90
    elif state["amount"] > 2000:
        # Borderline band (illustrative threshold): high enough to warrant
        # review, but not a hard policy block.
        risk_score = 80
    else:
        risk_score = 25
    return {
        "risk_score": risk_score,
        "policy_flag": policy_flag,
    }

def apply_policy(state: TxnState) -> dict:
    if state["policy_flag"]:
        return {"decision": "hold"}
    if state["risk_score"] >= 80:
        return {"decision": "escalate"}
    return {"decision": "approve"}
```
3) Add an LLM node only for borderline cases
LangGraph works well when the graph routes only uncertain cases to the model. Use RunnableLambda for a local callable today; swap it with a chat model later through LangChain’s standard interface.
```python
def llm_review(state: TxnState) -> dict:
    # In production this would call a chat model with strict prompting.
    reasoning = (
        f"Transaction {state['transaction_id']} is borderline. "
        f"Amount={state['amount']}, country={state['country']}, "
        f"merchant={state['merchant_id']}. Recommend human review."
    )
    return {
        "llm_reasoning": reasoning,
        "decision": "escalate",
    }
```
4) Build the LangGraph workflow with conditional routing
This is the actual pattern you want in production: deterministic checks first, then conditional routing into LLM review only when needed.
```python
def route_after_policy(state: TxnState) -> str:
    if state["decision"] == "hold":
        return END
    if state["decision"] == "escalate":
        return "llm_review"
    return END

graph = StateGraph(TxnState)
graph.add_node("enrich_transaction", RunnableLambda(enrich_transaction))
graph.add_node("apply_policy", RunnableLambda(apply_policy))
graph.add_node("llm_review", RunnableLambda(llm_review))

graph.add_edge(START, "enrich_transaction")
graph.add_edge("enrich_transaction", "apply_policy")
graph.add_conditional_edges(
    "apply_policy",
    route_after_policy,
    {
        "llm_review": "llm_review",
        END: END,
    },
)
graph.add_edge("llm_review", END)

app = graph.compile()

result = app.invoke(
    {
        "transaction_id": "txn_123",
        "amount": 7200.0,
        "currency": "USD",
        "country": "GB",
        "merchant_id": "mcc_5411_shop_88",
        "customer_id": "cus_456",
        "risk_score": 0,
        "policy_flag": False,
        "llm_reasoning": "",
        "decision": "approve",
    }
)
# amount > 5000 trips the policy flag, so this ends with decision == "hold"
# via the deterministic fast path, without ever touching the LLM node.
print(result)
```
Production Considerations
- **Keep hard compliance rules outside the model**
  - Sanctions screening, blocked countries, KYC/AML thresholds, and data residency constraints should be deterministic.
  - The agent can explain or escalate; it should not invent policy.
- **Persist full audit traces**
  - Store input payloads, node outputs, routing decisions, prompt versions, and model responses.
  - Regulators will ask why a payment was held or approved.
- **Respect data residency**
  - If transaction data must stay in-region, keep both feature retrieval and inference inside that region.
  - Avoid shipping PII to external APIs unless your legal/compliance team has signed off.
- **Monitor decision drift**
  - Track hold rate, escalation rate, false positives from ops review, and time-to-decision.
  - A spike in escalations usually means either bad upstream features or overly aggressive rules.
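The drift metrics above are cheap to compute from a rolling window of decisions. A minimal sketch, where the window size and the 5% escalation alert threshold are illustrative assumptions, not recommended values:

```python
from collections import Counter

def decision_mix(decisions: list) -> dict:
    """Fraction of approve/hold/escalate over a window of recent decisions."""
    counts = Counter(decisions)
    total = len(decisions) or 1
    return {d: counts.get(d, 0) / total for d in ("approve", "hold", "escalate")}

def llm_routing_alert(mix: dict, max_escalate_rate: float = 0.05) -> bool:
    # If more than ~5% of traffic reaches LLM review, the policy band is
    # probably too wide, or upstream features have degraded.
    return mix["escalate"] > max_escalate_rate
```

Plot these rates per hour; a sudden jump in the escalation share is usually the first visible symptom of a broken feature pipeline.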
Common Pitfalls
- **Letting the LLM make final compliance decisions**
  - Bad pattern: "model says approve" even when sanctions logic should block the payment.
  - Fix: put non-negotiable controls in deterministic nodes before any model call.
- **Using untyped or oversized graph state**
  - Bad pattern: stuffing raw logs and customer history into one giant dict.
  - Fix: keep state small with only fields needed for routing and audit references.
- **Skipping replayability**
  - Bad pattern: no versioning of prompts, policies, or feature snapshots.
  - Fix: version every rule set and model prompt so you can replay exactly why a transaction was held.
- **Routing too many transactions into LLM review**
  - Bad pattern: using the model on every payment event.
  - Fix: reserve LLM calls for borderline cases; most payments should be handled by policy fast paths.
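Replayability boils down to writing one versioned, append-only record per decision. A minimal sketch of such a record — the field set, version strings, and the stdout-style return value are illustrative; in production this would land in a WORM store or audit database:

```python
import datetime
import hashlib
import json

def write_audit_record(state: dict, rule_set_version: str, prompt_version: str) -> str:
    """Serialize one decision into an append-only audit line (illustrative)."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "transaction_id": state.get("transaction_id"),
        "decision": state.get("decision"),
        "risk_score": state.get("risk_score"),
        "policy_flag": state.get("policy_flag"),
        "llm_reasoning": state.get("llm_reasoning"),
        "rule_set_version": rule_set_version,
        "prompt_version": prompt_version,
    }
    line = json.dumps(record, sort_keys=True)
    # A content hash lets you detect tampering and deduplicate replays.
    record_id = hashlib.sha256(line.encode()).hexdigest()[:16]
    return record_id + " " + line
```

Because the rule set and prompt versions travel with every record, you can later re-run the exact policy and prompt that produced a given hold.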
Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.