How to Build a Fraud Detection Agent for Payments Using LangGraph in Python
A fraud detection agent for payments takes a transaction, enriches it with risk signals, scores it, and decides whether to approve, step up, or block the payment. The point is not just classification; it’s making a decision fast enough for checkout flows while keeping a full audit trail for compliance, dispute handling, and model governance.
Architecture
- Transaction intake
  - Accepts payment events from API, webhook, or stream.
  - Normalizes fields like amount, currency, merchant, card token, IP, device ID, and customer ID.
- Risk enrichment
  - Pulls velocity checks, account age, geolocation mismatch, BIN country, device reputation, and merchant history.
  - Keeps this logic outside the model so you can swap providers without rewriting the graph.
- Policy engine
  - Applies hard rules before or after scoring.
  - Examples: block sanctioned geographies, require step-up for high-risk MCCs, reject impossible travel patterns.
- LLM-based reasoning node
  - Summarizes the case and explains why a transaction is suspicious.
  - Useful for analyst review and case notes, not as the only decision-maker.
- Decision router
  - Routes to approve, review, or block.
  - Must be deterministic and easy to audit.
- Audit logger
  - Persists inputs, intermediate signals, final decision, and explanation.
  - Required for payments compliance and incident review.
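To make the intake stage concrete, here is a minimal normalization sketch. The raw field names (`amt`, `cur`, `ip_geo`, and so on) are hypothetical; map them from whatever your gateway or webhook payload actually sends.

```python
# Sketch: normalize a raw gateway event into the canonical fields the rest of
# the pipeline expects. Raw keys here are hypothetical placeholders.

def normalize_event(raw: dict) -> dict:
    return {
        "transaction_id": raw["id"],
        "amount": float(raw["amt"]),
        "currency": raw["cur"].upper(),
        "customer_id": raw["customer"],
        "merchant_id": raw["merchant"],
        "ip_country": raw.get("ip_geo", "??"),
        "billing_country": raw.get("bill_country", "??"),
        "device_trust_score": float(raw.get("device_score", 0.5)),
        "velocity_1h": int(raw.get("txn_count_1h", 0)),
    }

event = normalize_event({
    "id": "txn_123", "amt": "249.99", "cur": "usd",
    "customer": "cus_456", "merchant": "m_789",
    "ip_geo": "US", "bill_country": "US",
    "device_score": "0.32", "txn_count_1h": 7,
})
print(event["currency"])  # USD
```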
Implementation
1) Define state and nodes
Use a typed state object so every node reads and writes the same schema. In payments work, that schema should include raw transaction data, enriched signals, risk score, decision, and explanation.
from typing import TypedDict, Annotated
import operator

from langgraph.graph import StateGraph, START, END


class FraudState(TypedDict):
    transaction_id: str
    amount: float
    currency: str
    customer_id: str
    merchant_id: str
    ip_country: str
    billing_country: str
    device_trust_score: float
    velocity_1h: int
    risk_score: float
    decision: str
    explanation: str
    audit_log: Annotated[list[str], operator.add]


def enrich_risk(state: FraudState) -> dict:
    mismatch = state["ip_country"] != state["billing_country"]
    score = 0.2
    if mismatch:
        score += 0.25
    if state["velocity_1h"] > 5:
        score += 0.3
    if state["device_trust_score"] < 0.4:
        score += 0.25
    return {
        "risk_score": min(score, 1.0),
        "audit_log": [f"enriched risk for {state['transaction_id']}"],
    }


def decide(state: FraudState) -> dict:
    score = state["risk_score"]
    if score >= 0.8:
        decision = "block"
    elif score >= 0.5:
        decision = "review"
    else:
        decision = "approve"
    return {
        "decision": decision,
        "audit_log": [f"decision={decision} score={score}"],
    }
2) Add a policy node with explicit payment controls
This is where you enforce hard rules that should never be delegated to an LLM. Keep compliance checks deterministic.
def policy_check(state: FraudState) -> dict:
    blocked_countries = {"IR", "KP"}
    if state["ip_country"] in blocked_countries:
        return {
            "decision": "block",
            "explanation": "Blocked by country policy",
            "audit_log": [f"policy block on {state['ip_country']}"],
        }
    return {"audit_log": ["policy passed"]}
3) Build the LangGraph workflow
Use StateGraph, add nodes with add_node, connect them with add_edge, then compile with compile(). This pattern keeps the graph readable when you add more checks later.
def explain_case(state: FraudState) -> dict:
    text = (
        f"Transaction {state['transaction_id']} scored {state['risk_score']:.2f}. "
        f"IP country={state['ip_country']}, billing country={state['billing_country']}, "
        f"velocity_1h={state['velocity_1h']}, device_trust_score={state['device_trust_score']}."
    )
    return {"explanation": text, "audit_log": ["generated analyst explanation"]}


graph = StateGraph(FraudState)
graph.add_node("policy_check", policy_check)
graph.add_node("enrich_risk", enrich_risk)
graph.add_node("decide", decide)
graph.add_node("explain_case", explain_case)

graph.add_edge(START, "policy_check")
graph.add_edge("policy_check", "enrich_risk")
graph.add_edge("enrich_risk", "decide")
graph.add_edge("decide", "explain_case")
graph.add_edge("explain_case", END)

fraud_agent = graph.compile()
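One caveat with the linear wiring above: a "block" written by policy_check is later overwritten by decide. A conditional edge can short-circuit policy blocks past scoring. Here is a minimal routing-function sketch (the node names match the graph above; the actual rewiring is shown in comments so you can apply it to your own graph object):

```python
# Sketch: route based on the decision policy_check wrote to state.
# In the graph above, decision is "" unless policy_check set it to "block".

def route_after_policy(state: dict) -> str:
    # "blocked" -> skip scoring entirely; "clean" -> continue to enrichment.
    return "blocked" if state.get("decision") == "block" else "clean"

# Rewiring sketch, replacing the fixed policy_check -> enrich_risk edge:
#   graph.add_conditional_edges(
#       "policy_check",
#       route_after_policy,
#       {"blocked": "explain_case", "clean": "enrich_risk"},
#   )

print(route_after_policy({"decision": "block"}))  # blocked
print(route_after_policy({"decision": ""}))       # clean
```

This keeps the branch decision in a plain, unit-testable Python function, which matches the requirement that the decision router stays deterministic and auditable.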
4) Run the agent on a payment event
For production systems you’d call this from your payment authorization service or fraud microservice. The output is a structured decision you can feed into your gateway or case management system.
result = fraud_agent.invoke({
    "transaction_id": "txn_123",
    "amount": 249.99,
    "currency": "USD",
    "customer_id": "cus_456",
    "merchant_id": "m_789",
    "ip_country": "US",
    "billing_country": "US",
    "device_trust_score": 0.32,
    "velocity_1h": 7,
    "risk_score": 0.0,
    "decision": "",
    "explanation": "",
    "audit_log": [],
})

print(result["decision"])
print(result["explanation"])
print(result["audit_log"])
Production Considerations
- Keep hard blocks outside model logic
  - Sanctions screening, geography restrictions, and PCI-related controls should be deterministic.
  - Use the graph to orchestrate them; do not ask an LLM to “decide” on compliance.
- Persist full audit trails
  - Store input payloads, node outputs, timestamps, and final decisions.
  - Payments teams need this for chargeback disputes, regulator requests, and internal reviews.
- Control data residency
  - If transaction data must stay in-region, keep enrichment services and model calls inside that boundary.
  - Redact PANs and sensitive identifiers before any external LLM call.
- Monitor false positives by segment
  - Track approval rate by merchant category code, geography, issuer BIN range, and customer cohort.
  - A fraud model that blocks legitimate cross-border commerce will create direct revenue loss.
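The PAN-redaction point can be sketched in a few lines. This regex pass is an illustration only, not a complete PCI control; it keeps the first six and last four digits, which is a common masking convention:

```python
import re

# Sketch: mask anything that looks like a 13-19 digit card number before text
# leaves your boundary (e.g. in a prompt sent to an external LLM).
PAN_RE = re.compile(r"\b\d{13,19}\b")

def redact_pans(text: str) -> str:
    # Keep first 6 and last 4 digits, star out the middle.
    return PAN_RE.sub(
        lambda m: m.group()[:6] + "*" * (len(m.group()) - 10) + m.group()[-4:],
        text,
    )

print(redact_pans("Card 4111111111111111 declined twice"))
# Card 411111******1111 declined twice
```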
Common Pitfalls
- Using the LLM as the primary fraud scorer
  - Don’t do this.
  - Use deterministic features and thresholds for scoring; use the LLM only for explanation or analyst assist.
- Skipping idempotency on payment events
  - Retries happen.
  - Key every run by transaction_id so repeated webhooks don’t generate duplicate reviews or conflicting decisions.
- Ignoring latency budgets
  - Checkout authorization has tight timing constraints.
  - Keep synchronous paths short; push deep enrichment or human review into async follow-up when possible.
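The idempotency pitfall can be illustrated with a minimal sketch. A real service would back the store with Redis or a database row under a unique constraint; the in-memory dict here only shows the pattern of keying runs by transaction_id:

```python
# Sketch: process each transaction_id at most once; duplicate webhooks replay
# the stored result instead of re-running the agent.
_decisions: dict[str, dict] = {}

def process_once(event: dict, run_agent) -> dict:
    txn_id = event["transaction_id"]
    if txn_id in _decisions:
        return _decisions[txn_id]  # duplicate delivery: return cached decision
    result = run_agent(event)
    _decisions[txn_id] = result
    return result

# Demo with a stand-in for fraud_agent.invoke:
calls = []
def fake_agent(event):
    calls.append(event["transaction_id"])
    return {"decision": "review"}

first = process_once({"transaction_id": "txn_123"}, fake_agent)
second = process_once({"transaction_id": "txn_123"}, fake_agent)  # retry
print(len(calls), first == second)  # 1 True
```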
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit