How to Build a Transaction Monitoring Agent Using LangGraph in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
transaction-monitoring · langgraph · python · wealth-management

A transaction monitoring agent for wealth management watches client activity, scores it against policy and regulatory rules, and escalates suspicious cases with enough context for an analyst to act fast. It matters because wealth firms deal with high-value transfers, cross-border activity, and complex account structures, so false negatives create compliance risk and false positives waste analyst time.

Architecture

  • Transaction intake layer

    • Pulls events from your OMS, custodian feeds, or payment rails.
    • Normalizes fields like client_id, amount, currency, jurisdiction, counterparty, and timestamp.
  • Policy/rules evaluator

    • Checks deterministic controls first: thresholds, velocity limits, sanctioned jurisdictions, unusual counterparties, and account profile mismatches.
    • Keeps the first pass explainable for audit.
  • Risk enrichment node

    • Adds client context: AUM band, PEP status, source-of-wealth flags, residency, KYC refresh age, and historical behavior.
    • This is where wealth management differs from generic AML monitoring.
  • LLM reasoning node

    • Summarizes why a transaction is unusual in plain language.
    • Produces a structured recommendation: clear, review, or escalate.
  • Case output / audit trail

    • Stores the decision path, inputs used, and rationale.
    • Supports model governance and regulator review.
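The intake layer's normalization step can be sketched as a mapping from a raw feed record onto the canonical field set. The raw field names here (`txn_ref`, `acct_holder_id`, `ts_epoch`, and so on) are hypothetical; every custodian feed will differ, so treat this as a shape to adapt:

```python
from datetime import datetime, timezone

def normalize_event(raw: dict) -> dict:
    """Map a raw custodian/OMS record onto the canonical transaction shape."""
    return {
        "transaction_id": raw["txn_ref"],          # hypothetical feed field names
        "client_id": raw["acct_holder_id"],
        "amount": float(raw["amt"]),
        "currency": raw["ccy"].upper(),
        "jurisdiction": raw["book_location"].upper(),
        "counterparty_country": raw["cpty_ctry"].upper(),
        # Normalize timestamps to UTC ISO-8601 so velocity checks compare cleanly.
        "timestamp": datetime.fromtimestamp(
            int(raw["ts_epoch"]), tz=timezone.utc
        ).isoformat(),
    }
```

Doing this once at the boundary means every downstream node can trust field names, casing, and units.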

Implementation

1. Define the state and the decision schema

Use a typed state so every node in the graph passes around the same shape. Keep raw transaction data separate from derived risk fields so you can audit what changed.

from typing import TypedDict, Literal, Optional
from langgraph.graph import StateGraph, START, END

class TxnState(TypedDict):
    transaction_id: str
    client_id: str
    amount: float
    currency: str
    jurisdiction: str
    counterparty_country: str
    kyc_age_days: int
    pep_flag: bool
    aum_usd: float
    rule_score: int
    risk_level: Literal["low", "medium", "high"]
    explanation: Optional[str]
    disposition: Optional[Literal["clear", "review", "escalate"]]

2. Add deterministic policy nodes first

In wealth management, rules should run before any model call. That gives you a clean, explainable compliance layer and avoids spending LLM calls on obvious cases. The thresholds and jurisdiction list below are illustrative, not regulatory guidance; yours should come from your firm's policy.

def rule_check(state: TxnState) -> dict:
    score = 0

    if state["amount"] >= 250000:
        score += 40
    if state["jurisdiction"] in {"IR", "KP", "SY"}:
        score += 50
    if state["counterparty_country"] != state["jurisdiction"]:
        score += 10
    if state["pep_flag"]:
        score += 15
    if state["kyc_age_days"] > 365:
        score += 10

    if score >= 70:
        risk = "high"
        disposition = "escalate"
    elif score >= 30:
        risk = "medium"
        disposition = "review"
    else:
        risk = "low"
        disposition = "clear"

    return {"rule_score": score, "risk_level": risk, "disposition": disposition}

3. Enrich the case and route only when needed

LangGraph’s add_conditional_edges is the right pattern here: the routing function returns the next node’s name, or END to finish the run. Only medium- and high-risk transactions proceed to enrichment and the LLM summarizer; low-risk cases end immediately with the deterministic disposition.

def enrich_context(state: TxnState) -> dict:
    # Replace this with real lookups from CRM/KYC systems.
    source_of_wealth_risk = "elevated" if state["aum_usd"] > 5000000 else "standard"

    return {
        "explanation": (
            f"Client AUM ${state['aum_usd']:,.0f}, "
            f"PEP={state['pep_flag']}, "
            f"KYC age={state['kyc_age_days']} days, "
            f"SOW profile={source_of_wealth_risk}"
        )
    }

def route_after_rules(state: TxnState) -> str:
    return "enrich_context" if state["risk_level"] in {"medium", "high"} else END

graph = StateGraph(TxnState)
graph.add_node("rule_check", rule_check)
graph.add_node("enrich_context", enrich_context)
graph.add_edge(START, "rule_check")
graph.add_conditional_edges("rule_check", route_after_rules)
graph.add_edge("enrich_context", END)

app = graph.compile()

4. Add an LLM review step for analyst-ready summaries

For production you can plug in a chat model node using LangGraph’s standard callable pattern. The model should not make the final compliance decision; it should produce a structured rationale for analysts.

from langchain_core.messages import SystemMessage, HumanMessage

def llm_review(state: TxnState) -> dict:
    prompt = [
        SystemMessage(content=(
            "You are a transaction monitoring assistant for wealth management. "
            "Explain why this transaction is unusual using only the provided facts. "
            "Return concise text suitable for an audit case note."
        )),
        HumanMessage(content=(
            f"Transaction amount: {state['amount']} {state['currency']}\n"
            f"Jurisdiction: {state['jurisdiction']}\n"
            f"Counterparty country: {state['counterparty_country']}\n"
            f"Risk level from rules: {state['risk_level']}\n"
            f"Context: {state.get('explanation', '')}"
        )),
    ]

    # Replace this stub with your actual model call, e.g. with a LangChain
    # chat model: summary = llm.invoke(prompt).content
    summary = (
        f"High-risk transfer flagged due to rule score {state['rule_score']}. "
        f"{state.get('explanation', '')}"
    )

    return {
        "explanation": summary,
        "disposition": "escalate" if state["risk_level"] == "high" else "review",
    }
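If your model returns JSON, validate the recommendation against the allowed dispositions before it touches the case record, so a malformed answer can never silently clear a case. A minimal sketch using only the standard library (`parse_recommendation` and the fallback behavior are illustrative, not part of LangGraph):

```python
import json
from typing import Literal, TypedDict, cast

ALLOWED_DISPOSITIONS = {"clear", "review", "escalate"}

class Recommendation(TypedDict):
    disposition: Literal["clear", "review", "escalate"]
    rationale: str

def parse_recommendation(raw: str) -> Recommendation:
    """Validate the model's JSON output; fail safe to 'review' on bad data."""
    try:
        data = json.loads(raw)
        disposition = data.get("disposition")
        rationale = str(data.get("rationale", ""))
        if disposition not in ALLOWED_DISPOSITIONS:
            disposition = "review"  # never trust a malformed model answer
    except (json.JSONDecodeError, AttributeError):
        disposition, rationale = "review", "unparseable model output"
    return cast(Recommendation, {"disposition": disposition, "rationale": rationale})
```

Failing toward "review" rather than "clear" keeps a broken model response from suppressing an alert.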

If you want a single graph that includes this step:

graph = StateGraph(TxnState)
graph.add_node("rule_check", rule_check)
graph.add_node("enrich_context", enrich_context)
graph.add_node("llm_review", llm_review)

graph.add_edge(START, "rule_check")
graph.add_conditional_edges("rule_check", route_after_rules)
graph.add_edge("enrich_context", "llm_review")
graph.add_edge("llm_review", END)

app = graph.compile()

Run it like this. For the input below the rule score is 75 (large amount +40, cross-border counterparty +10, PEP +15, stale KYC +10), so the case routes through enrichment and LLM review and ends with an escalate disposition:

result = app.invoke({
    "transaction_id": "txn_1001",
    "client_id": "cli_42",
    "amount": 350000,
    "currency": "USD",
    "jurisdiction": "US",
    "counterparty_country": "AE",
    "kyc_age_days": 420,
    "pep_flag": True,
    "aum_usd": 12000000,
})
print(result)

Production Considerations

  • Auditability

    • Persist every node input/output with timestamps and versioned policy logic.
    • Regulators will ask why a case was escalated; your graph trace should answer that without reconstruction work.
  • Data residency

    • Keep client PII and transaction data in-region where required.
    • If your LLM endpoint crosses borders, tokenize or redact sensitive fields before invocation.
  • Guardrails

    • Never let the LLM override hard compliance rules.
    • Use it for narrative summaries and triage support only; final disposition should remain deterministic or analyst-approved.
  • Monitoring

    • Track false positive rate by segment: HNW clients, trusts, offshore structures, cross-border wires.
    • Watch drift in rule hit rates after policy changes or market events.
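The redaction point above can be sketched as a pre-prompt scrub that runs before any state fields are interpolated into messages. The field list and masking rule are illustrative; your compliance team defines the real inventory:

```python
import re

# Fields that should never reach an external LLM endpoint (illustrative list).
SENSITIVE_FIELDS = {"account_number", "client_name", "address", "tax_id"}

def redact_for_prompt(record: dict) -> dict:
    """Drop sensitive fields and mask long digit runs in remaining strings."""
    clean = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    for key, value in clean.items():
        if isinstance(value, str):
            # Mask 8+ digit runs (embedded account or IBAN fragments).
            clean[key] = re.sub(r"\d{8,}", "[REDACTED]", value)
    return clean
```

Call this on the state dict just before building the prompt messages, and keep the unredacted record only in your in-region case store.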

Common Pitfalls

  • Letting the model decide escalation

    This is a compliance bug. Use rules to set clear, review, or escalate, then let the LLM explain the result.

  • Mixing raw PII into prompts

    Don’t send full account numbers, addresses, or unnecessary identifiers to the model. Redact early and keep prompts minimal.

  • Using one threshold for all clients

    Wealth management needs segmentation. A $250k wire means something different for an HNW family office than it does for a retail-style advisory account.

  • Skipping versioning on policies

    If thresholds change and you don’t version them, your audit trail becomes useless. Store policy version alongside each case result.
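The last two pitfalls can be addressed together: attach a version string to every threshold set and key thresholds by client segment. A minimal sketch, assuming hypothetical segment names and illustrative dollar amounts (not regulatory guidance):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """Versioned thresholds; store policy.version with every case result."""
    version: str
    amount_threshold_usd: float

# Segment-specific policies (illustrative values and segment names).
POLICIES = {
    "family_office": Policy(version="2026.04", amount_threshold_usd=1_000_000),
    "retail_advisory": Policy(version="2026.04", amount_threshold_usd=100_000),
}

def breaches_threshold(segment: str, amount_usd: float) -> bool:
    # Unknown segments fall back to the strictest policy (conservative default).
    policy = POLICIES.get(segment, POLICIES["retail_advisory"])
    return amount_usd >= policy.amount_threshold_usd
```

Persisting `policy.version` alongside each disposition is what lets an auditor replay a 2025 case under 2025 rules.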


By Cyprian Aarons, AI Consultant at Topiax.