How to Build a Compliance Checking Agent Using LangGraph in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
compliance-checking · langgraph · python · wealth-management

A compliance checking agent in wealth management reviews client communications, investment proposals, and account actions against policy rules before anything is sent or executed. It matters because the cost of a bad recommendation is not just a support ticket; it can trigger regulatory exposure, client harm, and audit findings.

Architecture

Build this agent as a small state machine, not a monolith.

  • Input normalizer
    • Converts raw advisor notes, emails, chat messages, or portfolio changes into a structured payload.
  • Policy retrieval layer
    • Pulls the relevant compliance rules for jurisdiction, product type, client classification, and channel.
  • Compliance evaluator
    • Checks the request against firm policy, suitability constraints, restricted lists, and disclosure requirements.
  • Escalation router
    • Decides whether to approve, block, or route to human compliance review.
  • Audit logger
    • Stores the full decision trail: input, matched policies, model output, final action, timestamp, and reviewer identity if escalated.
  • Execution guard
    • Prevents downstream systems from acting until the agent returns an explicit allow decision.

Implementation

1) Define the state and decision schema

Use TypedDict for graph state and a Pydantic model for structured decisions. In wealth management, you want deterministic outputs because audit teams will ask what was checked and why.

from typing import TypedDict, Annotated, Literal
from pydantic import BaseModel, Field
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class ComplianceDecision(BaseModel):
    # A fixed vocabulary keeps the verdict machine-checkable for audit.
    status: Literal["approve", "reject", "escalate"]
    reason: str
    rule_ids: list[str] = Field(default_factory=list)

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    client_profile: dict
    request: dict
    policies: list[dict]
    decision: dict
    audit_log: list[dict]

2) Add nodes for policy lookup and evaluation

This example uses plain Python logic so you can wire it to your own policy store later. In production you would replace get_policies() with a database query or retrieval call keyed by jurisdiction and product; a keyed-lookup sketch follows the evaluator below.

def get_policies(state: AgentState):
    request = state["request"]
    # In production these two values key the policy lookup; the static list
    # below is a stand-in so the example runs without a policy store.
    jurisdiction = request.get("jurisdiction", "US")
    product = request.get("product_type", "equity")

    policies = [
        {"id": "KYC-001", "rule": "client_must_be_verified"},
        {"id": "SUIT-010", "rule": "no_high_risk_product_for_conservative_client"},
        {"id": "DISC-002", "rule": "must_disclose_conflicts"},
    ]

    return {"policies": policies}

def evaluate_compliance(state: AgentState):
    profile = state["client_profile"]
    request = state["request"]
    policies = state["policies"]

    violated = []

    if not profile.get("kyc_verified", False):
        violated.append("KYC-001")

    if profile.get("risk_tolerance") == "conservative" and request.get("product_type") in {"options", "crypto"}:
        violated.append("SUIT-010")

    if request.get("conflict_disclosure_required") and not request.get("conflict_disclosed"):
        violated.append("DISC-002")

    if violated:
        decision = ComplianceDecision(
            status="reject" if "KYC-001" in violated else "escalate",
            reason="Policy violation detected",
            rule_ids=violated,
        )
    else:
        decision = ComplianceDecision(
            status="approve",
            reason="No policy violations detected",
            rule_ids=[],
        )

    return {"decision": decision.model_dump()}

3) Add routing and audit logging

This is where LangGraph earns its keep. The graph makes approval paths explicit and keeps rejection/escalation separate from execution.

def route_decision(state: AgentState):
    status = state["decision"]["status"]
    if status == "approve":
        return "approved"
    if status == "escalate":
        return "human_review"
    return "rejected"

def log_audit(state: AgentState):
    entry = {
        "client_id": state["client_profile"].get("client_id"),
        "request_id": state["request"].get("request_id"),
        "decision": state["decision"],
        "policies_checked": [p["id"] for p in state["policies"]],
    }
    return {"audit_log": state.get("audit_log", []) + [entry]}

def finalize_approval(state: AgentState):
    return {"audit_log": state.get("audit_log", []) + [{"event": "approved_for_execution"}]}

def finalize_rejection(state: AgentState):
    return {"audit_log": state.get("audit_log", []) + [{"event": "blocked"}]}

def finalize_human_review(state: AgentState):
    return {"audit_log": state.get("audit_log", []) + [{"event": "sent_to_compliance_queue"}]}

4) Assemble the LangGraph workflow

Use StateGraph, add_node, add_edge, and add_conditional_edges. This pattern is easy to test and easy to explain during model risk review.

graph = StateGraph(AgentState)

graph.add_node("get_policies", get_policies)
graph.add_node("evaluate_compliance", evaluate_compliance)
graph.add_node("log_audit", log_audit)
graph.add_node("finalize_approval", finalize_approval)
graph.add_node("finalize_rejection", finalize_rejection)
graph.add_node("finalize_human_review", finalize_human_review)

graph.add_edge(START, "get_policies")
graph.add_edge("get_policies", "evaluate_compliance")
graph.add_edge("evaluate_compliance", "log_audit")

graph.add_conditional_edges(
    "log_audit",
    route_decision,
    {
        "approved": "finalize_approval",
        "rejected": "finalize_rejection",
        "human_review": "finalize_human_review",
    },
)

graph.add_edge("finalize_approval", END)
graph.add_edge("finalize_rejection", END)
graph.add_edge("finalize_human_review", END)

app = graph.compile()

Run it with a real payload:

result = app.invoke(
    {
        "messages": [],
        "client_profile": {
            "client_id": "C123",
            "kyc_verified": True,
            "risk_tolerance": "conservative",
        },
        "request": {
            "request_id": "R456",
            "jurisdiction": "US",
            "product_type": "crypto",
            "conflict_disclosure_required": True,
            # conflict_disclosed intentionally omitted
        },
        "policies": [],
        "_decision_ignored_just_for_init_compatibility": None,
        "_audit_log_ignored_just_for_init_compatibility": None,
        # initialize required keys below
        # these are overwritten by nodes anyway
        # but keep the shape stable for testing
        # especially when you add checkpoints later
        # in production code use a cleaner initializer helper
        # to avoid noisy boilerplate
        # 
        # Note: TypedDict doesn't enforce at runtime.
        # This is about keeping your own contract consistent.
        
        # actual runtime values:
        # decision/audit_log are set by nodes
        
        
        

        
        
        
        
        
        

        
        

        
        
        
        
        
        
        
        

        
        

        
        
        
        
        
        

        
        
        
        
        
        
        
        

        
        

        
        
        
        
        
        
        
        

        
        

        
        
        
        
        
        

        
        

        
        
        
        
        
        

        
        

        
        
        
        
        
        

        
        

        
        
        
        
        
        

        
        

        
        
        
        
        
        

        
        

        
        
        
        
        
        

        
        
        
        
        
        


        
        



        
        




        
        


        
        


        
        


        
        



        
        
        






        
        
        
        
        
        
        
    

        
    
    

    
    

    
    

    
    

    
    

    
    

    
    

    
    

    
    

    
    

    
    
    
    
    
}
)
print(result["decision"])
print(result["audit_log"])

Production Considerations

  • Checkpoint every decision path
    • Use LangGraph checkpointing so compliance teams can replay a case exactly as it was evaluated (see the sketch after this list).
  • Separate data residency by region
    • Keep EU client data in EU-hosted storage and avoid sending sensitive client records across borders unless legal review has approved it.
  • Add hard guardrails before execution
    • The agent should only emit approve after all mandatory checks pass. Never let an LLM directly call trade execution or client messaging tools.
  • Log for audit, not just observability
    • Store matched rule IDs, source policy version, input hashes, reviewer overrides, and timestamps. Wealth management audits care about traceability more than token counts.
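
A minimal sketch of the checkpointing point above, using the in-memory checkpointer that ships with LangGraph; a production deployment would swap in a durable backend such as the SQLite or Postgres checkpointer. Here payload stands for the same input dict used in the invoke example earlier.

from langgraph.checkpoint.memory import MemorySaver

# Compile the same graph with a checkpointer so every step of a case is persisted.
checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)

# Key each case by its request ID so reviewers can pull up the exact run later.
config = {"configurable": {"thread_id": "R456"}}
result = app.invoke(payload, config=config)

# Replay the recorded state history for this case, step by step.
for snapshot in app.get_state_history(config):
    print(snapshot.next, snapshot.values.get("decision"))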

Common Pitfalls

  1. Using free-form LLM output as the final compliance result

    • Fix it by forcing structured output through Pydantic models like ComplianceDecision (a fail-closed sketch follows this list).
    • If the model cannot produce a valid structured response, fail closed and escalate.
  2. Mixing policy logic with orchestration logic

    • Keep rule evaluation in dedicated functions or services.
    • Let LangGraph handle routing; let your compliance engine handle policy interpretation.
  3. Ignoring jurisdiction-specific rules

    • A suitability check for US retail clients is not enough for cross-border wealth accounts.
    • Always key policy retrieval on jurisdiction, entity type, product class, and channel before evaluation.
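
As a sketch of the fail-closed pattern from pitfall 1, here is an LLM-assisted review node that forces output into ComplianceDecision and escalates whenever validation fails. It assumes a LangChain chat model; the ChatOpenAI model choice and the llm_assisted_review function are illustrative, not part of the graph above.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
structured_llm = llm.with_structured_output(ComplianceDecision)

def llm_assisted_review(state: AgentState):
    prompt = (
        "Review this request against the attached policies and return a decision.\n"
        f"Request: {state['request']}\nPolicies: {state['policies']}"
    )
    try:
        decision = structured_llm.invoke(prompt)
    except Exception:
        # Fail closed: if the model cannot produce a valid ComplianceDecision,
        # never approve by default; send the case to human review instead.
        decision = ComplianceDecision(
            status="escalate",
            reason="Model output failed validation",
            rule_ids=[],
        )
    return {"decision": decision.model_dump()}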

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

