How to Build a Compliance-Checking Agent Using LangGraph in Python for Pension Funds

By Cyprian Aarons · Updated 2026-04-21

Tags: compliance-checking, langgraph, python, pension-funds

A compliance checking agent for pension funds reviews member requests, investment instructions, benefit changes, and operational workflows against policy and regulation before anything moves forward. It matters because pension operations are high-trust, heavily regulated, and expensive to unwind when a bad decision slips through.

Architecture

  • Input normalizer

    • Takes raw requests from CRM, case management, email, or API payloads.
    • Converts them into a strict schema with fields like request_type, jurisdiction, member_id, effective_date, and documents.
  • Policy retrieval layer

    • Pulls the relevant internal rules, trustee policies, fund bylaws, and jurisdiction-specific constraints.
    • For pension funds, this should separate hard rules from guidance notes.
  • Compliance decision node

    • Applies deterministic checks first.
    • Uses an LLM only for structured reasoning over ambiguous cases, never as the final authority on mandatory rules.
  • Audit trail writer

    • Stores every input, rule hit, model output, and final decision.
    • This is non-negotiable for pensions because you need explainability for trustees, auditors, and regulators.
  • Exception handler

    • Routes risky or incomplete cases to human review.
    • Handles missing KYC data, conflicting beneficiary records, residency restrictions, or benefit calculation anomalies.
  • Decision publisher

    • Emits an approval, rejection, or escalation result to downstream systems.
    • Keeps the output small and machine-readable.
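The input normalizer above can be sketched as a small coercion function. This is a minimal sketch, assuming a plain-dict payload; the field names follow the schema listed earlier, but the specific coercions (case-folding, ISO dates) are illustrative choices, not a fixed spec.

```python
from datetime import date

REQUIRED_FIELDS = ("request_type", "jurisdiction", "member_id", "effective_date")

def normalize_request(raw: dict) -> dict:
    """Coerce a raw CRM/email/API payload into the strict schema.

    Raises ValueError for unmappable payloads so nothing ambiguous
    slips into the compliance pipeline.
    """
    missing = [f for f in REQUIRED_FIELDS if f not in raw]
    if missing:
        raise ValueError(f"unmappable request, missing fields: {missing}")
    return {
        "request_type": str(raw["request_type"]).strip().lower(),
        "jurisdiction": str(raw["jurisdiction"]).strip().upper(),
        "member_id": str(raw["member_id"]),
        "effective_date": date.fromisoformat(str(raw["effective_date"])),
        "documents": list(raw.get("documents", [])),
    }
```

Failing loudly on missing fields keeps malformed requests out of the decision nodes entirely, which is cheaper than detecting them downstream.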

Implementation

1. Define the state and compliance checks

Use a typed state so every node in the graph works with the same contract. Keep deterministic checks in Python; don’t hide hard compliance logic inside prompts.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ComplianceState(TypedDict):
    request_type: str
    jurisdiction: str
    member_status: str
    amount: float
    docs_received: bool
    risk_flags: list[str]
    decision: str
    rationale: str

def deterministic_check(state: ComplianceState) -> ComplianceState:
    flags = []

    # Hard rule: only jurisdictions the fund actually operates in.
    if state["jurisdiction"] not in {"UK", "IE", "ZA"}:
        flags.append("unsupported_jurisdiction")

    # Hard rule: lump-sum requests are only valid for retired members.
    if state["request_type"] == "lump_sum" and state["member_status"] != "retired":
        flags.append("invalid_lump_sum_request")

    # Hard rule: never decide without supporting documents on file.
    if not state["docs_received"]:
        flags.append("missing_supporting_documents")

    decision = "approve" if not flags else "review"
    rationale = "No hard-rule violations detected." if not flags else ", ".join(flags)

    return {
        **state,
        "risk_flags": flags,
        "decision": decision,
        "rationale": rationale,
    }

2. Add an LLM review node for ambiguous cases

Use the model only when the deterministic layer returns review. The model should return structured output that can be logged and inspected.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def llm_review(state: ComplianceState) -> ComplianceState:
    if state["decision"] != "review":
        return state

    prompt = f"""
You are reviewing a pension fund compliance case.

Request type: {state['request_type']}
Jurisdiction: {state['jurisdiction']}
Member status: {state['member_status']}
Amount: {state['amount']}
Docs received: {state['docs_received']}
Risk flags: {state['risk_flags']}

Return one of: approve, reject, escalate.
Give a short rationale.
"""

    response = llm.invoke(prompt)
    text = response.content.lower()

    # Priority order: rejection wins, then escalation. Only an explicit
    # "approve" approves; anything ambiguous falls back to human review.
    if "reject" in text:
        decision = "reject"
    elif "escalate" in text:
        decision = "escalate"
    elif "approve" in text:
        decision = "approve"
    else:
        decision = "escalate"

    return {
        **state,
        "decision": decision,
        "rationale": response.content,
    }

3. Build the LangGraph workflow with conditional routing

This is the actual pattern you want in production: deterministic gate first, then LLM review only when needed. StateGraph gives you explicit control over transitions.

def route_after_check(state: ComplianceState) -> str:
    return "llm_review" if state["decision"] == "review" else END

graph = StateGraph(ComplianceState)

graph.add_node("deterministic_check", deterministic_check)
graph.add_node("llm_review", llm_review)

graph.add_edge(START, "deterministic_check")
graph.add_conditional_edges(
    "deterministic_check",
    route_after_check,
    {
        "llm_review": "llm_review",
        END: END,
    },
)
graph.add_edge("llm_review", END)

app = graph.compile()
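Because route_after_check is a plain function, the routing logic can be unit-tested without compiling the graph or calling a model. A minimal sketch, using a local stand-in for LangGraph's END sentinel (which is the string "__end__"):

```python
END = "__end__"  # stand-in for langgraph.graph.END

def route_after_check(state: dict) -> str:
    # Identical routing rule to the graph above: only "review" cases
    # reach the LLM node; everything else terminates immediately.
    return "llm_review" if state["decision"] == "review" else END

# Deterministic routing means these checks can run in CI on every change.
assert route_after_check({"decision": "review"}) == "llm_review"
assert route_after_check({"decision": "approve"}) == END
assert route_after_check({"decision": "reject"}) == END
```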

4. Run the agent and capture an auditable output

For pension funds, the final object should be easy to persist into your audit store or case management system. Don’t return free-form prose to downstream services.

input_state: ComplianceState = {
    "request_type": "lump_sum",
    "jurisdiction": "UK",
    "member_status": "active",
    "amount": 25000.0,
    "docs_received": False,
    "risk_flags": [],
    "decision": "",
    "rationale": "",
}

result = app.invoke(input_state)

print(result["decision"])
print(result["rationale"])
print(result["risk_flags"])

Production Considerations

  • Deploy in-region

    • Pension data often has residency requirements. Keep member records and prompts inside approved regions and avoid sending raw PII to external services unless your legal basis and vendor contracts allow it.
  • Log every transition

    • Store input state, node outputs, model version, prompt template version, and final decision.
    • Auditors will ask why a case was escalated six months later. Your logs need to answer that without reconstruction work.
  • Put hard guardrails before the model

    • Enforce eligibility rules, contribution caps, retirement age checks, and jurisdiction filters in code.
    • The LLM should explain or classify edge cases; it should not decide statutory eligibility.
  • Separate sensitive fields

    • Mask national IDs, bank details, beneficiary names where possible.
    • Use tokenization or field-level encryption so your compliance agent does not become a data leakage path.
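The masking point above can be sketched as a redaction pass applied to state before any model call or log write. The sensitive field names here are illustrative; your schema will have its own list.

```python
def mask_tail(value: str, keep: int = 4) -> str:
    """Mask all but the last `keep` characters of an identifier."""
    if len(value) <= keep:
        return "*" * len(value)
    return "*" * (len(value) - keep) + value[-keep:]

# Illustrative: substitute the sensitive fields from your own schema.
SENSITIVE_FIELDS = {"national_id", "bank_account", "beneficiary_name"}

def redact(state: dict) -> dict:
    """Return a copy of the state that is safe to log or send to a model."""
    return {
        k: (mask_tail(str(v)) if k in SENSITIVE_FIELDS else v)
        for k, v in state.items()
    }
```

Keeping the last few characters preserves enough signal for a human reviewer to cross-check a record without exposing the full identifier.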

Common Pitfalls

  • Letting the LLM make final compliance decisions

    • This is the fastest way to create regulatory exposure.
    • Fix it by using deterministic validation first and restricting the model to escalation or explanation on ambiguous cases.
  • Ignoring jurisdiction-specific rules

    • Pension compliance is not generic finance compliance. UK auto-enrolment rules are not South African withdrawal rules.
    • Fix it by routing policy retrieval by jurisdiction before any reasoning step.
  • No audit-grade traceability

    • If you cannot show which rule triggered which outcome, your workflow is weak for trustees and regulators.
    • Fix it by persisting graph inputs/outputs per node plus timestamps and model metadata.
  • Sending full member records into prompts

    • That creates unnecessary privacy risk.
    • Fix it by passing only minimum necessary fields into LangGraph nodes and redacting everything else before model calls.
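The jurisdiction-routing fix above can be sketched as a policy index resolved before any reasoning step, failing closed on unknown jurisdictions. The policy names below are illustrative placeholders; a real deployment would load rule documents from a policy store.

```python
# Illustrative policy index keyed by jurisdiction. In production this
# would be loaded from your policy store, not hard-coded.
POLICY_SETS = {
    "UK": ["auto_enrolment", "lifetime_allowance_checks"],
    "IE": ["standard_fund_threshold"],
    "ZA": ["two_pot_withdrawal_rules"],
}

def policies_for(jurisdiction: str) -> list[str]:
    """Resolve the policy set before any reasoning step; fail closed."""
    try:
        return POLICY_SETS[jurisdiction]
    except KeyError:
        raise ValueError(
            f"no policy set for jurisdiction {jurisdiction!r}; escalate"
        )
```

Failing closed here means an unrecognized jurisdiction becomes an escalation, never a silent fall-through to generic rules.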

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
