How to Build a Compliance-Checking Agent Using LangGraph in Python for Fintech
A compliance-checking agent in fintech reviews transactions, customer actions, or support requests against policy rules before they are executed or escalated. It matters because the cost of a bad decision is not just a failed workflow; it can mean regulatory exposure, audit findings, and real money lost.
Architecture
A production compliance agent built with LangGraph usually needs these components:
- **Input normalizer**
  - Converts raw transaction or case data into a consistent schema.
  - Handles missing fields, currency normalization, and country codes.
- **Policy engine node**
  - Applies deterministic checks first.
  - Examples: sanctions screening flags, transaction limits, KYC status, high-risk jurisdiction rules.
- **LLM interpretation node**
  - Used only where policy text needs classification or explanation.
  - Never let the model make the final compliance decision alone.
- **Escalation router**
  - Routes borderline cases to manual review.
  - Sends clear reasons and evidence to an analyst queue.
- **Audit logger**
  - Captures inputs, outputs, rule hits, and final decision.
  - Required for traceability and regulator review.
- **Decision store**
  - Persists approved/rejected/escalated outcomes with timestamps and policy version (see the record sketch after this list).
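To make the audit logger and decision store concrete, here is a minimal sketch of a decision record. The field names are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical decision record; field names are illustrative assumptions.
@dataclass
class DecisionRecord:
    transaction_id: str
    decision: str        # "approve" | "reject" | "escalate"
    reasons: list[str]   # every rule hit and model suggestion
    policy_version: str  # the rule set active at decision time
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```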
Implementation
1) Define the graph state and deterministic checks
Use TypedDict for state and keep the core compliance logic explicit. The LLM should assist with interpretation, not replace hard rules.
```python
from typing import TypedDict, Literal

from langgraph.graph import StateGraph, END

Decision = Literal["approve", "reject", "escalate"]

class ComplianceState(TypedDict):
    transaction_id: str
    amount: float
    country: str
    customer_kyc_status: str
    is_sanctioned_country: bool
    risk_score: int
    decision: Decision
    reasons: list[str]

def normalize_input(state: ComplianceState) -> ComplianceState:
    # Normalize country codes and reset the reason trail for this run.
    state["country"] = state["country"].upper()
    state["reasons"] = []
    return state

def rule_check(state: ComplianceState) -> ComplianceState:
    # Hard controls short-circuit before any model is involved.
    if state["customer_kyc_status"] != "verified":
        state["decision"] = "reject"
        state["reasons"].append("KYC not verified")
        return state
    if state["is_sanctioned_country"]:
        state["decision"] = "reject"
        state["reasons"].append("Sanctioned jurisdiction")
        return state
    # Thresholds are hardcoded here for clarity; in production, load them
    # from versioned policy config (see Production Considerations).
    if state["amount"] >= 10000 or state["risk_score"] >= 80:
        state["decision"] = "escalate"
        state["reasons"].append("High-value or high-risk transaction")
        return state
    state["decision"] = "approve"
    state["reasons"].append("Passed deterministic checks")
    return state
```
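Because these are plain functions, the short-circuit behavior is easy to sanity-check without a graph or a model. The payload below is illustrative:

```python
# Illustrative payload; a failed KYC check should reject immediately.
case: ComplianceState = {
    "transaction_id": "tx_00001",
    "amount": 50.0,
    "country": "de",
    "customer_kyc_status": "pending",
    "is_sanctioned_country": False,
    "risk_score": 10,
    "decision": "approve",
    "reasons": [],
}
print(rule_check(normalize_input(case))["decision"])  # -> "reject"
```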
2) Add an LLM review node for ambiguous cases
This pattern is useful when policy language is messy, such as interpreting free-text merchant descriptions or support notes. Keep the output constrained to a small set of labels.
```python
from langchain_openai import ChatOpenAI

# temperature=0 keeps the labeling as deterministic as the model allows.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def llm_review(state: ComplianceState) -> ComplianceState:
    prompt = (
        f"Review this fintech compliance case.\n"
        f"Transaction ID: {state['transaction_id']}\n"
        f"Country: {state['country']}\n"
        f"Amount: {state['amount']}\n"
        f"KYC: {state['customer_kyc_status']}\n"
        f"Known reasons: {state['reasons']}\n\n"
        f"Return one label only: approve, reject, escalate."
    )
    response = llm.invoke(prompt).content.strip().lower()
    if response in ("approve", "reject", "escalate"):
        state["decision"] = response  # type: ignore[assignment]
        state["reasons"].append(f"LLM review suggested {response}")
    else:
        # Fail closed: anything unparseable goes to a human.
        state["decision"] = "escalate"
        state["reasons"].append("LLM returned invalid label; escalated")
    return state
```
3) Wire routing with StateGraph.add_conditional_edges
This is where LangGraph earns its keep. Deterministic rules decide whether the LLM is even allowed into the flow.
```python
def route_after_rules(state: ComplianceState) -> str:
    # Only escalated cases reach the LLM; approve/reject are final.
    if state["decision"] == "escalate":
        return "llm_review"
    return "finalize"

def finalize(state: ComplianceState) -> ComplianceState:
    # Replace this with a database write + audit event emission.
    print(
        {
            "transaction_id": state["transaction_id"],
            "decision": state["decision"],
            "reasons": state["reasons"],
        }
    )
    return state

graph = StateGraph(ComplianceState)
graph.add_node("normalize_input", normalize_input)
graph.add_node("rule_check", rule_check)
graph.add_node("llm_review", llm_review)
graph.add_node("finalize", finalize)

graph.set_entry_point("normalize_input")
graph.add_edge("normalize_input", "rule_check")
graph.add_conditional_edges(
    "rule_check",
    route_after_rules,
    {
        "llm_review": "llm_review",
        "finalize": "finalize",
    },
)
graph.add_edge("llm_review", "finalize")
graph.add_edge("finalize", END)

app = graph.compile()
```
4) Execute with a real case payload
Keep your runtime input small and auditable. In fintech, every field should be explainable later.
```python
result = app.invoke(
    {
        "transaction_id": "tx_10001",
        "amount": 12500.0,
        "country": "gb",
        "customer_kyc_status": "verified",
        "is_sanctioned_country": False,
        "risk_score": 72,
        "decision": "approve",
        "reasons": [],
    }
)
print(result["decision"])
print(result["reasons"])
```
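With this payload, the deterministic rules escalate (12,500 crosses the 10,000 threshold), so the run passes through llm_review before finalize. The final label depends on the model's output, which is why the reasons list records both the rule hit and the LLM suggestion.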
Production Considerations
- **Separate policy versions from code deploys**
  - Store thresholds and rule sets in config or a policy service (a sketch follows this list).
  - Log the active policy version with every decision for audits.
- **Keep sensitive data in-region**
  - For data residency, run model calls in approved regions only.
  - Avoid sending raw PII unless you have a documented legal basis and retention policy.
- **Add immutable audit trails**
  - Persist input hashes, rule hits, model outputs, timestamps, and human overrides.
  - Regulators care about why a transaction was blocked as much as the block itself.
- **Put hard guardrails before any model call**
  - Sanctions hits, blocked jurisdictions, and failed KYC should short-circuit immediately.
  - Do not let an LLM “reinterpret” mandatory controls.
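One way to separate policy versions from code deploys is to load a versioned policy at startup and stamp its version onto every decision. A minimal sketch, assuming a JSON file whose name and keys are purely illustrative:

```python
import json

# Hypothetical policy file (e.g. policy_v2024_06.json):
# {"version": "2024-06", "amount_threshold": 10000, "risk_threshold": 80}

def load_policy(path: str) -> dict:
    # In production this would come from a policy service with access controls.
    with open(path) as f:
        return json.load(f)

policy = load_policy("policy_v2024_06.json")

def rule_check_with_policy(state: ComplianceState) -> ComplianceState:
    # Same escalation rule as before, but thresholds come from config
    # and the policy version is recorded for the audit trail.
    if state["amount"] >= policy["amount_threshold"] or (
        state["risk_score"] >= policy["risk_threshold"]
    ):
        state["decision"] = "escalate"
        state["reasons"].append(
            f"High-value or high-risk transaction (policy {policy['version']})"
        )
    return state
```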
Common Pitfalls
- **Letting the LLM make final compliance decisions**
  - Fix it by using deterministic rules first and reserving the model for escalation or explanation.
- **Not logging policy context**
  - Fix it by storing the rule version, prompt version, model name, and full decision trace.
  - Without that, you cannot defend the outcome in an audit.
- **Sending unnecessary PII to external models**
  - Fix it by redacting account numbers, names, addresses, and document IDs before invocation (see the sketch below).
  - In fintech, least-data exposure is not optional.
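As a starting point for redaction, here is a minimal sketch that masks account-number-like patterns before a prompt leaves your boundary. The regexes are illustrative assumptions; production redaction should use your own PII taxonomy and tested detectors:

```python
import re

# Illustrative patterns only; add name/address detection (e.g. via a
# vetted PII detection library) before relying on this in production.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b")  # rough IBAN shape
ACCOUNT_RE = re.compile(r"\b\d{8,17}\b")                   # bare account-like numbers

def redact(text: str) -> str:
    text = IBAN_RE.sub("[REDACTED_IBAN]", text)
    text = ACCOUNT_RE.sub("[REDACTED_ACCOUNT]", text)
    return text

prompt = redact("Customer note: refund to account 12345678901 requested.")
print(prompt)  # Customer note: refund to account [REDACTED_ACCOUNT] requested.
```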
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.