How to Build a Compliance-Checking Agent for Payments Using LangGraph in TypeScript
A compliance checking agent for payments screens a transaction or payment request against policy before it moves forward. It catches things like sanctioned entities, missing KYC fields, unusual destination countries, and policy violations early, which matters because one bad payment can create regulatory exposure, chargeback risk, and audit pain.
Architecture
- Input normalizer
  - Converts raw payment payloads into a stable internal shape.
  - Normalizes currency codes, country codes, party identifiers, and free-text fields.
- Policy engine
  - Encodes hard rules for payments compliance.
  - Examples: sanctioned-country blocks, missing beneficiary address, unsupported corridor, threshold-based escalation.
- Risk enrichment node
  - Pulls in external signals like customer risk tier, transaction history, and sanctions screening results.
  - Keep this deterministic where possible; do not bury core compliance decisions in an LLM.
- Decision node
  - Produces one of a small set of outcomes: `approve`, `review`, or `reject`.
  - The LLM can summarize rationale, but the final action should be constrained by policy.
- Audit logger
  - Persists the full decision path.
  - For payments, you need traceability: inputs, rule hits, model output, and final decision.
- Human review handoff
  - Routes ambiguous or high-risk cases to an operations queue.
  - This is where you handle exceptions without blocking the whole payment rail.
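The decision-node constraint above can be sketched as a small helper: a hard policy outcome, when present, always wins over whatever the model suggests. The function name is illustrative, not part of LangGraph:

```typescript
type PaymentDecision = "approve" | "review" | "reject";

// If the policy engine produced a hard decision, it is final; the LLM's
// suggestion is only used when policy left the case open (null).
function finalDecision(
  policy: PaymentDecision | null,
  modelSuggestion: PaymentDecision,
): PaymentDecision {
  return policy ?? modelSuggestion;
}
```

Keeping this precedence in one place makes it easy to prove in review (and in tests) that the model can never override a sanctions block.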
Implementation
1) Define the state and decision types
Use a typed state so every node in the graph knows what it can read and write. For payments compliance, keep the state explicit: transaction data, policy findings, final decision, and audit trail.
```typescript
import { Annotation } from "@langchain/langgraph";

export type PaymentDecision = "approve" | "review" | "reject";

export interface PaymentInput {
  paymentId: string;
  amount: number;
  currency: string;
  beneficiaryCountry: string;
  beneficiaryName: string;
  senderCountry: string;
  customerId: string;
}

export interface ComplianceFinding {
  ruleId: string;
  severity: "low" | "medium" | "high";
  message: string;
}

export const ComplianceState = Annotation.Root({
  input: Annotation<PaymentInput>(),
  normalized: Annotation<PaymentInput>(),
  findings: Annotation<ComplianceFinding[]>({
    default: () => [],
    reducer: (current, update) => [...current, ...update],
  }),
  decision: Annotation<PaymentDecision | null>({
    default: () => null,
    reducer: (_, update) => update,
  }),
  rationale: Annotation<string>({
    default: () => "",
    reducer: (_, update) => update,
  }),
});
```
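The `findings` channel's reducer appends rather than replaces, so each node can contribute findings without clobbering earlier ones. A standalone sketch of that accumulation outside LangGraph (the `HIGH_RISK_TIER` rule is a hypothetical example from an enrichment node):

```typescript
interface Finding {
  ruleId: string;
  severity: "low" | "medium" | "high";
  message: string;
}

// Same shape as the reducer on the findings channel: concatenate updates.
const appendFindings = (current: Finding[], update: Finding[]): Finding[] => [
  ...current,
  ...update,
];

const afterPolicy = appendFindings(
  [],
  [{ ruleId: "THRESHOLD_10000", severity: "medium", message: "Over threshold." }],
);
const afterEnrichment = appendFindings(afterPolicy, [
  { ruleId: "HIGH_RISK_TIER", severity: "high", message: "Customer is high risk." },
]);
// afterEnrichment now holds both findings, in arrival order.
```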
2) Build deterministic compliance checks first
Do not start with an LLM. For payments, hard rules should decide obvious blocks before any model is involved. This keeps behavior stable and easier to audit.
```typescript
import { StateGraph, START, END } from "@langchain/langgraph";

const normalizeNode = async (state: typeof ComplianceState.State) => {
  const input = state.input;
  return {
    normalized: {
      ...input,
      currency: input.currency.toUpperCase(),
      beneficiaryCountry: input.beneficiaryCountry.toUpperCase(),
      senderCountry: input.senderCountry.toUpperCase(),
    },
  };
};

const policyNode = async (state: typeof ComplianceState.State) => {
  const findings: ComplianceFinding[] = [];

  if (state.normalized.amount > 10000) {
    findings.push({
      ruleId: "THRESHOLD_10000",
      severity: "medium",
      message: "Transaction exceeds manual review threshold.",
    });
  }

  if (["IR", "KP", "SY"].includes(state.normalized.beneficiaryCountry)) {
    findings.push({
      ruleId: "SANCTIONED_COUNTRY",
      severity: "high",
      message: "Beneficiary country is blocked by sanctions policy.",
    });
  }

  if (!state.normalized.beneficiaryName?.trim()) {
    findings.push({
      ruleId: "MISSING_BENEFICIARY_NAME",
      severity: "high",
      message: "Beneficiary name is required.",
    });
  }

  // Hard rules decide immediately; the LLM never overrides a hard block.
  const hardBlock = findings.some((f) => f.severity === "high");
  const needsJudgment = findings.some((f) => f.ruleId === "THRESHOLD_10000");
  const decision: PaymentDecision | null = hardBlock
    ? "reject"
    : needsJudgment
      ? null // left open for the LLM review step
      : "approve";

  return { findings, decision };
};
```
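Because the rules are pure functions of the normalized payload, they are easy to unit test without running the graph. A hypothetical extraction of the sanctions check (helper names are illustrative):

```typescript
// Sanctioned-country rule pulled out as a pure, independently testable helper.
const SANCTIONED_COUNTRIES = new Set(["IR", "KP", "SY"]);

function isSanctionedCountry(countryCode: string): boolean {
  return SANCTIONED_COUNTRIES.has(countryCode.trim().toUpperCase());
}
```

Extracting rules this way also lets you version the sanctions list separately from the graph wiring.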
3) Add an LLM-backed review step only for ambiguous cases
Use LangGraph routing to send only borderline transactions to a model. In TypeScript with LangGraph JS v0.2+, `addConditionalEdges` is the right pattern for this kind of control flow.
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const needsReview = (state: typeof ComplianceState.State) => {
  if (state.findings.some((f) => f.severity === "high")) return END;
  if (state.findings.some((f) => f.ruleId === "THRESHOLD_10000")) return "llmReview";
  return END;
};

const llmReviewNode = async (state: typeof ComplianceState.State) => {
  const prompt = `You are a payments compliance reviewer.
Decision must be one of approve|review|reject.
Findings:
${JSON.stringify(state.findings)}
Payment:
${JSON.stringify(state.normalized)}
Return a concise rationale.`;

  const response = await llm.invoke(prompt);
  const text = response.content.toString().toLowerCase();
  const decision: PaymentDecision =
    text.includes("reject") ? "reject" :
    text.includes("review") ? "review" : "approve";

  return {
    decision,
    rationale: response.content.toString().slice(0, 500),
  };
};

const auditNode = async (state: typeof ComplianceState.State) => ({
  rationale: `${state.rationale}\nAudit trail captured for payment ${state.input.paymentId}.`,
});
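Note the order of the keyword checks in `llmReviewNode`: "reject" is tested before "review", so a response mentioning both words resolves to the stricter outcome. Pulled out as a pure helper for clarity (the function is illustrative, not a LangChain API):

```typescript
type PaymentDecision = "approve" | "review" | "reject";

function parseDecision(modelText: string): PaymentDecision {
  const text = modelText.toLowerCase();
  if (text.includes("reject")) return "reject"; // checked first: stricter outcome wins
  if (text.includes("review")) return "review";
  return "approve"; // default when the model names neither keyword
}
```

In production you would likely prefer structured output over keyword scanning, but the precedence ordering matters either way.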
4) Wire the graph and invoke it
The graph below runs normalization first, then policy checks. If there are no hard blocks but the case needs judgment, it routes to the LLM review node and then records an audit trail.
```typescript
const graph = new StateGraph(ComplianceState)
  .addNode("normalize", normalizeNode)
  .addNode("policy", policyNode)
  .addNode("llmReview", llmReviewNode)
  .addNode("audit", auditNode)
  .addEdge(START, "normalize")
  .addEdge("normalize", "policy")
  .addConditionalEdges("policy", needsReview, {
    llmReview: "llmReview",
    [END]: END,
  })
  .addEdge("llmReview", "audit")
  .addEdge("audit", END)
  .compile();

async function run() {
  const result = await graph.invoke({
    input: {
      paymentId: "pay_123",
      amount: 12500,
      currency: "usd",
      beneficiaryCountry: "GB",
      beneficiaryName: "Apex Trading Ltd",
      senderCountry: "US",
      customerId: "cust_456",
    },
  });

  console.log(result.decision);
  console.log(result.findings);
  console.log(result.rationale);
}

run();
```
Production Considerations
- Keep compliance decisions deterministic
  - Use rules for sanctions, thresholds, missing fields, and corridor restrictions.
  - Let the model explain edge cases; do not let it override hard blocks.
- Persist full audit context. For each payment, store:
  - the original payload
  - the normalized payload
  - triggered rules
  - the model prompt and response, if used
  - the final decision
  - the timestamp and reviewer identity, if escalated
- Respect data residency. Payment data often cannot leave a region. If your LangGraph app calls hosted models or tools across regions, enforce locality at the transport layer and redact PII before any external call.
- Add monitoring on decision drift. Track approval rate by corridor, false positives from manual-review overrides, latency per node, and sanction-hit volume. Sudden shifts usually mean upstream schema changes or broken policy mappings.
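One way to make the audit-context checklist concrete is a record type persisted per payment. Field names here are assumptions, not a LangGraph schema:

```typescript
interface AuditRecord {
  paymentId: string;
  rawPayload: unknown;        // original payload, exactly as received
  normalizedPayload: unknown; // payload after normalization
  ruleHits: string[];         // ruleIds of triggered rules
  modelPrompt?: string;       // present only if the LLM was consulted
  modelResponse?: string;
  finalDecision: "approve" | "review" | "reject";
  decidedAt: string;          // ISO-8601 timestamp
  reviewerId?: string;        // set when the case was escalated to a human
}

const example: AuditRecord = {
  paymentId: "pay_123",
  rawPayload: { amount: 12500, currency: "usd" },
  normalizedPayload: { amount: 12500, currency: "USD" },
  ruleHits: ["THRESHOLD_10000"],
  finalDecision: "review",
  decidedAt: new Date().toISOString(),
};
```

Writing one such record per graph run, in append-only storage, gives you the replayable trail auditors ask for.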
Common Pitfalls
- Letting the LLM make final compliance calls
  - Avoid this by making hard-rule nodes decide `reject` immediately.
  - Use the model only for borderline cases that already passed deterministic checks.
- Skipping normalization
  - `"usd"` vs `"USD"`, or `"uk"` vs `"GB"`, will break downstream rules.
  - Normalize currencies, countries, ID namespaces, and nullable fields before policy evaluation.
- No replayable audit trail
  - If you cannot reconstruct why a payment was rejected or approved, you will struggle in audits.
  - Log every node output to immutable storage and version your policies alongside the graph code.
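The normalization pitfall above is cheap to fix with a small alias table applied before any rule runs. The alias entries here are illustrative; in practice you would maintain a full mapping to ISO codes:

```typescript
// Map common non-ISO inputs to ISO 3166-1 alpha-2 before policy evaluation.
const COUNTRY_ALIASES: Record<string, string> = { UK: "GB" };

function normalizeCountry(code: string): string {
  const upper = code.trim().toUpperCase();
  return COUNTRY_ALIASES[upper] ?? upper;
}

function normalizeCurrency(code: string): string {
  return code.trim().toUpperCase();
}
```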
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.