How to Build a Compliance-Checking Agent Using LangGraph in TypeScript for Wealth Management
A compliance checking agent for wealth management reviews client-facing content, portfolio actions, and advisor recommendations against policy rules before anything is sent to a client or booked into a system. It matters because one missed suitability check, one prohibited phrase, or one unlogged exception can turn into regulatory exposure, client harm, and an ugly audit trail.
Architecture
Build this agent around a small set of deterministic components:
- Input normalizer
  - Takes advisor notes, draft emails, trade rationale, or meeting summaries.
  - Converts them into a structured request the graph can inspect.
- Policy engine
  - Encodes wealth management rules: suitability, concentration limits, restricted securities, marketing language restrictions, KYC/AML flags.
  - Returns rule hits with severity and rationale.
- Evidence retriever
  - Pulls client profile data: risk tolerance, investment objectives, jurisdiction, accredited investor status, product restrictions.
  - Must be read-only and auditable.
- Decision node
  - Aggregates rule results and decides approve, escalate, or block.
  - Never "auto-fixes" compliance issues without a human review path.
- Audit logger
  - Persists inputs, outputs, rule hits, timestamps, and model/version metadata.
  - Needed for internal review and regulatory exams.
- Human escalation path
  - Routes ambiguous cases to compliance ops or a registered principal.
  - Includes reason codes and the exact text that triggered the review.
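The input normalizer's output can be a plain typed object. A minimal sketch — the field names and channel values here are illustrative assumptions, not a fixed schema:

```typescript
// Hypothetical shape of the structured request the input normalizer emits.
// Field names are illustrative; adapt them to your CRM and policy engine.
type SourceChannel = "advisor_note" | "draft_email" | "trade_rationale" | "meeting_summary";

interface NormalizedRequest {
  clientId: string;
  channel: SourceChannel;
  inputText: string;     // the exact text under review, preserved verbatim for audit
  receivedAtUtc: string; // ISO-8601 timestamp
  submittedBy: string;   // advisor identifier, used by the escalation path
}

// Normalize a raw advisor submission into the structure the graph inspects.
function normalize(raw: {
  clientId: string;
  channel: SourceChannel;
  text: string;
  advisor: string;
}): NormalizedRequest {
  return {
    clientId: raw.clientId,
    channel: raw.channel,
    inputText: raw.text.trim(),
    receivedAtUtc: new Date().toISOString(),
    submittedBy: raw.advisor,
  };
}
```

Keeping the reviewed text verbatim (only trimmed) matters: the audit trail and escalation path both need the exact wording that triggered a rule.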
Implementation
1) Define the state and compliance checks
Use LangGraph’s Annotation.Root to define typed state. Keep the state explicit; compliance systems fail when developers hide important fields inside opaque blobs.
```typescript
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

type RiskLevel = "low" | "medium" | "high";
type Decision = "approve" | "escalate" | "block";

const ComplianceState = Annotation.Root({
  inputText: Annotation<string>(),
  clientId: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  riskTolerance: Annotation<RiskLevel>(),
  restrictedList: Annotation<string[]>(),
  findings: Annotation<{ rule: string; severity: "low" | "medium" | "high"; detail: string }[]>(),
  decision: Annotation<Decision>(),
  auditTrail: Annotation<string[]>(),
});

// Deterministic rule: flag any mention of a restricted security.
function checkRestrictedSecurities(text: string, restrictedList: string[]) {
  const hits = restrictedList.filter((name) =>
    text.toLowerCase().includes(name.toLowerCase())
  );
  return hits.map((name) => ({
    rule: "restricted_security",
    severity: "high" as const,
    detail: `Mentioned restricted security: ${name}`,
  }));
}

// Deterministic rule: high-risk product language is unsuitable for low-risk clients.
function checkSuitability(text: string, riskTolerance: RiskLevel) {
  if (riskTolerance === "low" && /leveraged|options|margin/i.test(text)) {
    return [{
      rule: "suitability",
      severity: "high" as const,
      detail: "High-risk product language detected for low-risk client",
    }];
  }
  return [];
}
```
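The same deterministic style extends to other rules from the policy engine, such as concentration limits. A sketch — the 20% threshold and the holdings shape are illustrative assumptions, not real policy values:

```typescript
// Illustrative concentration check: flag any single position above a limit.
// Threshold and input shape are assumptions; real limits come from your policy engine.
function checkConcentration(
  holdings: { symbol: string; weight: number }[], // weight as a fraction of portfolio value
  limit = 0.2
) {
  return holdings
    .filter((h) => h.weight > limit)
    .map((h) => ({
      rule: "concentration_limit",
      severity: "medium" as const,
      detail: `${h.symbol} is ${(h.weight * 100).toFixed(1)}% of portfolio (limit ${(limit * 100).toFixed(0)}%)`,
    }));
}
```

Because the check is plain code, the threshold can be loaded from a versioned policy bundle rather than hard-coded.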
2) Add graph nodes for retrieval, evaluation, and decisioning
The pattern here is simple: retrieve facts first, then evaluate policy deterministically. If you need an LLM later for summarization or explanation generation, keep it outside the core decision path.
```typescript
// Retrieval node: load the client profile before any policy evaluation.
const retrieveClientProfile = async (state: typeof ComplianceState.State) => {
  // Replace with a CRM / portfolio system lookup; keep it read-only.
  return {
    riskTolerance: state.riskTolerance ?? "medium",
    restrictedList: state.restrictedList ?? ["XYZ Fund", "ABC Notes"],
    auditTrail: [...(state.auditTrail ?? []), `Loaded profile for ${state.clientId}`],
  };
};

// Evaluation node: run deterministic checks and aggregate a decision.
const evaluateCompliance = async (state: typeof ComplianceState.State) => {
  const findings = [
    ...checkRestrictedSecurities(state.inputText ?? "", state.restrictedList ?? []),
    ...checkSuitability(state.inputText ?? "", state.riskTolerance ?? "medium"),
  ];
  const hasHighSeverity = findings.some((f) => f.severity === "high");
  const decision =
    findings.length === 0 ? "approve" : hasHighSeverity ? "block" : "escalate";
  return {
    findings,
    decision,
    auditTrail: [
      ...(state.auditTrail ?? []),
      `Evaluated compliance with ${findings.length} finding(s)`,
      `Decision=${decision}`,
    ],
  };
};

// Escalation node: hand ambiguous cases to compliance ops.
const routeToHumanReview = async (state: typeof ComplianceState.State) => {
  return {
    auditTrail: [...(state.auditTrail ?? []), "Escalated for human review"],
    decision: "escalate" as const,
  };
};
```
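The audit logger from the architecture section can follow the same pattern: a node that builds an immutable record before anything is persisted. A self-contained sketch — the record shape and version fields are assumptions:

```typescript
// Build an immutable audit record from the state a node receives.
// The shape is illustrative; persist it to your system of record.
interface AuditRecord {
  clientId: string;
  decision: string;
  findingCount: number;
  policyVersion: string;
  graphVersion: string;
  timestampUtc: string;
}

function buildAuditRecord(
  state: { clientId: string; decision: string; findings: unknown[] },
  policyVersion: string,
  graphVersion: string
): AuditRecord {
  // Object.freeze makes accidental post-hoc mutation of the record a runtime error
  // in strict mode, which helps keep the trail trustworthy.
  return Object.freeze({
    clientId: state.clientId,
    decision: state.decision,
    findingCount: state.findings.length,
    policyVersion,
    graphVersion,
    timestampUtc: new Date().toISOString(),
  });
}
```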
3) Wire the graph with conditional routing
This is where LangGraph earns its keep. Use StateGraph, add nodes with addNode, connect them with addEdge, then branch using addConditionalEdges.
```typescript
const workflow = new StateGraph(ComplianceState)
  .addNode("retrieveClientProfile", retrieveClientProfile)
  .addNode("evaluateCompliance", evaluateCompliance)
  .addNode("routeToHumanReview", routeToHumanReview);

workflow.addEdge(START, "retrieveClientProfile");
workflow.addEdge("retrieveClientProfile", "evaluateCompliance");

// Branch on the deterministic decision: only ambiguous cases reach a human.
workflow.addConditionalEdges(
  "evaluateCompliance",
  (state) => state.decision,
  {
    approve: END,
    block: END,
    escalate: "routeToHumanReview",
  }
);
workflow.addEdge("routeToHumanReview", END);

const app = workflow.compile();
```
4) Invoke it with real request data and persist the output
For wealth management workflows, store the result alongside the request payload and model/version metadata. If a regulator asks why something was blocked or approved, you need the exact chain of reasoning.
```typescript
async function run() {
  // Partial state is fine here: decision and findings are filled in by the graph.
  const result = await app.invoke({
    inputText:
      "Recommend ABC Notes to this low-risk retiree client as part of a diversified income strategy.",
    clientId: "client_123",
    jurisdiction: "US",
    riskTolerance: "low",
    restrictedList: ["ABC Notes", "Private Credit Fund"],
    findings: [],
    auditTrail: [],
  });
  console.log(JSON.stringify(result, null, 2));
}

run().catch(console.error);
```
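One way to make the stored record tamper-evident is to hash the exact reviewed text and keep the digest alongside the decision. A sketch using Node's built-in crypto module:

```typescript
import { createHash } from "node:crypto";

// Hash the exact reviewed text so the audit record can later prove
// which input produced a given decision.
function inputHash(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}
```

Store the digest, not just the text: if the source document is later edited, the mismatch is immediately detectable.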
Production Considerations
- Deploy in-region
  - Wealth management data often has residency constraints.
  - Keep client PII and portfolio context in approved regions, and avoid sending raw records to external services unless your legal team has signed off.
- Log every decision path
  - Persist the input hash, extracted facts, policy version, graph version, UTC timestamp, and final decision.
  - Auditors care about reproducibility more than model elegance.
- Add guardrails before any LLM step

| Guardrail | Why it matters |
|---|---|
| Deterministic policy checks first | Prevents an LLM from overriding hard rules |
| Human escalation on ambiguity | Required for borderline suitability cases |
| Redaction of PII in logs | Reduces privacy exposure |
| Versioned policy bundles | Lets you prove which rules were active at decision time |

- Separate advisory content from execution authority

| Workflow type | Allowed action |
|---|---|
| Draft email review | Approve / block / escalate |
| Trade recommendation review | Approve / escalate only |
| Order placement | Never automatic from the agent |
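The PII-redaction guardrail can be a deterministic pass over any payload headed for logs or traces. A minimal sketch — the patterns below are illustrative, not an exhaustive or jurisdiction-aware redactor:

```typescript
// Strip common PII patterns before a payload reaches observability tooling.
// These regexes are illustrative; a production redactor needs a reviewed,
// jurisdiction-aware pattern set.
function redactForLogs(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_TAX_ID]")  // US SSN-style IDs
    .replace(/\b\d{8,12}\b/g, "[REDACTED_ACCOUNT]")          // bare account numbers
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[REDACTED_EMAIL]"); // email addresses
}
```

Run the redactor at the logging boundary, not inside the decision nodes, so the graph itself still sees the full text it needs to evaluate.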
Common Pitfalls
- Using the LLM as the final arbiter. Only use it for summarization or explanation generation; suitability checks, restricted-list matching, and jurisdiction rules should be deterministic code.
- Not versioning policies. If your concentration limit changes from 25% to 20%, you need both versions preserved. Store the policy version in every audit record so historical decisions remain defensible.
- Ignoring exception-handling paths. A compliance agent that only returns approve/block is incomplete. Wealth management operations need escalation with reason codes like suitability_uncertain, restricted_product_match, or jurisdiction_mismatch.
- Letting unredacted client data leak into traces. Trace logs are useful during development and dangerous in production. Strip account numbers, tax IDs, addresses, and free-text notes before they hit observability tools.
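Policy versioning can be as simple as an immutable map keyed by version id, with the active version recorded on every decision. A sketch — the bundle contents and version ids are illustrative:

```typescript
// A policy bundle freezes the rule parameters active at a point in time.
// Looking decisions up by version keeps historical outcomes defensible.
interface PolicyBundle {
  version: string;
  concentrationLimit: number;
  restrictedList: string[];
}

const policyBundles: Record<string, PolicyBundle> = {
  // Example of the 25% -> 20% change mentioned above: both versions stay available.
  "2024-01": { version: "2024-01", concentrationLimit: 0.25, restrictedList: ["XYZ Fund"] },
  "2024-06": { version: "2024-06", concentrationLimit: 0.2, restrictedList: ["XYZ Fund", "ABC Notes"] },
};

function getPolicy(version: string): PolicyBundle {
  const bundle = policyBundles[version];
  if (!bundle) throw new Error(`Unknown policy version: ${version}`);
  return bundle;
}
```

When re-examining a historical decision, load the bundle recorded in its audit record rather than the current one.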
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.