How to Build a Compliance-Checking Agent Using LangChain in TypeScript for Wealth Management
A compliance checking agent for wealth management reviews client communications, proposed trades, portfolio recommendations, and advisor notes against internal policy and regulatory rules before anything gets sent or executed. It matters because the cost of a missed suitability issue, restricted security violation, or incomplete disclosure is not just operational cleanup; it is regulatory exposure, client harm, and audit pain.
Architecture
- **Input normalizer**
  - Takes advisor text, CRM notes, trade proposals, or email drafts.
  - Converts them into a structured compliance payload with client profile, product metadata, jurisdiction, and timestamp.
- **Policy retrieval layer**
  - Pulls the relevant house rules, supervision policies, and jurisdiction-specific obligations.
  - Uses vector search or keyword retrieval so the agent checks the right rule set for the right client.
- **LLM compliance analyzer**
  - Uses LangChain to classify risk, extract violations, and explain why something is non-compliant.
  - Produces structured output so downstream systems can route approvals or block actions.
- **Decision engine**
  - Applies deterministic thresholds for hard blocks vs. soft warnings.
  - Separates "needs human review" from "safe to proceed."
- **Audit logger**
  - Stores input, retrieved policy snippets, model output, and final decision.
  - Keeps an immutable trail for supervision and exam readiness.
- **Escalation interface**
  - Routes high-risk cases to compliance officers in Slack, email, or an internal case system.
  - Includes enough context to make review fast without exposing unnecessary client data.
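The normalizer's output can be sketched as a shared payload type. This is a minimal sketch with assumed field names; `CompliancePayload` and `normalizePayload` are illustrative, not part of LangChain:

```typescript
// Illustrative shape for the normalized compliance payload (assumed fields).
interface CompliancePayload {
  clientProfile: string;   // summarized KYC/suitability facts
  proposedAction: string;  // trade, recommendation, or draft text
  jurisdiction: string;    // e.g. "ON" for Ontario
  productType: string;     // e.g. "leveraged_etf"
  timestamp: string;       // ISO 8601
}

// Hypothetical normalizer: turns raw advisor input into the shared payload.
function normalizePayload(raw: {
  notes: string;
  action: string;
  province: string;
  product: string;
}): CompliancePayload {
  return {
    clientProfile: raw.notes.trim(),
    proposedAction: raw.action.trim(),
    jurisdiction: raw.province.toUpperCase(),
    productType: raw.product.toLowerCase().replace(/\s+/g, "_"),
    timestamp: new Date().toISOString(),
  };
}
```

Every downstream component (retrieval, analyzer, logger) can then consume one consistent shape instead of re-parsing free text.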
Implementation
- **Install dependencies and define your compliance schema**
Use LangChain’s structured output so the model returns predictable fields. For wealth management workflows, you want explicit flags for suitability, restricted securities, concentration risk, disclosure gaps, and escalation status.
```bash
npm install langchain @langchain/openai zod
```

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const ComplianceResultSchema = z.object({
  overallRisk: z.enum(["low", "medium", "high"]),
  violations: z.array(z.string()),
  rationale: z.string(),
  requiresHumanReview: z.boolean(),
  recommendedAction: z.enum(["approve", "revise", "escalate"]),
});

type ComplianceResult = z.infer<typeof ComplianceResultSchema>;

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
```
- **Build a prompt that encodes wealth-management rules**
Keep the prompt narrow. The agent should evaluate against policy text you provide at runtime instead of inventing regulations from memory.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    [
      "You are a compliance checking agent for a wealth management firm.",
      "Check for suitability issues, restricted securities, missing disclosures,",
      "conflicts of interest, concentration risk, and jurisdiction-specific concerns.",
      "If policy evidence is insufficient, require human review.",
      "Return only structured output.",
    ].join(" "),
  ],
  [
    "human",
    `Client profile:
{clientProfile}

Proposed action:
{proposedAction}

Relevant policy excerpts:
{policyExcerpts}

Task:
Assess whether this action is compliant. Flag any violations and recommend approve/revise/escalate.`,
  ],
]);
```
- **Chain the prompt to the model with structured parsing**

This is the core pattern. `withStructuredOutput()` gives you typed output without hand-parsing JSON strings.
```typescript
const structuredModel = llm.withStructuredOutput(ComplianceResultSchema);

export async function checkCompliance(input: {
  clientProfile: string;
  proposedAction: string;
  policyExcerpts: string;
}): Promise<ComplianceResult> {
  const chain = prompt.pipe(structuredModel);
  const result = await chain.invoke({
    clientProfile: input.clientProfile,
    proposedAction: input.proposedAction,
    policyExcerpts: input.policyExcerpts,
  });
  return result;
}

// Example usage
const result = await checkCompliance({
  clientProfile:
    "Client age 67. Conservative risk profile. Taxable account. Resident in Ontario.",
  proposedAction:
    "Recommend allocating 35% of liquid net worth into a single leveraged ETF tied to crypto miners.",
  policyExcerpts:
    "- Concentration above 20% in speculative assets requires escalation.\n" +
    "- Leveraged products require documented suitability analysis.\n" +
    "- Ontario clients require disclosure of product risks before recommendation.",
});

console.log(result);
```
- **Add deterministic gating before execution**
Do not let the LLM make the final business decision alone. Use simple rules to block obvious problems and send borderline cases to review.
```typescript
export function applyDecision(result: ComplianceResult) {
  if (result.overallRisk === "high" || result.requiresHumanReview) {
    return { status: "escalate", reason: result.rationale };
  }
  if (result.recommendedAction === "revise") {
    return { status: "hold", reason: result.rationale };
  }
  return { status: "approve", reason: result.rationale };
}
```
Production Considerations
- **Deploy with data residency controls**
  - Keep client PII and portfolio data in-region if your firm operates under Canadian or EU residency requirements.
  - Redact unnecessary identifiers before sending context to the model.
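Redaction can start as simple pattern substitution before the payload leaves your boundary. A minimal sketch, assuming illustrative patterns; a production system would use a vetted PII-detection library rather than hand-rolled regexes:

```typescript
// Minimal redaction sketch (assumed patterns, not a complete PII solution).
function redactIdentifiers(text: string): string {
  return text
    .replace(/\b\d{3}-\d{3}-\d{3}\b/g, "[SIN]")                       // Canadian SIN format
    .replace(/\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b/gi, "[EMAIL]") // email addresses
    .replace(/\b\d{8,12}\b/g, "[ACCOUNT]");                           // bare account numbers
}
```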
- **Log every decision path**
  - Store the input hash, retrieved policy IDs, model version, output schema version, and final decision.
  - Auditors care about reproducibility more than clever prompts.
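Those logging fields fit in one record. A sketch with assumed field names (`buildAuditRecord` and the version strings are illustrative, not a library API):

```typescript
import { createHash } from "node:crypto";

// Assumed audit record shape; adjust fields to your supervision requirements.
interface AuditRecord {
  inputHash: string;      // SHA-256 of the exact input sent to the model
  policyIds: string[];    // IDs of retrieved policy excerpts
  modelVersion: string;   // pin and log the exact deployed model
  schemaVersion: string;  // version of the output schema in force
  decision: string;       // final routed decision
  at: string;             // ISO 8601 timestamp
}

function buildAuditRecord(input: string, policyIds: string[], decision: string): AuditRecord {
  return {
    inputHash: createHash("sha256").update(input).digest("hex"),
    policyIds,
    modelVersion: "gpt-4o-mini",
    schemaVersion: "1.0.0",
    decision,
    at: new Date().toISOString(),
  };
}
```

Hashing the input rather than storing it verbatim lets you prove what was evaluated without widening PII exposure in the log store.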
- **Use guardrails for hard stops**
  - Before LLM evaluation, block actions involving restricted lists, insider-trading sensitivity windows, missing KYC/AML fields, or unsigned disclosures.
  - The agent should review; it should not override mandatory controls.
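A minimal pre-LLM gate along these lines, with placeholder restricted symbols and assumed field names:

```typescript
// Deterministic hard-stop check that runs before any LLM call.
interface TradeContext {
  symbol: string;
  kycComplete: boolean;
  disclosuresSigned: boolean;
}

const RESTRICTED_LIST = new Set(["XYZ", "ABC"]); // placeholder restricted symbols

// Returns a block reason, or null when the case may proceed to the LLM analyzer.
function hardStop(ctx: TradeContext): string | null {
  if (RESTRICTED_LIST.has(ctx.symbol)) return "restricted_security";
  if (!ctx.kycComplete) return "missing_kyc";
  if (!ctx.disclosuresSigned) return "unsigned_disclosures";
  return null;
}
```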
- **Monitor drift by case type**
  - Track false positives on common advisory flows like rebalancing or tax-loss harvesting.
  - Track false negatives aggressively on high-risk products such as options overlays, private placements, and leveraged ETFs.
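One lightweight way to track this is per-case-type counters fed from human review outcomes. A sketch with assumed outcome labels:

```typescript
// Per-case-type outcome counters (labels are illustrative assumptions).
type Outcome = "falsePositive" | "falseNegative" | "correct";

const driftCounts = new Map<string, Record<Outcome, number>>();

function recordOutcome(caseType: string, outcome: Outcome): void {
  const row = driftCounts.get(caseType) ?? { falsePositive: 0, falseNegative: 0, correct: 0 };
  row[outcome] += 1;
  driftCounts.set(caseType, row);
}

function falsePositiveRate(caseType: string): number {
  const row = driftCounts.get(caseType);
  if (!row) return 0;
  const total = row.falsePositive + row.falseNegative + row.correct;
  return total === 0 ? 0 : row.falsePositive / total;
}
```

Alerting when a rate shifts after a model or prompt change catches drift before advisors lose trust in the agent.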
Common Pitfalls
- **Treating the LLM as the source of truth**
  - The model should interpret policy text; it should not invent policy.
  - Fix this by retrieving approved policy excerpts from your internal knowledge base first.
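A toy keyword retriever over approved excerpts illustrates the pattern; the excerpt IDs and scoring here are assumptions, and a vector store (e.g. one of LangChain's vector store integrations) would replace this in production:

```typescript
// Approved policy excerpts from the internal knowledge base (illustrative).
const POLICY_EXCERPTS = [
  { id: "CONC-01", text: "Concentration above 20% in speculative assets requires escalation." },
  { id: "LEV-02", text: "Leveraged products require documented suitability analysis." },
  { id: "ON-03", text: "Ontario clients require disclosure of product risks before recommendation." },
];

// Naive keyword-overlap scoring; returns the top-k matching excerpts.
function retrievePolicies(query: string, k = 2) {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  return POLICY_EXCERPTS
    .map((p) => ({
      ...p,
      score: terms.filter((t) => p.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

The retrieved `text` fields become the `policyExcerpts` string passed to the prompt, so the model only reasons over policy your firm has actually approved.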
- **Returning free-form text instead of structured decisions**
  - Free-form outputs are hard to route into case management systems.
  - Fix this with `withStructuredOutput()` and a Zod schema.
- **Skipping human review on ambiguous cases**
  - Wealth management has too many edge cases for full automation.
  - Fix this by escalating when policy evidence is incomplete or when suitability depends on missing client facts like liquidity needs or time horizon.
A solid compliance agent in this domain is not about making the model smarter. It is about combining LangChain orchestration with strict schemas, deterministic controls, auditability, and jurisdiction-aware policy retrieval so advisors can move fast without creating supervisory risk.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.