How to Build a Compliance-Checking Agent Using CrewAI in TypeScript for Investment Banking
A compliance checking agent in investment banking reviews proposed client communications, trade-related content, marketing copy, or internal research notes against policy and regulation before they leave the firm. It matters because one missed disclosure, one prohibited claim, or one data residency violation can turn into regulatory findings, legal exposure, or blocked deals.
Architecture
- **Input normalizer**
  - Takes raw text, email drafts, PDFs converted to text, or structured request payloads.
  - Extracts the business context: product type, jurisdiction, client type, and channel.
- **Policy retrieval layer**
  - Pulls the relevant controls from an internal policy store.
  - Maps content to rules like MiFID II disclosures, FINRA communications standards, SEC marketing rules, and bank-specific approval workflows.
- **Compliance analysis agent**
  - Uses a CrewAI Agent plus tools to inspect the content against policy.
  - Produces a decision: pass, needs review, or reject.
- **Audit logger**
  - Stores the input hash, policy version, model version, decision rationale, and reviewer trail.
  - Required for supervision and post-trade/communications audits.
- **Escalation workflow**
  - Routes high-risk cases to a human compliance officer.
  - Handles exceptions for restricted lists, MNPI risk, and jurisdictional conflicts.
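Before wiring anything to CrewAI, it helps to pin down the decision surface these components share. A minimal TypeScript sketch of that contract (the type and function names here are illustrative, not part of any CrewAI API):

```typescript
// Illustrative types for the pipeline's shared decision contract (not CrewAI APIs).
type ComplianceDecision = "pass" | "needs_human_review" | "reject";

type ComplianceResult = {
  decision: ComplianceDecision;
  issues: string[];
  requiredChanges: string[];
  rationale: string;
  policyVersion: string; // version of the rule set the decision was made under
};

// The escalation workflow routes anything that is not a clean pass to a human.
function needsHumanReview(result: ComplianceResult): boolean {
  return result.decision !== "pass";
}
```

Having a single result type shared by the analysis agent, the audit logger, and the escalation workflow keeps the downstream routing logic deterministic.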
Implementation
**Set up the project and define the compliance tools**
You want the agent to work from firm policy data instead of memory. In practice that means using tools for policy lookup and audit logging.
```typescript
import { Agent, Tool } from "@crewai/core";

type ComplianceContext = {
  jurisdiction: string;
  product: string;
  channel: string;
};

const policyLookupTool = new Tool({
  name: "policy_lookup",
  description:
    "Fetches relevant compliance rules for a given jurisdiction/product/channel.",
  func: async (input: string) => {
    const ctx = JSON.parse(input) as ComplianceContext;
    // Replace with a real policy service call
    return JSON.stringify({
      rules: [
        `Include required risk disclosures for ${ctx.product}`,
        `Check promotional language for prohibited guarantees`,
        `Verify approval path for ${ctx.channel} in ${ctx.jurisdiction}`,
      ],
      policyVersion: "2026.01",
    });
  },
});

const auditLogTool = new Tool({
  name: "audit_log",
  description: "Writes immutable audit events for compliance decisions.",
  func: async (input: string) => {
    // Replace with an append-only log / SIEM / WORM storage write
    console.log("AUDIT_EVENT", input);
    return "ok";
  },
});
```
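Because the tool bodies are plain async functions, the lookup logic can be unit-tested without an agent or an LLM in the loop. A standalone sketch mirroring the policy_lookup body above (`lookupPolicies` is an illustrative name, independent of CrewAI):

```typescript
type ComplianceContext = {
  jurisdiction: string;
  product: string;
  channel: string;
};

// Standalone version of the policy_lookup body, testable without CrewAI.
async function lookupPolicies(
  ctx: ComplianceContext
): Promise<{ rules: string[]; policyVersion: string }> {
  // Replace with a real policy service call in production.
  return {
    rules: [
      `Include required risk disclosures for ${ctx.product}`,
      `Check promotional language for prohibited guarantees`,
      `Verify approval path for ${ctx.channel} in ${ctx.jurisdiction}`,
    ],
    policyVersion: "2026.01",
  };
}
```

Keeping the business logic in an ordinary function and wrapping it in a `Tool` at the edge makes the compliance rules testable in CI before any model is involved.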
**Create the compliance agent with a narrow role**
Keep the agent focused. For investment banking use cases, it should not draft content; it should only assess risk and explain why.
```typescript
const complianceAgent = new Agent({
  name: "InvestmentBankingComplianceChecker",
  role: "Compliance review agent for investment banking communications",
  goal: "Detect regulatory issues, missing disclosures, prohibited claims, and escalation triggers.",
  backstory:
    "You review bank-approved content under strict supervision requirements. You cite policy findings clearly and never invent rules.",
  tools: [policyLookupTool, auditLogTool],
});
```
**Run a structured review task**
CrewAI tasks should produce deterministic output formats that downstream systems can parse. For production use, require JSON output with fields your workflow engine understands.
```typescript
import { Task, Crew } from "@crewai/core";

const reviewTask = new Task({
  description: `
Review the following draft for investment banking compliance issues.

Draft:
{draft}

Context:
Jurisdiction: {jurisdiction}
Product: {product}
Channel: {channel}

Return JSON with:
- decision: pass | needs_human_review | reject
- issues: array of strings
- required_changes: array of strings
- rationale: short explanation
`,
  expectedOutput:
    '{"decision":"needs_human_review","issues":["..."],"required_changes":["..."],"rationale":"..."}',
});
```
**Execute the crew and persist the result**
This is the pattern you actually wire into your API endpoint or workflow worker. The important part is that every decision gets logged with context and versioning.
```typescript
async function runComplianceCheck(draft: string) {
  const crew = new Crew({
    agents: [complianceAgent],
    tasks: [reviewTask],
    verbose: true,
  });

  const result = await crew.kickoff({
    draft,
    jurisdiction: "UK",
    product: "structured note",
    channel: "client_email",
  });

  await auditLogTool.func(
    JSON.stringify({
      eventType: "compliance_review_completed",
      jurisdiction: "UK",
      product: "structured note",
      channel: "client_email",
      decisionResult: result,
      timestamp: new Date().toISOString(),
    })
  );

  return result;
}
```
If you want a stricter production pattern, validate the parsed output before allowing release:
```typescript
const response = await runComplianceCheck(
  "We guarantee principal protection with no downside risk."
);
const parsed = JSON.parse(response.toString());

if (parsed.decision === "pass") {
  // Release to the publishing pipeline
} else if (parsed.decision === "needs_human_review") {
  // Route to the compliance review queue
} else {
  // Block the send
}
```
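Model output is not guaranteed to be valid JSON, so the parse step itself should fail safe. One way to harden it is a defensive parser that downgrades anything unparseable or off-schema to needs_human_review; a sketch (`parseReview` and its types are illustrative, not CrewAI APIs):

```typescript
type Decision = "pass" | "needs_human_review" | "reject";

type ParsedReview = {
  decision: Decision;
  issues: string[];
  required_changes: string[];
  rationale: string;
};

// Fail safe: anything we cannot parse or validate goes to a human.
function parseReview(raw: string): ParsedReview {
  const fallback: ParsedReview = {
    decision: "needs_human_review",
    issues: ["Unparseable model output"],
    required_changes: [],
    rationale: "Automatic downgrade: output failed schema validation.",
  };
  try {
    const obj = JSON.parse(raw);
    const validDecision =
      obj.decision === "pass" ||
      obj.decision === "needs_human_review" ||
      obj.decision === "reject";
    if (!validDecision || !Array.isArray(obj.issues)) return fallback;
    return {
      decision: obj.decision,
      issues: obj.issues,
      required_changes: Array.isArray(obj.required_changes) ? obj.required_changes : [],
      rationale: typeof obj.rationale === "string" ? obj.rationale : "",
    };
  } catch {
    return fallback;
  }
}
```

The key design choice is the direction of failure: a malformed response never becomes an automatic pass.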
Production Considerations
- **Deploy inside your controlled network boundary**
  - Keep policy services, audit logs, and model endpoints in approved regions.
  - For regulated banks, data residency matters more than latency.
- **Version everything**
  - Store the policy version, prompt version, model version, and tool versions with every decision.
  - If Legal asks why a message passed last month but fails today, you need traceability.
- **Add hard guardrails outside the model**
  - Block restricted phrases before generation.
  - Enforce jurisdiction filters in code before calling CrewAI.
  - Do not let the agent override mandatory human approval thresholds.
- **Monitor drift on real cases**
  - Track false positives on standard disclosures.
  - Track false negatives on banned claims and missing legends.
  - Review samples weekly with Compliance Ops.
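The guardrail point above is worth making concrete: a deterministic blocklist check that runs in plain code before any CrewAI call, so a block cannot be argued away by the model. A sketch with a toy phrase list (a real restricted list would come from Compliance, not source code):

```typescript
// Toy restricted-phrase list; in production this comes from Compliance, not code.
const RESTRICTED_PHRASES = [
  "guarantee",
  "no downside risk",
  "risk-free",
  "principal protection",
];

// Deterministic pre-filter: runs before the agent and cannot be overridden by it.
function hardGuardrail(draft: string): { blocked: boolean; hits: string[] } {
  const lower = draft.toLowerCase();
  const hits = RESTRICTED_PHRASES.filter((p) => lower.includes(p));
  return { blocked: hits.length > 0, hits };
}
```

Anything this filter blocks should skip the model entirely and go straight to the compliance queue, with the matched phrases logged as the reason.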
Common Pitfalls
- **Using a generic prompt without firm policies**
  The agent will sound confident and still miss bank-specific obligations. Always connect it to an internal rule source through tools or retrieval.
- **Letting the agent make final release decisions on high-risk content**
  Client-facing material tied to products like derivatives, structured notes, or cross-border distribution should often require human sign-off. Use the agent as a screening layer, not the final approver.
- **Skipping auditability**
  If you cannot reconstruct what was checked and against which rule set, the system will fail governance review. Log input hashes, outputs, timestamps, approvers, and policy versions in immutable storage.
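The auditability requirement can be sketched as a versioned, hash-keyed decision record, using Node's built-in crypto module for the input hash (the field and function names are illustrative):

```typescript
import { createHash } from "node:crypto";

// Everything needed to reconstruct a decision later: a hash of the exact input,
// plus the versions of every moving part that produced the outcome.
type AuditRecord = {
  inputSha256: string;
  policyVersion: string;
  promptVersion: string;
  modelVersion: string;
  decision: string;
  timestamp: string;
};

function sha256Hex(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}

function buildAuditRecord(
  draft: string,
  decision: string,
  versions: { policy: string; prompt: string; model: string }
): AuditRecord {
  return {
    inputSha256: sha256Hex(draft),
    policyVersion: versions.policy,
    promptVersion: versions.prompt,
    modelVersion: versions.model,
    decision,
    timestamp: new Date().toISOString(),
  };
}
```

Storing the hash rather than raw text also helps when the draft itself contains client data that should not live in the audit index.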
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.