How to Build a Compliance-Checking Agent Using AutoGen in TypeScript for Fintech
A compliance checking agent reviews fintech workflows, customer communications, and transaction metadata against policy rules before anything reaches production or a human reviewer. It matters because in fintech, a missed sanction hit, KYC gap, or restricted-product disclosure can become a regulatory incident, not just a bug.
Architecture
Build this agent as a small system, not a single prompt.
- Policy intake layer
  - Loads compliance rules from JSON, YAML, or a policy service.
  - Keeps versioned rules for auditability.
- AutoGen assistant agent
  - Uses `AssistantAgent` to interpret the request and map it to policy checks.
  - Produces structured findings, not free-form advice.
- Compliance tools
  - Deterministic functions for sanctions screening, KYC completeness checks, product suitability checks, and jurisdiction validation.
  - These should run outside the model.
- Human review gate
  - Escalates borderline cases to an approver.
  - Keeps the agent from making final decisions on regulated actions.
- Audit logger
  - Stores inputs, outputs, policy version, model version, and tool results.
  - Needed for internal audit and regulator review.
- Data boundary controls
  - Redact PII before sending to the model.
  - Enforce residency by routing requests to approved regions only.
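The policy intake layer can be sketched as a small loader that refuses unversioned rule files, so every later decision can be traced to an exact rule set. The file shape and path handling here are illustrative assumptions, not a specific policy-service API:

```typescript
import { readFileSync } from "node:fs";

// Illustrative shape for a versioned rule file; adapt to your policy service.
export type PolicyRule = {
  id: string;
  description: string;
  severity: "low" | "medium" | "high";
};

export type PolicyBundle = {
  version: string;
  rules: PolicyRule[];
};

// Loads a rule bundle and rejects unversioned files, so the audit trail
// can always name the rule set that produced a decision.
export function loadPolicyBundle(path: string): PolicyBundle {
  const bundle = JSON.parse(readFileSync(path, "utf8")) as PolicyBundle;
  if (!bundle.version) {
    throw new Error(`Policy file ${path} has no version field`);
  }
  return bundle;
}
```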
Implementation
1) Install AutoGen and define your compliance types
Use the TypeScript AutoGen package and keep your domain objects explicit. Compliance agents work better when you force structure early.
```bash
npm install @microsoft/autogen typescript ts-node zod
```
```typescript
export type ComplianceInput = {
  customerId: string;
  jurisdiction: "US" | "UK" | "EU";
  product: "payments" | "lending" | "crypto";
  amount: number;
  memo?: string;
  kycStatus: "complete" | "pending" | "failed";
};

export type ComplianceFinding = {
  ruleId: string;
  severity: "low" | "medium" | "high";
  status: "pass" | "warn" | "block";
  message: string;
};
```
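For reference, a well-formed input looks like this. The values are illustrative, and the type is repeated inline so the snippet stands alone:

```typescript
// Repeated from the types above so this snippet is self-contained.
type ComplianceInput = {
  customerId: string;
  jurisdiction: "US" | "UK" | "EU";
  product: "payments" | "lending" | "crypto";
  amount: number;
  memo?: string;
  kycStatus: "complete" | "pending" | "failed";
};

// A typical low-risk request: completed KYC, modest amount, EU payments.
const example: ComplianceInput = {
  customerId: "cus_123",
  jurisdiction: "EU",
  product: "payments",
  amount: 2500,
  kycStatus: "complete",
};
```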
2) Implement deterministic compliance tools
Do not ask the LLM to “figure out” sanctions or KYC status. Use tools for that and let the model explain the result.
```typescript
import { z } from "zod";
// Path assumed; this is the ComplianceFinding type defined in step 1.
import type { ComplianceFinding } from "./types";

const inputSchema = z.object({
  customerId: z.string(),
  jurisdiction: z.enum(["US", "UK", "EU"]),
  product: z.enum(["payments", "lending", "crypto"]),
  amount: z.number().positive(),
  memo: z.string().optional(),
  kycStatus: z.enum(["complete", "pending", "failed"]),
});

export function evaluateCompliance(input: unknown): ComplianceFinding[] {
  const data = inputSchema.parse(input);
  const findings: ComplianceFinding[] = [];

  if (data.kycStatus !== "complete") {
    findings.push({
      ruleId: "KYC-001",
      severity: data.kycStatus === "failed" ? "high" : "medium",
      status: data.kycStatus === "failed" ? "block" : "warn",
      message: `KYC status is ${data.kycStatus}.`,
    });
  }

  if (data.product === "crypto" && data.jurisdiction === "US") {
    findings.push({
      ruleId: "JUR-014",
      severity: "high",
      status: "warn",
      message: "Crypto activity in the US requires enhanced review.",
    });
  }

  if (data.amount >= 10000) {
    findings.push({
      ruleId: "AML-022",
      severity: "medium",
      status: "warn",
      message: "Transaction exceeds manual review threshold.",
    });
  }

  return findings.length
    ? findings
    : [{
        ruleId: "BASE-000",
        severity: "low",
        status: "pass",
        message: "No policy violations detected.",
      }];
}
```
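Because the agent should only recommend, it also helps to derive a deterministic fallback decision straight from the finding statuses. This reducer is a sketch; the APPROVE/REVIEW/BLOCK vocabulary mirrors the agent contract used in the next step:

```typescript
type FindingStatus = "pass" | "warn" | "block";

// Maps deterministic findings to the strictest applicable decision:
// any "block" finding blocks outright, any "warn" routes to human review,
// and only an all-"pass" result approves.
export function decisionFromFindings(
  statuses: FindingStatus[],
): "APPROVE" | "REVIEW" | "BLOCK" {
  if (statuses.includes("block")) return "BLOCK";
  if (statuses.includes("warn")) return "REVIEW";
  return "APPROVE";
}
```

This gives the workflow engine a floor: the model's recommendation can escalate beyond this decision, but should never relax it.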
3) Wire the tool into an AutoGen AssistantAgent
The pattern here is simple: the assistant reasons over policy findings, but the actual decision signal comes from your deterministic tool output.
```typescript
import { AssistantAgent } from "@microsoft/autogen";
// Paths assumed; these come from steps 1 and 2.
import type { ComplianceInput } from "./types";
import { evaluateCompliance } from "./tools";

const complianceAgent = new AssistantAgent({
  name: "compliance_agent",
  systemMessage: [
    "You are a fintech compliance checker.",
    "Only assess the provided policy findings.",
    "Return JSON with keys: decision, reasons, escalationRequired.",
    "Decision must be one of: APPROVE, REVIEW, BLOCK.",
    "Never invent regulations or policies.",
  ].join(" "),
});

export async function checkCompliance(input: ComplianceInput) {
  const findings = evaluateCompliance(input);
  const prompt = `
Customer context:
${JSON.stringify(input)}

Deterministic compliance findings:
${JSON.stringify(findings)}
`;
  const result = await complianceAgent.generateReply([{ role: "user", content: prompt }]);

  return {
    input,
    findings,
    agentResponse: typeof result === "string" ? result : JSON.stringify(result),
  };
}
```
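The agent's reply is untrusted text, so it pays to validate it before anything downstream acts on it. This parser is a sketch of one defensive approach: anything malformed degrades to REVIEW with escalation, so a bad completion can never auto-approve. The `AgentDecision` shape matches the JSON contract in the system message above:

```typescript
export type AgentDecision = {
  decision: "APPROVE" | "REVIEW" | "BLOCK";
  reasons: string[];
  escalationRequired: boolean;
};

// Defensive parse of the model's reply. Malformed JSON, an unknown
// decision value, or missing fields all fall back to escalation.
export function parseAgentDecision(raw: string): AgentDecision {
  const fallback: AgentDecision = {
    decision: "REVIEW",
    reasons: ["Agent response was not valid structured JSON"],
    escalationRequired: true,
  };
  try {
    const parsed = JSON.parse(raw);
    if (!["APPROVE", "REVIEW", "BLOCK"].includes(parsed.decision)) {
      return fallback;
    }
    return {
      decision: parsed.decision,
      reasons: Array.isArray(parsed.reasons) ? parsed.reasons : [],
      escalationRequired: Boolean(parsed.escalationRequired),
    };
  } catch {
    return fallback;
  }
}
```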
4) Add an audit trail and human escalation
For fintech, every blocked or reviewed action needs traceability. Log the exact input snapshot, policy version, and model response.
```typescript
import { randomUUID } from "node:crypto";
// Paths assumed; checkCompliance comes from step 3.
import { checkCompliance } from "./agent";
import type { ComplianceInput } from "./types";

type AuditRecord = {
  requestId: string;
  policyVersion: string;
  modelVersion?: string;
  timestamp: string;
};

export async function runComplianceCheck(input: ComplianceInput) {
  const auditBase: AuditRecord = {
    requestId: randomUUID(),
    policyVersion: process.env.POLICY_VERSION ?? "unknown",
    timestamp: new Date().toISOString(),
  };

  const result = await checkCompliance(input);

  // The model reply is untrusted: a malformed response must never
  // suppress escalation, so parse failures default to REVIEW.
  let decision: string | undefined;
  try {
    decision = JSON.parse(result.agentResponse).decision;
  } catch {
    decision = "REVIEW";
  }
  const shouldEscalate = decision === "REVIEW" || decision === "BLOCK";

  return {
    ...auditBase,
    ...result,
    escalationRequired:
      shouldEscalate || result.findings.some(f => f.status !== "pass"),
  };
}
```
Production Considerations
- Enforce data residency
  - Keep EU customer checks in EU-hosted infrastructure.
  - Do not send raw PII across regions unless your legal basis and contracts allow it.
- Log for audit, not just debugging
  - Store prompt inputs, tool outputs, final decision, and policy version.
  - Make logs immutable or append-only where possible.
- Use hard guardrails
  - The agent should never execute payments or approve onboarding directly.
  - It can recommend APPROVE/REVIEW/BLOCK only; a workflow engine makes the final move.
- Monitor drift in both rules and model behavior
  - Track false positives on low-risk transactions.
  - Re-run regression tests when policies change or when you upgrade models.
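An append-only audit sink can be as simple as one JSON line per decision that is never rewritten. The sketch below writes to local disk for illustration; a production system would target WORM storage or an append-only database table instead:

```typescript
import { appendFileSync } from "node:fs";

// Append-only JSONL audit sink: one line per decision, never rewritten.
// Stamps each record with a write-time timestamp so replays are detectable.
export function appendAuditRecord(
  path: string,
  record: Record<string, unknown>,
): void {
  const line = JSON.stringify({ ...record, loggedAt: new Date().toISOString() });
  appendFileSync(path, line + "\n", { flag: "a" });
}
```

JSONL keeps each record independently parseable, so a partially written final line after a crash never corrupts the earlier history.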
Common Pitfalls
- Letting the LLM decide compliance from scratch
  - Avoid this by pushing all rule evaluation into deterministic code.
  - The model should summarize and classify outcomes, not invent them.
- Skipping versioning on policies
  - If you cannot say which rule set produced a block, your audit trail is weak.
  - Version policy files and store that version with every decision.
- Sending too much sensitive data to the agent
  - Redact account numbers, SSNs, passport IDs, and free-text notes where possible.
  - Pass only what is needed for the specific check.
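A minimal redaction pass over free-text fields might look like this. The patterns are illustrative only and are not a substitute for a real DLP or redaction service:

```typescript
// Illustrative patterns: US-style SSNs and bare 13-19 digit numbers
// (covering most card/account formats). Real redaction needs a proper
// DLP service; this only shows the shape of the pre-model scrub.
const SSN_PATTERN = /\b\d{3}-\d{2}-\d{4}\b/g;
const ACCOUNT_PATTERN = /\b\d{13,19}\b/g;

// Scrubs a memo before it is included in any model prompt.
// SSNs are replaced first so their digit groups are never
// misread as part of a longer account number.
export function redactMemo(memo: string): string {
  return memo
    .replace(SSN_PATTERN, "[REDACTED-SSN]")
    .replace(ACCOUNT_PATTERN, "[REDACTED-ACCT]");
}
```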
A good fintech compliance agent is boring in the right places. Deterministic checks do the real enforcement work; AutoGen handles interpretation, explanation, and routing to humans when risk crosses your threshold.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.