How to Build a Compliance Checking Agent Using AutoGen in TypeScript for Healthcare
A compliance checking agent for healthcare reviews patient-facing or internal text against policy before it leaves your system. It matters because one bad response can expose PHI, violate HIPAA, or create a record-retention problem that turns into an audit finding.
Architecture
- Policy loader
  - Pulls HIPAA, internal security policy, and local jurisdiction rules from versioned documents.
  - Keeps the agent’s behavior tied to the current compliance baseline.
- Compliance reviewer agent
  - Uses AssistantAgent to inspect text and return structured findings.
  - Focuses on PHI exposure, minimum-necessary language, consent language, and prohibited claims.
- Decision orchestrator
  - Uses RoundRobinGroupChat or a direct agent call flow to coordinate review and escalation.
  - Routes borderline cases to a human reviewer.
- Audit logger
  - Persists prompt, model output, policy version, timestamp, and reviewer decision.
  - Gives you an evidence trail for internal audits and incident response.
- Redaction layer
  - Removes identifiers before any external model call (see the sketch after this list).
  - Enforces data minimization and supports residency constraints.
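Here is a minimal sketch of what the redaction layer can look like: a regex pass that strips obvious identifiers before any text reaches a model. The redact helper and its patterns are illustrative only, not a complete PHI scrubber.

// Minimal redaction pass. Illustrative patterns only; extend for MRNs, addresses, names, etc.
const REDACTION_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]"],                 // SSN-style numbers
  [/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "[REDACTED_PHONE]"],   // US phone numbers
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED_EMAIL]"],         // email addresses
  [/\b\d{1,2}\/\d{1,2}\/\d{2,4}\b/g, "[REDACTED_DATE]"],        // dates such as DOB
];

export function redact(text: string): string {
  // Apply every pattern in turn and return the scrubbed text.
  return REDACTION_PATTERNS.reduce(
    (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
    text
  );
}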
Implementation
1) Install AutoGen for TypeScript and set up your policy types
Use the AutoGen TypeScript package and define the compliance result shape up front. For healthcare, you want deterministic outputs that are easy to log and review.
npm install @autogen-ai/autogen openai zod
import { AssistantAgent } from "@autogen-ai/autogen";
import { z } from "zod";
const ComplianceResultSchema = z.object({
  verdict: z.enum(["pass", "fail", "needs_human_review"]),
  findings: z.array(
    z.object({
      rule: z.string(),
      severity: z.enum(["low", "medium", "high"]),
      detail: z.string(),
    })
  ),
});
export type ComplianceResult = z.infer<typeof ComplianceResultSchema>;
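The policy loader from the architecture section pairs naturally with this step: every review should carry the version of the rules it was judged against. A minimal sketch, assuming policies live as versioned JSON files on disk; the PolicyDocument shape and the loadPolicy helper are assumptions for illustration, not part of AutoGen.

import { readFileSync } from "node:fs";

// Hypothetical shape for a versioned policy document; adapt to your own policy store.
type PolicyDocument = {
  version: string;   // e.g. "hipaa-2024.2", pinned into every audit record
  rules: string[];   // plain-language rules you can inject into the system message
};

export function loadPolicy(path: string): PolicyDocument {
  const doc = JSON.parse(readFileSync(path, "utf8")) as PolicyDocument;
  if (!doc.version || !Array.isArray(doc.rules)) {
    throw new Error(`Policy file at ${path} is missing a version or rules`);
  }
  return doc;
}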
2) Create a compliance agent with a strict system message
The key pattern is to make the agent behave like a policy reviewer, not a general assistant. In healthcare, that means it should look for PHI leakage, avoid medical advice drift, and flag anything that needs human approval.
const complianceAgent = new AssistantAgent({
  name: "healthcare_compliance_reviewer",
  modelClient: {
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY!,
  },
  systemMessage: `
You are a healthcare compliance reviewer.
Check text for:
- PHI exposure under HIPAA
- minimum necessary principle violations
- unsafe medical advice or diagnosis claims
- missing consent/disclosure language
- retention/audit risks
Return ONLY valid JSON with:
{
  "verdict": "pass" | "fail" | "needs_human_review",
  "findings": [{"rule":"", "severity":"low|medium|high", "detail":""}]
}
`,
});
3) Run a review request and parse the result
This is the core execution path. You pass the content to review, get back structured output, then validate it before any downstream action. If you are reviewing outbound messages, block release on fail and queue needs_human_review.
async function checkCompliance(text: string): Promise<ComplianceResult> {
  const response = await complianceAgent.run([
    {
      role: "user",
      content: `Review this healthcare text for compliance:\n\n${text}`,
    },
  ]);

  const raw = response.messages.at(-1)?.content ?? "";
  return ComplianceResultSchema.parse(JSON.parse(raw));
}

async function main() {
  const draft = `
Hi John,
Your lab results indicate diabetes. Please tell your employer immediately.
Your SSN on file is incorrect.
`;

  const result = await checkCompliance(draft);
  console.log(JSON.stringify(result, null, 2));

  if (result.verdict !== "pass") {
    // block release or route to human review
    process.exitCode = 1;
  }
}

// Surface any thrown error (bad JSON, schema failure, API error) as a non-zero exit.
main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});
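One fragile spot in checkCompliance is the bare JSON.parse: models sometimes wrap output in markdown fences or drift from the format even with a JSON-only instruction. A defensive parser that fails closed is a reasonable safeguard; the parseComplianceResult helper and its fence-stripping regex below are assumptions about how models commonly misbehave, not AutoGen behavior.

function parseComplianceResult(raw: string): ComplianceResult {
  // Strip markdown code fences the model may add despite the JSON-only instruction.
  const cleaned = raw.replace(/^```(?:json)?\s*/i, "").replace(/\s*```$/, "").trim();
  try {
    return ComplianceResultSchema.parse(JSON.parse(cleaned));
  } catch {
    // Fail closed: anything unparseable goes to a human instead of being released.
    return {
      verdict: "needs_human_review",
      findings: [
        { rule: "output_format", severity: "high", detail: "Model output failed JSON/schema validation" },
      ],
    };
  }
}

Swap this in for the direct JSON.parse call in checkCompliance if you see format drift in practice.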
4) Add human escalation and audit logging
In production, do not let the agent be the final authority on borderline healthcare content. Log every decision with the policy version so you can reconstruct why something was allowed or blocked later.
type AuditRecord = {
  requestId: string;
  policyVersion: string;
  inputHash: string;
  verdict: ComplianceResult["verdict"];
  findings: ComplianceResult["findings"];
};

function writeAudit(record: AuditRecord) {
  // Replace with a durable, access-controlled sink in production.
  console.log("AUDIT", JSON.stringify(record));
}
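A thin wrapper can tie the review call and the audit record together so no decision escapes logging. A minimal sketch: the reviewWithAudit helper is illustrative, it uses Node's built-in crypto for the request ID and input hash (hashing keeps raw PHI out of the audit log itself), and policyVersion is assumed to come from a policy loader like the one sketched in step 1.

import { createHash, randomUUID } from "node:crypto";

async function reviewWithAudit(text: string, policyVersion: string): Promise<ComplianceResult> {
  const result = await checkCompliance(text);

  writeAudit({
    requestId: randomUUID(),
    policyVersion,
    // Store a hash, not the raw text, so the audit log itself does not hold PHI.
    inputHash: createHash("sha256").update(text).digest("hex"),
    verdict: result.verdict,
    findings: result.findings,
  });

  return result;
}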
If you need multi-agent review, use RoundRobinGroupChat with one agent checking policy interpretation and another checking patient-safety risk. That pattern works well when legal/compliance wants two independent opinions before release.
Production Considerations
- Data residency
  - Keep PHI in-region if your hospital or payer requires it.
  - If you must use an external model endpoint, redact identifiers first and store only hashed references in logs.
- Monitoring
  - Track false positives, false negatives, escalation rate, and time-to-decision (see the counter sketch after this list).
  - Alert when the agent starts approving content that should have been blocked.
- Guardrails
  - Enforce JSON-only outputs with schema validation.
  - Block free-form responses from reaching users or downstream systems.
- Auditability
  - Store prompt version, policy version, model name, timestamp, request ID, and final decision.
  - Treat these records as regulated operational data.
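Monitoring can start as a few counters per policy version before you reach for a full observability stack. A minimal in-memory sketch of the signals listed above; the ReviewMetrics shape and recordVerdict helper are illustrative, and in production you would push these numbers to your metrics platform instead.

// Per-policy-version counters for the monitoring signals listed above.
type ReviewMetrics = {
  total: number;
  passed: number;
  failed: number;
  escalated: number;         // verdict === "needs_human_review"
  overturnedByHuman: number; // proxy for false positives/negatives once reviewers weigh in
};

const metrics = new Map<string, ReviewMetrics>();

function recordVerdict(policyVersion: string, verdict: ComplianceResult["verdict"]) {
  const m = metrics.get(policyVersion) ??
    { total: 0, passed: 0, failed: 0, escalated: 0, overturnedByHuman: 0 };

  m.total += 1;
  if (verdict === "pass") m.passed += 1;
  else if (verdict === "fail") m.failed += 1;
  else m.escalated += 1;

  metrics.set(policyVersion, m);
}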
Common Pitfalls
- Sending raw PHI to the model
  Redact names, MRNs, phone numbers, addresses, dates of birth, and insurance IDs before review. If your workflow does not support redaction first, it is not ready for production healthcare use.
- Letting the agent make final legal decisions
  The agent should flag issues; it should not replace compliance staff. Use needs_human_review for anything ambiguous or high impact.
- No versioning on policies
  If you do not pin the policy document version in every audit record, you cannot explain historical decisions during an audit. Version your rules like code and deploy them together.
- Relying on natural language output
  Free-form text is hard to validate and easy to break downstream. Force structured JSON output and validate it with zod before acting on it.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.