How to Build a Compliance-Checking Agent for Healthcare Using CrewAI in TypeScript
A compliance checking agent for healthcare reviews patient-facing content, workflows, and case notes against policy before they leave your system. It matters because the failure modes are expensive: HIPAA exposure, bad audit trails, incorrect disclosures, and data residency violations that can turn a simple automation into a legal incident.
Architecture
- **Policy knowledge source**
  - Store HIPAA rules, internal compliance policies, retention rules, and approved phrasing in a versioned document set.
  - Keep this separate from runtime prompts so you can update policy without redeploying code.
- **Compliance checker agent**
  - The primary `Agent` that inspects text, flags PHI/PII leakage, checks required disclaimers, and validates whether an action is allowed.
  - It should return structured findings, not free-form prose.
- **Policy review tool**
  - A `Tool` that retrieves the right policy section by topic, jurisdiction, or workflow type.
  - This keeps the agent grounded in your approved internal rules.
- **Audit logger**
  - Persist every request, model output, policy version used, and final decision.
  - Healthcare teams need traceability for incident response and audits.
- **Human escalation path**
  - Route ambiguous or high-risk cases to a compliance reviewer.
  - The agent should recommend escalation when confidence is low or when the request touches protected data.
- **Execution orchestrator**
  - A `Crew` with one or more agents that runs the review workflow and returns a structured result to your app.
  - In healthcare, deterministic handoff matters more than fancy multi-agent choreography.
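Since structured findings are the contract between the agent and everything downstream, it helps to pin their shape down up front. A minimal sketch of that shape in plain TypeScript (field names are illustrative, not part of any CrewAI API):

```typescript
// Hypothetical shape for the checker's structured findings.
export type ComplianceDecision = "approve" | "reject" | "escalate";

export interface ComplianceFinding {
  decision: ComplianceDecision;
  issues: string[]; // human-readable violation descriptions
  policyRefs: string[]; // IDs of the policy sections consulted
  summary: string; // one-paragraph rationale for the audit log
}

// Downstream gating stays trivial when the type is this small:
// anything that is not an explicit approval blocks the action.
export function isBlocking(f: ComplianceFinding): boolean {
  return f.decision !== "approve";
}
```

Keeping the type this small is deliberate: the app only ever branches on `decision`, while `issues` and `policyRefs` exist for the audit trail and the human reviewer.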
Implementation
1) Install CrewAI for TypeScript and define your policy tool
You want the agent to inspect content against policy text that you control. A simple retrieval tool is enough to start; just make sure the source lives in a compliant store with access controls and residency constraints.
```typescript
import { Tool } from "@crewai/core";

type PolicyLookupArgs = {
  topic: string;
};

const POLICY_INDEX: Record<string, string> = {
  hipaa: "Do not disclose PHI unless there is a permitted treatment/payment/operations basis.",
  marketing: "No patient testimonials or treatment claims without explicit legal review.",
  retention: "Keep audit logs for at least 6 years where required by policy.",
};

export const lookupPolicyTool = new Tool({
  name: "lookup_policy",
  description: "Fetches internal healthcare compliance guidance by topic",
  func: async (input: string) => {
    const args = JSON.parse(input) as PolicyLookupArgs;
    return POLICY_INDEX[args.topic.toLowerCase()] ?? "No matching policy found.";
  },
});
```
2) Create a compliance agent with strict output requirements
For healthcare workflows, don’t ask the model to “be careful.” Tell it exactly what to emit. Use structured fields so your application can gate downstream actions.
```typescript
import { Agent } from "@crewai/core";
import { lookupPolicyTool } from "./tools";

export const complianceAgent = new Agent({
  role: "Healthcare Compliance Reviewer",
  goal:
    "Review healthcare content for HIPAA exposure, missing disclaimers, improper disclosures, and policy violations.",
  backstory:
    "You are a compliance analyst reviewing clinical and patient-facing content before release.",
  tools: [lookupPolicyTool],
  verbose: true,
});
```
3) Define a task that forces an actionable decision
The task should produce a decision your app can use immediately: approve, reject, or escalate. Keep the output schema tight so you can store it in an audit table without parsing essays.
```typescript
import { Task } from "@crewai/core";
import { complianceAgent } from "./agent";

export const reviewTask = new Task({
  description: `
Review the following healthcare text for compliance issues:

{{content}}

Check for:
- PHI disclosure risk
- Missing required disclaimers
- Unapproved marketing language
- Data residency concerns if any external transfer is implied

Return JSON with:
{
  "decision": "approve" | "reject" | "escalate",
  "issues": string[],
  "policy_refs": string[],
  "summary": string
}
`,
  expectedOutput:
    "Valid JSON with decision, issues, policy_refs, and summary fields.",
  agent: complianceAgent,
});
```
4) Run the crew and persist an audit record
Use a Crew to execute the task. In production, write the input hash, output hash, model metadata, and policy version into your audit log before returning anything to the caller.
```typescript
import { Crew } from "@crewai/core";
import { reviewTask } from "./task";

async function main() {
  const crew = new Crew({
    agents: [reviewTask.agent!],
    tasks: [reviewTask],
    verbose: true,
  });

  const result = await crew.kickoff({
    content:
      "Patient Jane Doe asked us to email her full lab results to her personal Gmail account.",
    topic: "hipaa",
    policyVersion: "2026-01",
    // crypto.randomUUID() is global in Node 19+; on older runtimes,
    // import { randomUUID } from "node:crypto" instead.
    requestId: crypto.randomUUID(),
  });

  // Persist the audit record here before returning the result to the caller.
  console.log(result);
}

main().catch(console.error);
```
If you want this to be production-grade, wrap kickoff() in a service layer that validates input size, strips secrets from logs, and rejects unsupported jurisdictions before the agent runs.
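That service layer can be plain TypeScript with no agent dependency at all. A sketch of the pre-flight gate, where the size limit, jurisdiction list, and secret patterns are all illustrative values you would tune to your own environment:

```typescript
// Pre-flight gate that runs before the crew is ever invoked.
// Limits and the jurisdiction allowlist are illustrative, not prescriptive.
const MAX_CONTENT_BYTES = 32_000;
const SUPPORTED_JURISDICTIONS = new Set(["US", "CA"]);

export interface ReviewRequest {
  content: string;
  jurisdiction: string;
}

export function preflight(
  req: ReviewRequest
): { ok: true } | { ok: false; reason: string } {
  // Reject oversized payloads before they reach the model.
  if (new TextEncoder().encode(req.content).length > MAX_CONTENT_BYTES) {
    return { ok: false, reason: "content too large" };
  }
  // Block unsupported jurisdictions at the API boundary.
  if (!SUPPORTED_JURISDICTIONS.has(req.jurisdiction)) {
    return { ok: false, reason: `unsupported jurisdiction: ${req.jurisdiction}` };
  }
  return { ok: true };
}

// Strip obvious secrets before anything is written to logs.
export function redactForLogs(text: string): string {
  return text.replace(/(api[_-]?key|token|password)\s*[:=]\s*\S+/gi, "$1=[REDACTED]");
}
```

Running this before `kickoff()` means a rejected request never touches the model, which keeps both cost and exposure down.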
Production Considerations
- **Deployment**
  - Keep the model endpoint in-region if your healthcare data residency rules require it.
  - If you process PHI, avoid sending raw payloads to third-party services unless your contracts explicitly allow it.
- **Monitoring**
  - Track decision rates by category: approve/reject/escalate.
  - Alert on spikes in escalations or repeated false approvals on PHI-heavy requests.
- **Guardrails**
  - Add deterministic pre-filters for obvious violations like SSNs, MRNs, insurance IDs, and unredacted notes.
  - Never let the agent directly execute outbound actions like email sends or record updates without human approval on high-risk cases.
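A deterministic pre-filter is just a set of cheap regex checks that run before the agent does. A minimal sketch, where the patterns are illustrative and would need tuning to your own record formats:

```typescript
// Deterministic pre-filter: cheap pattern checks that run before the agent.
// Patterns are illustrative; tune them to your own identifier formats.
const PHI_PATTERNS: Record<string, RegExp> = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/, // US Social Security number
  mrn: /\bMRN[:#\s]*\d{6,10}\b/i, // medical record number
  insuranceId: /\b[A-Z]{3}\d{9}\b/, // example insurance member ID format
};

// Returns the names of all patterns that matched; empty means "no obvious hit".
export function prefilter(text: string): string[] {
  return Object.entries(PHI_PATTERNS)
    .filter(([, re]) => re.test(text))
    .map(([name]) => name);
}
```

Any non-empty result can short-circuit straight to reject or escalate without spending a model call, and the matched pattern names go into the audit log as deterministic evidence.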
- **Auditability**
  - Store prompt version, policy version, model name, timestamp, request ID, and final decision.
  - That gives you defensible traces during audits and incident reviews.
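One way to shape that audit row is to store hashes of the input and output rather than the raw text, which keeps PHI out of the log store itself. A sketch under that assumption (the record fields mirror the list above; the shape is illustrative):

```typescript
import { createHash } from "node:crypto";

// Illustrative audit record: content hashes instead of raw text keep PHI
// out of the audit store while still letting you prove what was reviewed.
export interface AuditRecord {
  requestId: string;
  timestamp: string; // ISO 8601
  promptVersion: string;
  policyVersion: string;
  model: string;
  inputHash: string; // sha256 of the reviewed content
  outputHash: string; // sha256 of the agent's raw output
  decision: "approve" | "reject" | "escalate";
}

export function sha256(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}
```

During an incident review you can re-hash the disputed content and match it against `inputHash` without the audit table ever having held the patient data itself.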
Common Pitfalls
- **Treating the LLM output as authoritative**
  - Don’t trust raw natural language responses.
  - Force JSON output and validate it before any downstream action.
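Validation can be a small hand-rolled check (a schema library would also work); the point is that anything that fails parsing is treated as an escalation, never as an approval. A sketch matching the task's JSON contract:

```typescript
// Validate the agent's raw output against the task's JSON contract.
// Anything invalid returns null; callers should treat null as "escalate".
type Decision = "approve" | "reject" | "escalate";

interface ReviewResult {
  decision: Decision;
  issues: string[];
  policy_refs: string[];
  summary: string;
}

export function parseReviewResult(raw: string): ReviewResult | null {
  let obj: unknown;
  try {
    obj = JSON.parse(raw);
  } catch {
    return null; // not JSON at all
  }
  if (typeof obj !== "object" || obj === null) return null;

  const o = obj as Record<string, unknown>;
  const isStrArray = (v: unknown): v is string[] =>
    Array.isArray(v) && v.every((x) => typeof x === "string");

  if (
    (o.decision === "approve" || o.decision === "reject" || o.decision === "escalate") &&
    isStrArray(o.issues) &&
    isStrArray(o.policy_refs) &&
    typeof o.summary === "string"
  ) {
    return o as unknown as ReviewResult;
  }
  return null; // shape mismatch
}
```

The fail-closed default matters: a malformed response should cost you a human review, not an accidental approval.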
- **Embedding policy directly in prompts only**
  - Prompt text drifts fast and becomes impossible to govern.
  - Put policies in versioned documents or tools so legal/compliance can update them independently.
- **Ignoring data residency and PHI boundaries**
  - A working demo that sends patient content to an external region is not shippable.
  - Classify data first, redact where possible, and block unsupported jurisdictions at the API boundary.
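The redaction step can also be deterministic: mask likely identifiers before content crosses the boundary. A minimal sketch, where the patterns are illustrative stand-ins for a fuller classification ruleset:

```typescript
// Redaction pass: mask likely identifiers before content leaves your boundary.
// Patterns are illustrative; a real pipeline needs a fuller ruleset.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"], // US Social Security numbers
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"], // email addresses
  [/\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g, "[PHONE]"], // US phone numbers
];

export function redactPhi(text: string): string {
  return REDACTIONS.reduce((t, [re, repl]) => t.replace(re, repl), text);
}
```

Redacting before the model call also shrinks what the agent can accidentally echo back, which simplifies the downstream compliance check.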
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit