How to Build a Compliance-Checking Agent Using LangChain in TypeScript for Healthcare

By Cyprian Aarons · Updated 2026-04-21
Tags: compliance-checking, langchain, typescript, healthcare

A compliance checking agent for healthcare reviews text, policies, and patient-facing content against rules like HIPAA, internal policy, and jurisdiction-specific handling requirements. It matters because a small mistake — exposing PHI, missing consent language, or sending data to the wrong region — turns into legal risk, audit findings, and operational drag.

Architecture

  • Policy loader
    • Pulls compliance rules from versioned sources: HIPAA policy docs, internal SOPs, retention rules, and regional constraints.
  • Document intake layer
    • Accepts claims notes, prior auth drafts, patient messages, or workflow text as input.
  • LLM compliance chain
    • Uses LangChain to classify risk, extract violations, and produce structured findings.
  • Retrieval layer
    • Fetches the relevant policy snippets before checking the content so the agent reasons against the current rule set.
  • Audit logger
    • Stores input hash, policy version, model version, decision output, and reviewer overrides.
  • Human review gate
    • Routes high-risk cases to compliance staff instead of auto-approving them.
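
The layers above can be sketched as one composed pipeline. This is a minimal sketch with every stage stubbed out; the function names and shapes are illustrative, not LangChain APIs:

```typescript
type Verdict = "pass" | "review" | "fail";

// Hypothetical glue code: each layer is passed in as a function so the
// pipeline itself stays testable without a model or a policy store.
async function runPipeline(
  content: string,
  retrieve: (c: string) => Promise<string[]>,          // retrieval layer
  check: (c: string, ctx: string) => Promise<Verdict>, // LLM compliance chain
  audit: (record: object) => void                      // audit logger
): Promise<Verdict> {
  const snippets = await retrieve(content);            // fed by the policy loader
  const verdict = await check(content, snippets.join("\n"));
  audit({ at: new Date().toISOString(), verdict });    // review gate acts on "review"/"fail"
  return verdict;
}
```

The rest of this guide fills in real implementations for each of these stages.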

Implementation

1) Install the LangChain packages you actually need

For TypeScript, keep the stack small and explicit. You want langchain for chains and prompts, plus a model provider package such as @langchain/openai.

npm install langchain @langchain/openai zod

Set your environment variables:

export OPENAI_API_KEY="your-key"

2) Define a structured compliance output

Healthcare workflows need deterministic outputs. Don’t ask the model for free-form prose when you need an auditable decision.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";

const ComplianceSchema = z.object({
  verdict: z.enum(["pass", "review", "fail"]),
  risks: z.array(z.string()),
  citedPolicy: z.array(z.string()),
  rationale: z.string(),
});

const parser = StructuredOutputParser.fromZodSchema(ComplianceSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are a healthcare compliance checker.
Check for PHI exposure, missing consent language, improper sharing,
and data residency concerns. Only use the provided policy context.`,
  ],
  [
    "human",
    `Policy context:
{policyContext}

Content:
{content}

Return your answer in this format:
{format_instructions}`,
  ],
]);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function checkCompliance(content: string, policyContext: string) {
  // formatMessages preserves the system/human role split; format() would
  // flatten both roles into a single string.
  const messages = await prompt.formatMessages({
    content,
    policyContext,
    format_instructions: parser.getFormatInstructions(),
  });

  const response = await model.invoke(messages);
  return parser.parse(response.content as string);
}

This pattern gives you a typed result that can be logged and reviewed. In healthcare systems, that matters more than clever prompting.
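
One failure mode worth planning for is the model emitting output that the parser rejects. A hedged sketch of a fail-closed wrapper, where the `toFinding` helper and its fallback values are illustrative rather than part of LangChain:

```typescript
type Finding = {
  verdict: "pass" | "review" | "fail";
  risks: string[];
  citedPolicy: string[];
  rationale: string;
};

// Fail closed: malformed model output is never treated as a pass.
function toFinding(raw: unknown): Finding {
  const r = raw as Partial<Finding> | null;
  const valid =
    r !== null &&
    typeof r === "object" &&
    (r.verdict === "pass" || r.verdict === "review" || r.verdict === "fail") &&
    Array.isArray(r.risks);
  if (valid) return raw as Finding;
  return {
    verdict: "review",
    risks: ["unparseable model output"],
    citedPolicy: [],
    rationale: "Model output failed validation; routed to human review.",
  };
}
```

The point is directional: when validation fails, the safe default in a compliance context is escalation, not approval.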

3) Add retrieval so the agent checks against current policy

The agent should not rely on memory. Use retrieval to inject the relevant HIPAA/internal policy snippets before every decision.

import { Document } from "@langchain/core/documents";

const policies = [
  new Document({
    pageContent:
      "Do not include PHI in unsecured channels. Minimum necessary standard applies.",
    metadata: { id: "hipaa-minimum-necessary", version: "2025-01" },
  }),
  new Document({
    pageContent:
      "Patient messages containing lab results must be routed through approved portals only.",
    metadata: { id: "portal-routing", version: "2025-01" },
  }),
];

function retrieveRelevantPolicies(content: string) {
  // Replace with vector search in production.
  const lower = content.toLowerCase();
  const keywords = ["phi", "lab", "patient", "diagnosis", "name", "dob"];
  const matched = keywords.filter((k) => lower.includes(k));
  // Only return policies that mention a matched keyword,
  // not the whole set whenever any keyword appears in the content.
  return policies.filter((doc) =>
    matched.some((k) => doc.pageContent.toLowerCase().includes(k))
  );
}

async function checkWithPolicies(content: string) {
  const docs = retrieveRelevantPolicies(content);
  const policyContext = docs
    .map((d) => `[${d.metadata.id} v${d.metadata.version}] ${d.pageContent}`)
    .join("\n");

  return checkCompliance(content, policyContext);
}

In production you would swap this for a vector store or keyword retriever. The important part is that policy text is injected at runtime and versioned.
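
Short of a full vector store, an intermediate step is ranking policies by token overlap rather than a flat keyword list, so the most relevant snippets surface first. A sketch, where the `Policy` shape and `rankPolicies` are hypothetical:

```typescript
type Policy = { id: string; version: string; text: string };

// Score each policy by how many of its tokens also appear in the content,
// drop non-matches, and keep the top K.
function rankPolicies(content: string, policies: Policy[], topK = 3): Policy[] {
  const tokens = new Set(content.toLowerCase().split(/\W+/).filter(Boolean));
  return policies
    .map((p) => ({
      p,
      score: p.text.toLowerCase().split(/\W+/).filter((t) => tokens.has(t)).length,
    }))
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((s) => s.p);
}
```

The same shape carries over to a vector store: retrieve, rank by similarity, truncate to top K, and format with policy IDs and versions.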

4) Route high-risk cases to human review

A compliance checker should not auto-decide everything. If it sees PHI leakage or ambiguous consent language, escalate.

type ReviewAction = {
  status: "auto-approved" | "needs-review" | "blocked";
};

function routeDecision(verdict: "pass" | "review" | "fail"): ReviewAction {
  if (verdict === "pass") return { status: "auto-approved" };
  if (verdict === "review") return { status: "needs-review" };
  return { status: "blocked" };
}

import { createHash } from "node:crypto";

async function runAgent(content: string) {
  const result = await checkWithPolicies(content);
  const action = routeDecision(result.verdict);

  const auditRecord = {
    timestamp: new Date().toISOString(),
    // Hash, don't encode: base64 is reversible and would put PHI in the log.
    contentHash: createHash("sha256").update(content).digest("hex"),
    verdict: result.verdict,
    risks: result.risks,
    citedPolicy: result.citedPolicy,
    action: action.status,
  };

  console.log(JSON.stringify(auditRecord));
  return { result, action };
}

That audit record is not optional in healthcare. You need traceability for why a piece of content was blocked or escalated.
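
Because the `contentHash` field is a one-way digest rather than a reversible encoding, you can later confirm that a logged decision actually corresponds to the content it claims to cover. A sketch, with `verifyAuditRecord` as a hypothetical helper assuming SHA-256 hashing:

```typescript
import { createHash } from "node:crypto";

// Recompute the digest and compare it against the stored audit record.
function verifyAuditRecord(
  content: string,
  record: { contentHash: string }
): boolean {
  const digest = createHash("sha256").update(content).digest("hex");
  return digest === record.contentHash;
}
```

This kind of tamper check is cheap to run during audits and catches records that were edited, or logged against the wrong input, after the fact.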

Production Considerations

  • Data residency
    • Keep PHI inside approved regions and vendors.
    • If your org requires US-only processing or tenant isolation by geography, enforce that at the transport and storage layers.
  • Monitoring
    • Log verdict distribution, escalation rate, false positives, and reviewer overrides.
    • Watch for drift when policies change or clinicians start using new phrasing.
  • Guardrails
    • Redact obvious identifiers before sending text to the model when possible.
    • Block unsupported requests like diagnosis generation or treatment recommendations unless your scope explicitly allows them.
  • Auditability
    • Store policy version IDs alongside every decision.
    • Keep immutable logs of inputs, outputs, reviewer actions, and model versions.
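
The redaction guardrail above can start as simple pattern substitution before text leaves your boundary. The patterns below are illustrative only; real PHI detection needs a vetted de-identification library or service:

```typescript
// Each pair is (pattern, replacement token). These catch only the most
// obvious identifiers and will miss names, MRNs, addresses, and more.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],          // US SSN format
  [/\b\d{2}\/\d{2}\/\d{4}\b/g, "[DOB]"],        // MM/DD/YYYY dates
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],  // email addresses
];

function redact(text: string): string {
  return REDACTIONS.reduce((t, [re, token]) => t.replace(re, token), text);
}
```

Run this before `checkCompliance` so the model sees placeholders instead of identifiers; the audit record can then hash the redacted text as well.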

Common Pitfalls

  1. Using free-form output instead of structured decisions

    • This makes audits painful and breaks downstream automation.
    • Use StructuredOutputParser with a Zod schema so every response has a predictable shape.
  2. Checking content without current policy context

    • A static system prompt is not enough when policies change quarterly.
    • Retrieve versioned policy snippets at runtime and include them in every evaluation.
  3. Sending raw PHI to external services without controls

    • That creates residency and privacy problems fast.
    • Redact where possible, restrict vendors/contracts carefully, and keep sensitive processing inside approved environments.
  4. Auto-approving borderline cases

    • Healthcare compliance is not a binary yes/no problem in many workflows.
    • Use review as a first-class outcome and route ambiguous cases to humans with full context.

By Cyprian Aarons, AI Consultant at Topiax.