How to Build a Compliance-Checking Agent Using LangGraph in TypeScript for Healthcare

By Cyprian Aarons · Updated 2026-04-21
compliance-checking · langgraph · typescript · healthcare

A compliance-checking agent for healthcare reviews patient-facing or internal content, detects policy violations, and routes risky cases for human review. In practice, it helps teams catch PHI leakage, unsafe clinical language, missing consent language, and jurisdiction-specific policy issues before anything ships or gets sent.

Architecture

  • Input normalization layer
    • Takes raw text, metadata, and context like jurisdiction, document type, and data residency requirements.
  • Policy retrieval layer
    • Pulls the relevant compliance rules for HIPAA, internal hospital policy, payer requirements, or regional privacy laws.
  • LLM-based risk analyzer
    • Classifies the content into compliance categories like PHI exposure, unsafe advice, missing disclaimers, or prohibited claims.
  • Decision graph
    • Uses LangGraph branching to decide whether to approve, redact, escalate, or block.
  • Audit logger
    • Stores every input, rule version, model output, and final decision for traceability.
  • Human review handoff
    • Sends high-risk cases to a compliance officer or legal reviewer with a structured summary.
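
Before any LangGraph wiring, it can help to sketch these layers as plain function contracts. The following is a minimal outline under assumed names and shapes; it is illustrative only, not the implementation built in the next section.

// Illustrative layer contracts; all names here are assumptions, not a fixed API.
interface ComplianceInput {
  text: string;
  jurisdiction: string; // e.g. "US-HIPAA"
  docType: string;      // e.g. "patient_message"
}

interface PolicyBundle {
  version: string;      // versioned so every decision can be traced to a rule set
  rules: string[];
}

interface RiskFinding {
  category: string;     // e.g. "phi", "unsafe_advice"
  severity: "low" | "medium" | "high";
  rationale: string;
}

type Decision = "approve" | "redact" | "escalate" | "block";

type NormalizeInput = (raw: unknown) => ComplianceInput;
type RetrievePolicies = (input: ComplianceInput) => Promise<PolicyBundle>;
type AnalyzeRisk = (input: ComplianceInput, policies: PolicyBundle) => Promise<RiskFinding[]>;
type Decide = (findings: RiskFinding[]) => Decision;
type LogAudit = (record: Record<string, unknown>) => Promise<void>;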

Implementation

1. Define the state and node contracts

In LangGraph for TypeScript, keep the state explicit. For compliance workflows, you want the graph to carry the document payload, policy context, risk findings, and final action.

import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// Define the finding schema first so the graph state can reference it explicitly.
const FindingSchema = z.object({
  category: z.enum(["phi", "unsafe_advice", "missing_disclaimer", "policy_violation"]),
  severity: z.enum(["low", "medium", "high"]),
  rationale: z.string(),
});

type ComplianceFinding = z.infer<typeof FindingSchema>;

// Graph state: document payload, policy context, risk findings, and final action.
const ComplianceState = Annotation.Root({
  text: Annotation<string>(),
  jurisdiction: Annotation<string>(),
  docType: Annotation<string>(),
  findings: Annotation<ComplianceFinding[]>(),
  decision: Annotation<"approve" | "redact" | "escalate" | "block">(),
  auditTrail: Annotation<Record<string, unknown>[]>(),
});

This keeps the graph deterministic at the orchestration layer. The model can be probabilistic; the state shape should not be.

2. Add a policy-check node backed by an LLM

Use a structured output schema so your model returns machine-readable findings. In healthcare workflows, that matters more than free-form prose because you need auditability and consistent downstream routing.

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const checkPolicy = async (state: typeof ComplianceState.State) => {
  const prompt = `
You are a healthcare compliance reviewer.
Check the text for PHI exposure, unsafe clinical advice,
missing disclaimer language, and policy violations.

Jurisdiction: ${state.jurisdiction}
Document type: ${state.docType}

Text:
${state.text}
`;

  const structured = llm.withStructuredOutput(
    z.object({
      findings: z.array(FindingSchema),
    })
  );

  const result = await structured.invoke(prompt);

  return {
    findings: result.findings as ComplianceFinding[],
    auditTrail: [
      ...(state.auditTrail ?? []),
      {
        step: "check_policy",
        model: "gpt-4o-mini",
        jurisdiction: state.jurisdiction,
        docType: state.docType,
      },
    ],
  };
};

3. Route based on severity and build the graph

The routing logic should be conservative. For healthcare compliance agents, any high-severity PHI issue or unsafe medical guidance should escalate by default.

const routeByRisk = (state: typeof ComplianceState.State) => {
  const findings = state.findings ?? [];
  const hasHigh = findings.some((f) => f.severity === "high");
  const hasMedium = findings.some((f) => f.severity === "medium");

  if (hasHigh) return "escalate";
  if (hasMedium) return "redact";
  return "approve";
};

const redactNode = async (state: typeof ComplianceState.State) => ({
  decision: "redact" as const,
});

const escalateNode = async (state: typeof ComplianceState.State) => ({
  decision: "escalate" as const,
});

const approveNode = async (state: typeof ComplianceState.State) => ({
  decision: "approve" as const,
});

const graph = new StateGraph(ComplianceState)
  .addNode("check_policy", checkPolicy)
  .addNode("redact", redactNode)
  .addNode("escalate", escalateNode)
  .addNode("approve", approveNode)
  .addEdge("__start__", "check_policy");

Continue with conditional edges:

graph.addConditionalEdges("check_policy", routeByRisk, {
  redact: "redact",
  escalate: "escalate",
  approve: "approve",
});

graph.addEdge("redact", "__end__");
graph.addEdge("escalate", "__end__");
graph.addEdge("approve", "__end__");

const app = graph.compile();

4. Invoke it with healthcare metadata and keep an audit trail

The agent is only useful if it can explain what happened later. Store the input metadata alongside the decision so you can reconstruct why a message was blocked during an audit or incident review.

async function run() {
  const result = await app.invoke({
    text:
      "Your lab results show elevated glucose. Send me your full SSN and I can update your chart.",
    jurisdiction: "US-HIPAA",
    docType: "patient_message",
    findings: [],
    auditTrail: [],
    decision: undefined,
  });

  console.log({
    decision: result.decision,
    findings: result.findings,
    auditTrail: result.auditTrail,
  });
}

run();

Production Considerations

  • Keep PHI inside your controlled boundary
    • If you send content to an external model API, tokenize or redact identifiers first (see the redaction sketch after this list).
  • Log every decision with versioned policies
    • Store the policy bundle version, prompt version, model name, and timestamp for each run.
  • Add human review for high-risk outputs
    • Anything involving diagnosis claims, medication guidance, consent language gaps, or suspected PHI leakage should go to a reviewer.
  • Respect data residency
    • Route EU patient data to EU-hosted infrastructure only; don’t mix regions in shared queues or logs.
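
The first bullet is worth making concrete. Below is a minimal redaction sketch, assuming a regex-based pre-processing step that runs before text leaves your boundary; the patterns and placeholder format are illustrative, and a production system should use a vetted de-identification library or service instead.

// Illustrative PHI redaction before text reaches an external model API.
// The patterns below are assumptions for the sketch, not a complete rule set.
const PHI_PATTERNS: Array<{ label: string; pattern: RegExp }> = [
  { label: "SSN", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: "PHONE", pattern: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g },
  { label: "EMAIL", pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
  { label: "MRN", pattern: /\bMRN[:\s]*\d{6,10}\b/gi },
];

function redactIdentifiers(text: string): { redacted: string; hits: string[] } {
  let redacted = text;
  const hits: string[] = [];
  for (const { label, pattern } of PHI_PATTERNS) {
    redacted = redacted.replace(pattern, () => {
      hits.push(label);
      return `[REDACTED_${label}]`;
    });
  }
  return { redacted, hits };
}

// Inside checkPolicy, you could redact before building the prompt:
// const { redacted, hits } = redactIdentifiers(state.text);
// ...send `redacted` to the model and record `hits` in the audit trail.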

Common Pitfalls

  • Using free-form LLM output as the final decision

    Don’t let the model directly emit “approved” without structured checks. Use a typed schema and deterministic routing in LangGraph.

  • Skipping audit metadata

    If you don’t persist jurisdiction, document type, policy version, and model version together, you won’t be able to defend decisions later (a versioned audit record sketch follows this list).

  • Treating all violations equally

    A missing disclaimer is not the same as exposed PHI. Separate low/medium/high severity so your graph escalates only when needed.
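
To make the audit metadata pitfall concrete, here is a minimal shape for a persisted audit record, assuming the policy bundle and prompt template are versioned separately. Field names are illustrative, not a fixed schema.

// Illustrative audit record; field names are assumptions, not a fixed schema.
interface AuditRecord {
  runId: string;               // unique id for this evaluation
  timestamp: string;           // ISO 8601
  jurisdiction: string;        // e.g. "US-HIPAA"
  docType: string;             // e.g. "patient_message"
  policyBundleVersion: string; // which rule set was in force
  promptVersion: string;       // which prompt template produced the findings
  model: string;               // e.g. "gpt-4o-mini"
  findings: ComplianceFinding[];
  decision: "approve" | "redact" | "escalate" | "block";
}

// Persist one record per run, for example to an append-only store
// (`auditStore` is hypothetical here):
// await auditStore.append(record);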


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

