How to Build a Customer Support Agent Using AutoGen in TypeScript for Healthcare

By Cyprian Aarons · Updated 2026-04-21
customer-support · autogen · typescript · healthcare

A healthcare customer support agent handles patient questions, routes requests, summarizes cases, and escalates anything clinical or risky to a human. The point is not to replace staff; it is to reduce queue time while keeping protected health information (PHI) controlled, auditable, and inside policy.

Architecture

  • Patient-facing entrypoint

    • Receives chat or ticket text from the portal, contact center, or email ingestion.
    • Normalizes the request into a structured payload (a payload sketch follows this list).
  • Policy gate

    • Classifies the message before any LLM call.
    • Blocks PHI leakage, clinical advice, and unsupported workflows.
  • AutoGen agent

    • Uses AssistantAgent to draft responses, summarize cases, and propose next actions.
    • Stays constrained to support-only tasks.
  • Tool layer

    • Exposes approved operations like appointment lookup, claim status checks, FAQ retrieval, and ticket creation.
    • Never gives the model direct database access.
  • Human handoff path

    • Escalates billing disputes, symptoms, medication questions, and identity verification failures.
    • Preserves conversation context for the agent queue.
  • Audit and storage layer

    • Logs prompts, tool calls, decisions, and escalation reasons.
    • Stores data in a region that matches residency requirements.
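
As a reference point, here is a minimal sketch of the normalized payload the entrypoint might produce. The field names are illustrative, not a fixed schema:

// Illustrative payload shape produced by the entrypoint; adapt names to your stack.
export type SupportRequest = {
  requestId: string;                              // correlation ID reused in audit events
  channel: "portal" | "contact_center" | "email"; // ingestion source
  receivedAt: string;                             // ISO-8601 timestamp
  memberRef?: string;                             // opaque reference, never a raw member ID
  text: string;                                   // the patient's message, pre-redaction
};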

Implementation

1) Install AutoGen for TypeScript and define your support tools

For TypeScript projects, use the AutoGen packages that expose AssistantAgent and tool registration. Keep your tools narrow; healthcare support agents should query approved systems only.

npm install @autogenai/autogen-core @autogenai/autogen-agentchat zod

Create a small tool surface for safe support actions:

import { z } from "zod";

// Input schema for the lookup tool; validate at the boundary, not inside the model.
export const LookupTicketInput = z.object({
  ticketId: z.string().min(1),
});

export async function lookupTicket(input: z.infer<typeof LookupTicketInput>) {
  // Validate before any downstream call so malformed tool arguments fail fast.
  const { ticketId } = LookupTicketInput.parse(input);
  // Replace with your internal service call
  return {
    ticketId,
    status: "open",
    category: "billing",
    lastUpdated: new Date().toISOString(),
  };
}

export async function createEscalation(summary: string) {
  // Replace with your ticketing/queue integration
  return {
    escalationId: `esc_${Date.now()}`,
    routedTo: "human-support-queue",
    summary,
  };
}

2) Build the agent with a strict healthcare system message

Use AssistantAgent for response generation and keep the system prompt explicit about scope. The model should never provide diagnosis or interpret symptoms.

import { AssistantAgent } from "@autogenai/autogen-agentchat";
import { OpenAIChatCompletionClient } from "@autogenai/autogen-core";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY!,
});

export const supportAgent = new AssistantAgent({
  name: "healthcare_support_agent",
  modelClient,
  systemMessage: `
You are a healthcare customer support agent.
You can help with appointments, claims status, billing questions, portal access, and general policy FAQs.
Do not provide medical advice, diagnosis, triage decisions, or medication guidance.
If the user mentions symptoms, urgent issues, self-harm, or asks for clinical interpretation:
- stop
- recommend contacting a clinician or emergency services as appropriate
- escalate to a human support queue
Do not request unnecessary PHI.
Always minimize data collection and produce an audit-friendly summary.
`,
});
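
The tools from step 1 still need to be attached to the agent. In Python autogen-agentchat, AssistantAgent accepts a tools list at construction; this sketch assumes the TypeScript packages mirror that shape, so verify the signature against the version you install:

import { lookupTicket, createEscalation } from "./tools";

// Assumed API: a `tools` option mirroring Python autogen-agentchat's AssistantAgent.
// Confirm against the installed package before relying on this.
export const supportAgentWithTools = new AssistantAgent({
  name: "healthcare_support_agent",
  modelClient,
  tools: [lookupTicket, createEscalation],
  systemMessage: "...", // same scope-limited healthcare prompt as above
});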

3) Add a pre-check for PHI-sensitive requests and route to escalation

Before calling the agent, inspect the request. In healthcare systems you want deterministic guardrails outside the model so compliance does not depend on prompt behavior.

import { createEscalation } from "./tools";

// Deterministic keyword screen. Substring matching over-triggers (e.g. "painting"
// contains "pain"); in this domain, over-escalation is the safer failure mode.
function needsHumanHandoff(text: string): boolean {
  const lowered = text.toLowerCase();
  const riskyTerms = [
    "symptom",
    "pain",
    "fever",
    "medication",
    "prescription",
    "suicide",
    "self-harm",
    "emergency",
    "diagnosis",
  ];
  return riskyTerms.some((term) => lowered.includes(term));
}

export async function handleIncomingMessage(userText: string) {
  if (needsHumanHandoff(userText)) {
    const escalation = await createEscalation(
      `Clinical-risk or sensitive request detected: ${userText}`,
    );
    return {
      type: "handoff",
      escalationId: escalation.escalationId,
      message:
        "I’m routing this to a human support specialist. If this is urgent or medical in nature, contact your care team or local emergency services now.",
    };
  }

  const result = await supportAgent.run(userText);
  return {
    type: "agent_reply",
    message: result.messages.at(-1)?.content ?? "",
  };
}
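
A quick usage sketch exercising both branches; the messages are made up, and the risky one trips the keyword guard before any model call:

// Illustrative only: drive the handler from a route or queue consumer.
async function demo() {
  const safe = await handleIncomingMessage("Where can I check my claim status?");
  console.log(safe.type); // "agent_reply"

  const risky = await handleIncomingMessage("I have chest pain and a fever");
  console.log(risky.type); // "handoff"
}

demo().catch(console.error);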

4) Wire in audit logging for compliance

Healthcare teams need traceability. Log user intent classification, tool usage, final response, and escalation reason. Keep logs in your approved region and redact identifiers where possible.

import { createHash } from "node:crypto";

type AuditEvent = {
  requestId: string;
  timestamp: string;
  eventType: "inbound" | "handoff" | "agent_reply";
  inputHash?: string;
};

function hashText(text: string): string {
  // Use a real one-way hash: truncated base64 is reversible and would leak raw PHI into logs.
  return createHash("sha256").update(text).digest("hex").slice(0, 32);
}

export async function auditedHandle(requestId: string, userText: string) {
  const inboundEvent: AuditEvent = {
    requestId,
    timestamp: new Date().toISOString(),
    eventType: "inbound",
    inputHash: hashText(userText),
  };

  console.log(JSON.stringify(inboundEvent));

  const response = await handleIncomingMessage(userText);

  console.log(
    JSON.stringify({
      requestId,
      timestamp: new Date().toISOString(),
      eventType:
        response.type === "handoff" ? "handoff" : ("agent_reply" as const),
      inputHash: hashText(userText),
    }),
  );

  return response;
}

Production Considerations

  • Deploy inside your compliant boundary

    • Keep inference endpoints in the correct cloud region for residency requirements.
    • If you handle HIPAA-covered data, ensure your vendor agreements and controls match your legal posture.
  • Separate PHI from general conversation state

    • Store only what you need for resolution.
    • Redact names, member IDs, DOBs, and policy numbers from logs unless absolutely required (a redaction sketch follows this list).
  • Monitor handoff rates and unsafe completions

    • Track how often the agent escalates symptom-related messages.
    • Review any response that includes unsupported medical language.
  • Add deterministic guardrails before the model

    Control          | Why it matters             | Example
    -----------------+----------------------------+----------------------------------------------
    PHI redaction    | Reduces exposure in logs   | Mask member IDs before persistence
    Topic classifier | Blocks clinical advice     | Route “chest pain” to a human
    Tool allowlist   | Prevents unsafe actions    | Only permit claim lookup and ticket creation
    Audit trail      | Supports compliance review | Store prompt/tool/response metadata
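
The PHI redaction control above can be as simple as a pattern pass before persistence. A minimal sketch; the patterns are illustrative, since real member-ID and DOB formats vary by payer and system:

// Mask obvious identifiers before anything is written to logs.
// Patterns are illustrative, not exhaustive; tune them to your data.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],        // SSN-like token
  [/\b(19|20)\d{2}-\d{2}-\d{2}\b/g, "[DOB]"], // ISO-format date of birth
  [/\bMBR[-_]?\d{6,}\b/gi, "[MEMBER_ID]"],    // hypothetical member-ID format
];

export function redactForLogs(text: string): string {
  return REDACTIONS.reduce((out, [pattern, mask]) => out.replace(pattern, mask), text);
}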

Common Pitfalls

  1. Letting the model answer clinical questions

    • Fix this by hard-blocking symptom and medication topics before generation.
    • The system prompt is not enough on its own.
  2. Logging raw PHI everywhere

    • Fix this by redacting inputs at ingestion and storing hashes or partial tokens where possible.
    • Treat application logs as sensitive data stores.
  3. Giving the agent broad system access

    • Fix this by exposing only narrow tools like lookupTicket or createEscalation.
    • Never connect the model directly to EHR writes or unrestricted SQL (an allowlist sketch follows this list).
  4. Skipping regional deployment checks

    • Fix this by pinning inference and storage to approved regions.
    • Data residency failures are usually operational mistakes, not model mistakes.
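
For pitfall 3, the allowlist can be enforced outside the model: the model proposes a tool name, and only registered entries ever execute. A minimal sketch using the tools from step 1:

import { lookupTicket, createEscalation } from "./tools";

// Only registered tool names run; anything else is rejected before execution.
const TOOL_ALLOWLIST = new Map<string, (arg: any) => Promise<unknown>>([
  ["lookupTicket", lookupTicket],
  ["createEscalation", createEscalation],
]);

export async function runTool(name: string, arg: unknown) {
  const tool = TOOL_ALLOWLIST.get(name);
  if (!tool) throw new Error(`Tool not allowlisted: ${name}`);
  return tool(arg);
}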

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
