How to Build a Customer Support Agent for Healthcare Using LangGraph in TypeScript
A healthcare customer support agent handles patient-facing questions like appointment status, billing confusion, prior authorization updates, and clinic policy FAQs. The hard part is not answering fast; it is answering with PHI-safe routing, auditability, and escalation paths that keep regulated data out of places it should not go.
Architecture
- Input gateway
  - Accepts chat or ticket payloads from your web app, contact center, or patient portal.
  - Normalizes metadata like `patientId`, `locale`, `region`, and `caseType`.
- Policy and PHI guardrail node
  - Detects whether the user is asking for medical advice, account access, or claims details.
  - Blocks unsafe requests and routes anything sensitive to human review or a restricted workflow.
- Intent router
  - Classifies the request into support buckets:
    - appointment
    - billing
    - insurance eligibility
    - prescription refill status
    - general FAQ
- Tool execution layer
  - Calls internal systems like scheduling APIs, CRM/ticketing, claims lookup, or knowledge base search.
  - Keeps each tool narrow and auditable.
- Response composer
  - Turns tool results into a concise answer.
  - Applies healthcare-safe language: no diagnosis, no treatment advice, no unsupported claims.
- Escalation path
  - Sends edge cases to a human agent with full trace context.
  - Preserves conversation state without exposing unnecessary PHI.
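The gateway's normalization step can be sketched as a plain function. This is a sketch, not code from the article: `normalizeRequest`, its fallback values, and the set of accepted case types are assumptions you would replace with your own schema.

```typescript
// Hypothetical normalized request shape; the field names (patientId, locale,
// region, caseType) follow the gateway metadata described above.
interface SupportRequest {
  patientId: string;
  locale: string;
  region: string;
  caseType: string;
  message: string;
}

// Normalize a raw payload from chat, ticketing, or the patient portal.
// Unknown case types fall back to "general" so downstream routing stays deterministic.
function normalizeRequest(raw: Record<string, unknown>): SupportRequest {
  const caseTypes = new Set(["appointment", "billing", "insurance", "prescription", "general"]);
  const caseType = String(raw.caseType ?? "general").toLowerCase();
  return {
    patientId: String(raw.patientId ?? ""),
    locale: String(raw.locale ?? "en-US"),
    region: String(raw.region ?? "us"),
    caseType: caseTypes.has(caseType) ? caseType : "general",
    message: String(raw.message ?? "").trim(),
  };
}
```

Normalizing before anything else means every later node can trust the shape of its input instead of re-validating it.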
Implementation
1) Define the state and the tools
In LangGraph, your graph state should carry only what the workflow needs. For healthcare support, keep PHI out of the state unless a downstream node truly requires it.
```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph, START, END } from "@langchain/langgraph";

// Graph state: only what routing and response generation need. No PHI.
const SupportState = z.object({
  message: z.string(),
  intent: z.string().default("unknown"),
  risk: z.enum(["low", "medium", "high"]).default("low"),
  answer: z.string().default(""),
});
type SupportStateType = z.infer<typeof SupportState>;

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Stub tools; in production these call your knowledge base and ticketing system.
async function lookupFaq(query: string): Promise<string> {
  return `FAQ result for: ${query}`;
}

async function createHumanHandoff(reason: string): Promise<string> {
  return `Escalated to human support because: ${reason}`;
}
```
2) Build nodes for classification, routing, and response generation
Use small nodes with one responsibility each. That makes audit logs easier and reduces the blast radius when something goes wrong.
```typescript
const classifyNode = async (state: SupportStateType) => {
  const prompt = `Classify this healthcare support request into one of:
appointment | billing | insurance | prescription | faq | medical_advice | access_issue
Message: ${state.message}`;
  const result = await llm.invoke(prompt);
  // Normalize the label so routing comparisons are exact.
  const intent = result.content.toString().trim().toLowerCase();
  return {
    intent,
    risk:
      intent === "medical_advice" || intent === "access_issue"
        ? ("high" as const)
        : ("low" as const),
  };
};

const faqNode = async (state: SupportStateType) => {
  const faqResult = await lookupFaq(state.message);
  return {
    answer: `Here is the relevant info from our support center:\n${faqResult}`,
  };
};

const handoffNode = async (state: SupportStateType) => {
  const reason =
    state.intent === "medical_advice"
      ? "User requested medical advice"
      : "Request requires restricted access or manual review";
  const handoffResult = await createHumanHandoff(reason);
  return {
    answer: `${handoffResult}\nA support specialist will follow up.`,
  };
};

const responseNode = async (state: SupportStateType) => {
  const prompt = `
You are a healthcare customer support assistant.
Rules:
- Do not provide medical advice.
- Do not invent policy details.
- Keep responses short and professional.
- If uncertain, escalate.
User message: ${state.message}
Support result: ${state.answer}
`;
  const result = await llm.invoke(prompt);
  return {
    answer: result.content.toString(),
  };
};
```
3) Wire the graph with conditional routing
This is the core LangGraph pattern. The graph classifies first, then routes based on risk and intent.
```typescript
const graphBuilder = new StateGraph(SupportState)
  .addNode("classify", classifyNode)
  .addNode("faq", faqNode)
  .addNode("handoff", handoffNode)
  .addNode("respond", responseNode)
  .addEdge(START, "classify")
  .addConditionalEdges("classify", (state) => {
    if (state.risk === "high") return "handoff";
    if (state.intent === "faq") return "faq";
    return "respond";
  })
  .addEdge("faq", "respond")
  .addEdge("handoff", END)
  .addEdge("respond", END);

const app = graphBuilder.compile();

const result = await app.invoke({
  message: "Can you tell me if my prior authorization was approved?",
});
console.log(result.answer);
```
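Because the conditional edge is just a function, you can extract it and unit-test the routing without compiling the graph or calling a model. A minimal sketch; `routeAfterClassify` is a name introduced here, mirroring the edge logic above:

```typescript
type Risk = "low" | "medium" | "high";

interface RouteState {
  intent: string;
  risk: Risk;
}

// Pure routing function: same decisions as the conditional edge,
// extracted so the routing table can be tested deterministically.
function routeAfterClassify(state: RouteState): "handoff" | "faq" | "respond" {
  if (state.risk === "high") return "handoff";
  if (state.intent === "faq") return "faq";
  return "respond";
}
```

Passing the extracted function to `addConditionalEdges` keeps the graph wiring and the tested logic identical.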
Why this pattern works
| Concern | Pattern |
|---|---|
| Compliance | Route sensitive requests away from free-form generation |
| Auditability | Each node is explicit and traceable |
| Safety | High-risk intents hit handoff before response generation |
| Maintainability | Tool logic stays isolated from language generation |
Production Considerations
- Log every node transition
  - Store `intent`, `risk`, route choice, tool calls, and final output.
  - In healthcare, audit trails matter for incident review and compliance checks.
- Keep PHI out of prompts unless required
  - Minimize what enters the model context.
  - If you must include PHI, redact first and use approved storage and retention policies.
- Respect data residency
  - Pin model endpoints and vector stores to approved regions.
  - Make sure tickets containing patient data do not cross jurisdictions that violate your policy stack.
- Add deterministic guardrails before generation
  - Block diagnosis requests, medication advice, identity verification shortcuts, and payment card leakage.
  - Use rules first; use the LLM second.
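The "rules first" guardrail can be as simple as a pattern table checked before any model call. A minimal sketch; the patterns below are illustrative assumptions, not a vetted rule set, and a real deployment would maintain a reviewed, versioned one.

```typescript
// Each entry names the rule so the audit log can record why a message was blocked.
const blockedPatterns: Array<{ rule: string; re: RegExp }> = [
  { rule: "medical_advice", re: /\b(diagnos|what medication|should i take|dosage)\b/i },
  { rule: "card_leakage", re: /\b(?:\d[ -]?){13,19}\b/ }, // rough PAN-like digit run
];

// Runs before the LLM: if any rule matches, the request never reaches generation.
function checkGuardrails(message: string): { allowed: boolean; rule?: string } {
  for (const { rule, re } of blockedPatterns) {
    if (re.test(message)) return { allowed: false, rule };
  }
  return { allowed: true };
}
```

Wired in as the first node, this gives you a deterministic block the model cannot talk its way around.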
Common Pitfalls
- Using one giant prompt for everything
  - This makes debugging painful and increases compliance risk.
  - Split classification, retrieval, response writing, and escalation into separate nodes.
- Letting the model decide on unsafe requests without hard rules
  - If a patient asks for medical advice or account access help, do not rely on "please be careful" prompting.
  - Enforce explicit routing to human review or restricted workflows.
- Storing raw PHI in graph state or logs
  - Developers often pass full transcripts through every node because it is convenient.
  - Redact sensitive fields early and keep logs scoped to what auditors actually need.
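Redacting early can be sketched as a single pass applied before anything enters graph state or logs. This is a minimal sketch assuming US-style SSN, phone, and email formats; production systems typically use a vetted PHI-detection service rather than hand-rolled regexes.

```typescript
// Replace common identifier shapes with placeholder tokens before logging.
// Patterns are illustrative assumptions, not a complete PHI taxonomy.
function redactPhi(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")            // SSN-like
    .replace(/\b\d{3}[- .]\d{3}[- .]\d{4}\b/g, "[PHONE]")  // phone-like
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]");   // email-like
}
```

Calling this once at the gateway means every downstream node and log line inherits the redacted text by default.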
If you want to extend this setup, add a retrieval node backed by an approved knowledge base and a separate verification node for identity-sensitive flows. That gives you a support agent that can answer routine questions while staying inside healthcare constraints.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.