How to Integrate LangGraph for healthcare with LangSmith for multi-agent systems

By Cyprian Aarons · Updated 2026-04-22
Tags: langgraph-for-healthcare, langsmith, multi-agent-systems

Combining LangGraph for healthcare with LangSmith gives you a practical way to build regulated multi-agent workflows that are observable, testable, and easier to debug. In healthcare, that usually means routing patient intake, prior auth, triage, and care coordination through separate agents while keeping every decision traceable for review.

Prerequisites

  • Python 3.10+
  • langgraph installed
  • langsmith installed
  • A LangSmith API key
  • Access to your healthcare graph implementation in LangGraph
  • Environment variables configured:
    • LANGSMITH_API_KEY
    • LANGSMITH_TRACING=true
    • LANGSMITH_PROJECT=healthcare-multi-agent
  • A backend model provider configured for your agents, such as OpenAI or Azure OpenAI
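The environment variables listed above can be exported in your shell before starting the app (the API key value below is a placeholder):

```shell
export LANGSMITH_API_KEY="<your-langsmith-api-key>"
export LANGSMITH_TRACING=true
export LANGSMITH_PROJECT=healthcare-multi-agent
```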

Install the packages:

pip install langgraph langsmith langchain-openai

Integration Steps

  1. Define your healthcare agents as LangGraph nodes

    Start by modeling the workflow as a graph. In healthcare systems, this usually means separating intake, policy checks, escalation, and summarization into distinct nodes.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class HealthcareState(TypedDict):
    patient_message: str
    triage_result: str
    prior_auth_result: str
    summary: str

def intake_agent(state: HealthcareState):
    message = state["patient_message"]
    if "chest pain" in message.lower():
        return {"triage_result": "urgent_escalation"}
    return {"triage_result": "routine_review"}

def prior_auth_agent(state: HealthcareState):
    if state["triage_result"] == "urgent_escalation":
        return {"prior_auth_result": "skip_prior_auth_and_escalate"}
    return {"prior_auth_result": "check_coverage"}

def summary_agent(state: HealthcareState):
    return {
        "summary": (
            f"Triage={state['triage_result']}, "
            f"PriorAuth={state['prior_auth_result']}"
        )
    }

graph = StateGraph(HealthcareState)
graph.add_node("intake_agent", intake_agent)
graph.add_node("prior_auth_agent", prior_auth_agent)
graph.add_node("summary_agent", summary_agent)

graph.add_edge(START, "intake_agent")
graph.add_edge("intake_agent", "prior_auth_agent")
graph.add_edge("prior_auth_agent", "summary_agent")
graph.add_edge("summary_agent", END)

app = graph.compile()
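Because the three node functions are plain Python, LangGraph's state-merge behavior (each node returns a partial update that is merged into the shared state) can be mimicked without the framework to sanity-check node logic. This is a standalone sketch that restates minimal stand-ins for the nodes above:

```python
# Minimal stand-ins for the three nodes defined above.
def intake_agent(state):
    # Flag chest-pain messages for urgent escalation.
    urgent = "chest pain" in state["patient_message"].lower()
    return {"triage_result": "urgent_escalation" if urgent else "routine_review"}

def prior_auth_agent(state):
    if state["triage_result"] == "urgent_escalation":
        return {"prior_auth_result": "skip_prior_auth_and_escalate"}
    return {"prior_auth_result": "check_coverage"}

def summary_agent(state):
    return {"summary": f"Triage={state['triage_result']}, "
                       f"PriorAuth={state['prior_auth_result']}"}

def run_linear(state):
    # Mimic the graph: merge each node's partial update into the state.
    for node in (intake_agent, prior_auth_agent, summary_agent):
        state = {**state, **node(state)}
    return state

result = run_linear({"patient_message": "mild headache"})
# result["triage_result"] == "routine_review"
```

This is only a sanity check of the node logic; the compiled graph is what you run in production, since it is what LangSmith traces.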

  2. Enable LangSmith tracing for every node execution

    LangSmith works best when tracing is enabled at the process level. This gives you run history across all agents without adding custom logging everywhere.

import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "healthcare-multi-agent"
# LANGSMITH_API_KEY should already be set in the environment; re-assigning
# os.getenv(...) to os.environ would raise a TypeError if the key is missing.
assert os.environ.get("LANGSMITH_API_KEY"), "LANGSMITH_API_KEY is not set"

If you’re using LangChain models inside nodes, LangSmith will automatically capture traces when the environment variables are set correctly.
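Before relying on automatic capture, it can help to verify the configuration up front. The helper below is a hypothetical pre-flight check, not part of LangSmith:

```python
import os

def tracing_preflight(env=None):
    """Return a list of problems with the LangSmith tracing configuration."""
    env = os.environ if env is None else env
    problems = []
    if env.get("LANGSMITH_TRACING", "").lower() != "true":
        problems.append("LANGSMITH_TRACING must be 'true'")
    for var in ("LANGSMITH_PROJECT", "LANGSMITH_API_KEY"):
        if not env.get(var):
            problems.append(f"{var} is not set")
    return problems
```

An empty list means tracing is configured; anything else can be logged or raised before the workflow runs, which is cheaper than discovering missing traces after an incident.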

  3. Wrap agent calls with LangChain models and trace them in LangSmith

    In production healthcare flows, your nodes usually call LLMs or tools. Use ChatOpenAI and pass a run_name in the invocation config so each node shows up as a named run in LangSmith.

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def clinical_summarizer(state: HealthcareState):
    # prior_auth_result may be absent if routing skipped the prior-auth node.
    prompt = (
        "Summarize this patient workflow for a nurse reviewer:\n"
        f"Message: {state['patient_message']}\n"
        f"Triage: {state['triage_result']}\n"
        f"Prior auth: {state.get('prior_auth_result', 'skipped')}"
    )

    response = llm.invoke(
        [HumanMessage(content=prompt)],
        config={"run_name": "clinical_summarizer"}
    )

    return {"summary": response.content}

You can swap this into the graph by replacing summary_agent with clinical_summarizer.

  4. Add conditional routing for multi-agent behavior

    Multi-agent systems need deterministic routing. Use conditional edges so one agent can escalate to another based on clinical risk or administrative status.

def route_after_intake(state: HealthcareState):
    if state["triage_result"] == "urgent_escalation":
        return "summary_agent"
    return "prior_auth_agent"

graph = StateGraph(HealthcareState)
graph.add_node("intake_agent", intake_agent)
graph.add_node("prior_auth_agent", prior_auth_agent)
graph.add_node("summary_agent", clinical_summarizer)

graph.add_edge(START, "intake_agent")
graph.add_conditional_edges(
    "intake_agent",
    route_after_intake,
    {
        "prior_auth_agent": "prior_auth_agent",
        "summary_agent": "summary_agent",
    },
)
graph.add_edge("prior_auth_agent", END)
graph.add_edge("summary_agent", END)

app = graph.compile()

This pattern is useful when one agent handles medical urgency and another handles insurance logic.
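Because the routing function is plain Python, it can be unit-tested without compiling the graph. This snippet restates route_after_intake so it runs standalone:

```python
def route_after_intake(state: dict) -> str:
    # Mirrors the graph's router: urgent cases bypass the prior-auth node.
    if state["triage_result"] == "urgent_escalation":
        return "summary_agent"
    return "prior_auth_agent"

assert route_after_intake({"triage_result": "urgent_escalation"}) == "summary_agent"
assert route_after_intake({"triage_result": "routine_review"}) == "prior_auth_agent"
```

Keeping routers side-effect-free like this makes the escalation policy easy to review, which matters when clinical stakeholders need to sign off on routing rules.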

  5. Run the graph with tracing metadata for auditability

    Pass metadata into the invocation so each run is attributable to a tenant, case ID, or encounter ID. That matters when compliance teams need to reconstruct decisions later.

result = app.invoke(
    {"patient_message": "Patient reports chest pain and shortness of breath"},
    config={
        "run_name": "healthcare_multi_agent_workflow",
        "metadata": {
            "tenant_id": "hospital_001",
            "encounter_id": "enc_88421",
            "workflow_type": "triage_plus_prior_auth"
        }
    }
)

print(result)
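To keep audit fields consistent across call sites, the config construction can be centralized. The helper below is a hypothetical convenience wrapper, not a LangSmith API; its keys mirror the invocation above:

```python
def audit_config(run_name: str, tenant_id: str,
                 encounter_id: str, workflow_type: str) -> dict:
    """Build an invocation config carrying the audit metadata fields."""
    return {
        "run_name": run_name,
        "metadata": {
            "tenant_id": tenant_id,
            "encounter_id": encounter_id,
            "workflow_type": workflow_type,
        },
    }

cfg = audit_config("healthcare_multi_agent_workflow",
                   "hospital_001", "enc_88421", "triage_plus_prior_auth")
```

Every invocation then passes audit_config(...) as config=, so compliance fields cannot silently drift between workflows.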

Testing the Integration

Use a simple input that should trigger escalation and confirm the graph executes while producing a trace in LangSmith. The expected output below assumes the linear graph from the first step, where urgent cases still pass through the prior-auth node; with the conditional graph, the urgent path skips prior auth entirely and the summary comes from the LLM-backed node.

test_input = {
    "patient_message": "I have chest pain after walking upstairs."
}

result = app.invoke(
    test_input,
    config={
        "run_name": "integration_test_healthcare_graph",
        "metadata": {"encounter_id": "test_enc_001"}
    }
)

print(result["triage_result"])
print(result.get("prior_auth_result"))
print(result.get("summary"))

Expected output:

urgent_escalation
skip_prior_auth_and_escalate
Triage=urgent_escalation, PriorAuth=skip_prior_auth_and_escalate

In LangSmith, you should see:

  • one parent run for the workflow
  • child runs for each node invocation
  • metadata attached to the run
  • prompt/response traces for any LLM-backed node

Real-World Use Cases

  • Patient intake orchestration

    • One agent extracts symptoms.
    • Another agent checks urgency.
    • A third agent drafts a nurse-facing summary with full traceability in LangSmith.
  • Prior authorization workflows

    • Route clinical evidence extraction to one node.
    • Route payer policy lookup to another.
    • Track every decision path so denials and escalations are auditable.
  • Care coordination assistants

    • Use multiple agents for scheduling, referral generation, medication reconciliation, and follow-up reminders.
    • Debug failures quickly because every step is visible in LangSmith traces.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
