How to Integrate LangGraph for healthcare with LangSmith for startups

By Cyprian Aarons · Updated 2026-04-22
Tags: langgraph-for-healthcare, langsmith, startups

If you’re building healthcare agents for a startup, you need two things at the same time: workflow control and traceability. LangGraph gives you deterministic agent orchestration for clinical and admin flows, while LangSmith gives you observability, debugging, and evaluation so you can prove what the system did and why.

The useful pattern is simple: run your healthcare workflow in LangGraph, instrument every step with LangSmith, then use traces to debug failures, measure latency, and validate outputs before shipping to production.

Prerequisites

  • Python 3.10+
  • A LangChain/LangGraph project installed
  • Access to langgraph and langsmith packages
  • Environment variables configured:
    • LANGSMITH_API_KEY
    • LANGSMITH_TRACING=true
    • LANGSMITH_PROJECT=healthcare-startup-agent
  • A model provider configured through LangChain, such as OpenAI or Anthropic
  • Basic familiarity with:
    • StateGraph
    • RunnableConfig
    • LangSmith tracing decorators or client calls

Install the packages:

pip install langgraph langchain langsmith langchain-openai

Integration Steps

  1. Set up environment variables for tracing

LangSmith traces LangGraph runs automatically when tracing is enabled. For a startup team, this is the fastest way to get visibility into every node execution without adding custom logging everywhere.

import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "lsv2-your-api-key"
os.environ["LANGSMITH_PROJECT"] = "healthcare-startup-agent"
os.environ["OPENAI_API_KEY"] = "sk-your-openai-key"
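Tracing fails silently when these variables are missing, so it's worth failing fast at startup instead of discovering untraced runs later. A minimal check (the helper name is my own, not part of either SDK):

```python
import os

# Variables LangSmith reads to enable tracing; all must be present and non-empty.
REQUIRED_VARS = ("LANGSMITH_TRACING", "LANGSMITH_API_KEY", "LANGSMITH_PROJECT")

def tracing_ready() -> bool:
    return all(os.environ.get(var) for var in REQUIRED_VARS)
```

Call it once at application startup and refuse to serve traffic if it returns False.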

  2. Build a healthcare workflow with LangGraph

A good healthcare agent flow usually has explicit steps: intake, triage, policy check, and response drafting. Use StateGraph so each step is isolated and easy to inspect in LangSmith.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

class PatientState(TypedDict):
    symptoms: str
    triage_level: str
    response: str

def intake_node(state: PatientState):
    # Pass-through for now; a real intake step would validate and normalize input.
    return state

def triage_node(state: PatientState):
    prompt = f"Classify urgency for symptoms: {state['symptoms']}. Return only one of: low, medium, high."
    result = llm.invoke(prompt)
    return {"triage_level": result.content.strip().lower()}

def response_node(state: PatientState):
    prompt = (
        f"Symptoms: {state['symptoms']}\n"
        f"Triage: {state['triage_level']}\n"
        "Write a safe patient-facing next step message. Avoid diagnosis."
    )
    result = llm.invoke(prompt)
    return {"response": result.content}

graph = StateGraph(PatientState)
graph.add_node("intake", intake_node)
graph.add_node("triage", triage_node)
graph.add_node("response", response_node)

graph.add_edge(START, "intake")
graph.add_edge("intake", "triage")
graph.add_edge("triage", "response")
graph.add_edge("response", END)

app = graph.compile()
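One caveat with triage_node: the model will not always return exactly low, medium, or high, even with a constrained prompt. In a healthcare flow it's cheap insurance to normalize the label and fail closed, treating anything unrecognized as high. This helper is a sketch of my own, not part of LangGraph:

```python
VALID_LEVELS = {"low", "medium", "high"}

def normalize_triage(raw: str) -> str:
    # Strip whitespace and trailing punctuation the model sometimes appends.
    label = raw.strip().lower().rstrip(".!")
    # Fail closed: an unrecognized label escalates rather than downgrades.
    return label if label in VALID_LEVELS else "high"
```

Inside `triage_node`, call `normalize_triage(result.content)` instead of trusting `strip().lower()` alone.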

  3. Attach LangSmith tracing to the graph run

LangGraph runs are traced automatically once the environment variables are set, but in production I still pass metadata through RunnableConfig. That gives you tenant- and session-level grouping in LangSmith without stuffing PHI into trace names.

from langchain_core.runnables import RunnableConfig

config = RunnableConfig(
    tags=["healthcare", "startup", "triage"],
    metadata={
        "tenant": "clinic-alpha",
        "workflow": "patient-intake-v1",
        "source": "web-app"
    }
)

result = app.invoke(
    {"symptoms": "chest tightness and shortness of breath"},
    config=config
)

print(result)
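If you need per-patient grouping in those metadata fields, hash the identifier first so no raw PHI lands in LangSmith. A sketch (the helper name and truncation length are my own choices; for guessable identifiers, prefer a keyed hash such as HMAC with a server-side secret):

```python
import hashlib

def safe_session_tag(patient_identifier: str) -> str:
    # One-way hash: traces can be grouped per session without exposing the identifier.
    return hashlib.sha256(patient_identifier.encode("utf-8")).hexdigest()[:12]
```

Then pass `{"session": safe_session_tag(patient_id)}` alongside the other metadata in the config.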

  4. Use LangSmith client calls for explicit trace control

If you want finer-grained tracking around business-critical actions like escalation or handoff to a nurse queue, use the LangSmith SDK directly. This is useful when part of your system sits outside the graph.

import uuid

from langsmith import Client

client = Client()

# create_run does not reliably return the run object, so supply the ID yourself.
run_id = uuid.uuid4()

client.create_run(
    id=run_id,
    name="manual-escalation-check",
    run_type="chain",
    inputs={"triage_level": "high"},
    project_name="healthcare-startup-agent",
)

client.update_run(
    run_id,
    outputs={"action": "escalate_to_clinician"},
)

  5. Add a guardrail node for startup-grade safety

Healthcare startups need hard stops around unsafe output. Put validation inside the graph so bad generations fail before they reach users, and show up clearly in LangSmith traces.

def safety_check_node(state: PatientState):
    if state["triage_level"] == "high":
        return {
            "response": (
                "This may require urgent medical attention. "
                "Please contact emergency services or go to the nearest ER now."
            )
        }
    return state

def route_after_safety(state: PatientState):
    # High-urgency cases already carry the canned escalation message; skip drafting.
    return END if state["triage_level"] == "high" else "response"

graph2 = StateGraph(PatientState)
graph2.add_node("intake", intake_node)
graph2.add_node("triage", triage_node)
graph2.add_node("safety_check", safety_check_node)
graph2.add_node("response", response_node)

graph2.add_edge(START, "intake")
graph2.add_edge("intake", "triage")
graph2.add_edge("triage", "safety_check")
graph2.add_conditional_edges("safety_check", route_after_safety)
graph2.add_edge("response", END)

app2 = graph2.compile()

Testing the Integration

Run a real invocation and confirm it appears in LangSmith under your project.

test_input = {"symptoms": "persistent fever and mild headache"}
output = app.invoke(test_input, config=config)

print("Triage:", output.get("triage_level"))
print("Response:", output.get("response"))

Example output (the exact wording and triage label vary between model runs):

Triage: low
Response: Please monitor your symptoms and consider booking a clinician review if they worsen.

In LangSmith, you should see:

  • A project named healthcare-startup-agent
  • A trace for the graph invocation
  • Node-level spans for intake, triage, and response
  • Inputs/outputs attached to each step

Real-World Use Cases

  • Patient intake assistant

    • Collect symptoms, classify urgency, and route patients to self-care or escalation paths.
    • Use LangSmith traces to review false positives in triage decisions.
  • Prior authorization copilot

    • Orchestrate document collection, policy checks, and status updates with LangGraph.
    • Track latency per step in LangSmith so ops teams can spot bottlenecks.
  • Clinical support QA agent

    • Run retrieval-backed answers through a controlled graph with validation nodes.
    • Use LangSmith evaluations to compare prompt versions before rollout.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
