How to Integrate LangGraph with Redis for Healthcare Startups
Combining LangGraph with Redis gives you a practical pattern for building stateful clinical agents that need memory, routing, and low-latency session storage. In a startup setting, this is the difference between a demo chatbot and an agent that can track patient context, persist workflow state, and recover cleanly across requests.
Prerequisites
- Python 3.10+
- A Redis instance running locally or in your cloud environment
- A LangGraph healthcare project set up with your graph nodes and state schema
- langgraph, redis, and your LLM provider SDK installed
- Environment variables configured:
  - REDIS_URL
  - OPENAI_API_KEY or equivalent model key
- A basic understanding of:
  - LangGraph state graphs
  - Redis key/value operations
  - Python async or sync request handling

Install the packages:

```shell
pip install langgraph redis langchain-openai pydantic
```
Integration Steps
1. Define the healthcare state you want to persist
For healthcare workflows, keep state explicit. You want fields like patient ID, symptom summary, triage status, and follow-up instructions.
```python
from typing import TypedDict, Optional

class HealthcareState(TypedDict):
    patient_id: str
    symptoms: str
    triage_level: Optional[str]
    summary: Optional[str]
    next_action: Optional[str]
```
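A small constructor helps keep new sessions consistent with this schema before they enter the graph. The `new_intake_state` helper below is a hypothetical addition, not part of LangGraph — a minimal sketch of validating required fields and making the optional fields explicitly empty:

```python
from typing import TypedDict, Optional

class HealthcareState(TypedDict):
    patient_id: str
    symptoms: str
    triage_level: Optional[str]
    summary: Optional[str]
    next_action: Optional[str]

def new_intake_state(patient_id: str, symptoms: str) -> HealthcareState:
    """Build a fresh state; required fields must be non-empty."""
    if not patient_id or not symptoms:
        raise ValueError("patient_id and symptoms are required")
    return {
        "patient_id": patient_id,
        "symptoms": symptoms,
        "triage_level": None,
        "summary": None,
        "next_action": None,
    }
```

Rejecting empty input here keeps malformed requests out of the graph rather than letting a node fail mid-workflow.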
2. Create your LangGraph workflow
Use StateGraph from LangGraph to route the patient through assessment and summarization nodes. This is where the clinical workflow logic lives.
```python
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def assess_node(state: HealthcareState):
    prompt = f"""
    You are a healthcare intake assistant.
    Assess this symptom report and return one of: low, medium, high.
    Symptoms: {state["symptoms"]}
    """
    result = llm.invoke(prompt)
    return {"triage_level": result.content.strip()}

def summarize_node(state: HealthcareState):
    prompt = f"""
    Summarize this intake for a clinician:
    Patient ID: {state["patient_id"]}
    Symptoms: {state["symptoms"]}
    Triage level: {state["triage_level"]}
    """
    result = llm.invoke(prompt)
    return {
        "summary": result.content.strip(),
        "next_action": "route_to_clinician" if state["triage_level"] == "high" else "self_care_instructions",
    }

graph = StateGraph(HealthcareState)
graph.add_node("assess", assess_node)
graph.add_node("summarize", summarize_node)
graph.set_entry_point("assess")
graph.add_edge("assess", "summarize")
graph.add_edge("summarize", END)

app = graph.compile()
```
3. Add Redis as your persistence layer
Redis stores session state so your agent can resume a conversation after a restart or across stateless API calls. For startups, this is usually enough before moving to a heavier database.
```python
import os
import json

import redis

r = redis.Redis.from_url(os.environ["REDIS_URL"], decode_responses=True)

def save_state(session_id: str, state: dict):
    # Expire sessions after one hour so stale clinical context is not retained.
    r.set(f"healthcare:{session_id}", json.dumps(state), ex=3600)

def load_state(session_id: str) -> dict | None:
    raw = r.get(f"healthcare:{session_id}")
    return json.loads(raw) if raw else None
```
4. Wire LangGraph execution to Redis-backed sessions
Load prior state from Redis, merge it with new input, run the graph, then persist the final output back to Redis.
```python
def run_intake(session_id: str, patient_id: str, symptoms: str):
    prior_state = load_state(session_id) or {}
    initial_state = {
        "patient_id": prior_state.get("patient_id", patient_id),
        "symptoms": symptoms,
        "triage_level": prior_state.get("triage_level"),
        "summary": prior_state.get("summary"),
        "next_action": prior_state.get("next_action"),
    }
    final_state = app.invoke(initial_state)
    save_state(session_id, final_state)
    return final_state

result = run_intake(
    session_id="sess_123",
    patient_id="pat_456",
    symptoms="Chest tightness and shortness of breath for 30 minutes",
)
print(result)
```
5. Use Redis TTLs and key design for safe startup operations
Healthcare workflows should not keep stale context forever. Set TTLs on session keys and namespace them by tenant or environment if you have multiple customers.
```python
def save_state(session_id: str, state: dict):
    key = f"healthcare:intake:{session_id}"
    r.set(key, json.dumps(state), ex=1800)  # 30-minute TTL

def load_state(session_id: str) -> dict | None:
    # Keep the key prefix in sync with save_state.
    raw = r.get(f"healthcare:intake:{session_id}")
    return json.loads(raw) if raw else None

def delete_state(session_id: str):
    r.delete(f"healthcare:intake:{session_id}")
```
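Namespacing by tenant or environment, as suggested above, can be sketched with a small key builder. The segment layout (`env:tenant:healthcare:intake:session`) is an assumption — adjust it to your own conventions:

```python
def session_key(env: str, tenant: str, session_id: str) -> str:
    """Build a namespaced key like 'prod:acme:healthcare:intake:sess_123'."""
    for part in (env, tenant, session_id):
        # Reject empty segments and embedded colons, which would corrupt the namespace.
        if not part or ":" in part:
            raise ValueError(f"invalid key segment: {part!r}")
    return f"{env}:{tenant}:healthcare:intake:{session_id}"

print(session_key("prod", "acme", "sess_123"))  # prod:acme:healthcare:intake:sess_123
```

Consistent namespacing also lets you scan or delete one tenant's sessions without touching another's.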
Testing the Integration
Run a simple end-to-end check that verifies both graph execution and Redis persistence.
```python
test_session = "test_session_001"
output = run_intake(
    session_id=test_session,
    patient_id="patient_1001",
    symptoms="Fever, cough, and fatigue for 3 days",
)
print("Triage:", output["triage_level"])
print("Next action:", output["next_action"])

cached = load_state(test_session)
print("Cached summary exists:", bool(cached and cached.get("summary")))
```

Expected output (the exact triage level depends on the model's judgment):

```
Triage: medium
Next action: self_care_instructions
Cached summary exists: True
```
If you want stricter verification in CI, assert that the Redis key exists after invocation and that summary is not empty.
Real-World Use Cases
- Clinical intake assistant
  - Collect symptoms from patients
  - Triage urgency with LangGraph routing logic
  - Persist conversation state in Redis between API calls
- Prior authorization workflow agent
  - Track required documents and missing fields
  - Store case progress per member or claim ID
  - Resume processing when new documents arrive
- Post-discharge follow-up agent
  - Remind patients about medication adherence or follow-up appointments
  - Keep recent discharge context in Redis with TTL-based expiry
  - Escalate high-risk responses to care teams using graph branches
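The escalation branch in that last use case maps onto LangGraph's conditional edges, where the routing predicate is plain Python. The `route_after_assess` function and the "escalate" node name below are illustrative assumptions, not part of the graph built earlier:

```python
def route_after_assess(state: dict) -> str:
    """Pick the next node name based on triage level (for use with add_conditional_edges)."""
    return "escalate" if state.get("triage_level") == "high" else "summarize"

# Wiring sketch, assuming an "escalate" node has been added to the graph:
# graph.add_conditional_edges("assess", route_after_assess,
#                             {"escalate": "escalate", "summarize": "summarize"})

print(route_after_assess({"triage_level": "high"}))    # escalate
print(route_after_assess({"triage_level": "medium"}))  # summarize
```

Keeping the predicate pure makes the escalation logic trivially unit-testable without invoking the LLM.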
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.