How to Integrate LangGraph with Redis for Healthcare AI Agents
Combining LangGraph with Redis gives you the two things every production healthcare agent needs: stateful orchestration and fast shared memory. LangGraph handles multi-step clinical workflows, while Redis provides low-latency persistence, session state, caching, and event coordination across agent runs.
This setup is useful when your agent must remember patient context across turns, route between triage and documentation steps, and survive restarts without losing workflow state.
Prerequisites
- Python 3.10+
- A running Redis instance
  - Local: `redis-server`
  - Docker: `docker run -p 6379:6379 redis:7`
- Installed packages: `langgraph`, `redis`, `langchain-core`
- Access to your healthcare LLM/tooling stack
- A clear data policy for PHI/PII before storing anything in Redis
Install the dependencies:
```shell
pip install langgraph redis langchain-core
```
Integration Steps
1. Set up a Redis client and verify connectivity.
Use Redis as the shared backing store for session metadata, checkpoints, or cached agent artifacts.
```python
import os

import redis

REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379/0")
r = redis.Redis.from_url(REDIS_URL, decode_responses=True)
print("PING:", r.ping())
```
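Given the PHI/PII policy called out in the prerequisites, it can help to centralize key naming and retention in one place before any node touches Redis. This is a hypothetical helper: the `patient:<id>:<field>` key pattern matches the examples below, but the 24-hour TTL is an assumed policy, not a Redis or LangGraph default.

```python
# Assumed 24-hour retention window for PHI; adjust to your data policy.
PHI_TTL_SECONDS = 24 * 60 * 60

def patient_key(patient_id: str, field: str) -> str:
    """Build a namespaced Redis key like 'patient:p-1042:context'."""
    return f"patient:{patient_id}:{field}"
```

Passing `ex=PHI_TTL_SECONDS` to `r.set(...)` would then expire patient data automatically instead of relying on manual cleanup jobs.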
2. Define your LangGraph state model for the healthcare workflow.
Keep the graph state small and explicit. In healthcare systems, that usually means patient identifiers, triage status, clinical notes, and routing flags.
```python
from typing import TypedDict, Optional

from langgraph.graph import StateGraph, START, END

class HealthcareState(TypedDict):
    patient_id: str
    symptoms: str
    triage_level: Optional[str]
    note: Optional[str]
    routed_to: Optional[str]
```
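Every invocation needs the same None-initialized fields, so a small factory keeps callers from hand-writing them. A minimal sketch; `initial_state` is a name introduced here for illustration, not part of LangGraph:

```python
from typing import TypedDict, Optional

class HealthcareState(TypedDict):
    patient_id: str
    symptoms: str
    triage_level: Optional[str]
    note: Optional[str]
    routed_to: Optional[str]

def initial_state(patient_id: str, symptoms: str) -> HealthcareState:
    """Fresh workflow state with all derived fields unset."""
    return {
        "patient_id": patient_id,
        "symptoms": symptoms,
        "triage_level": None,
        "note": None,
        "routed_to": None,
    }
```

This also gives you one place to add validation (for example, rejecting an empty `patient_id`) before a run starts.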
3. Add nodes that read from and write to Redis.
A common pattern is to cache prior context in Redis before each step and persist the latest workflow result after each step.
```python
import json

def load_context(state: HealthcareState):
    key = f"patient:{state['patient_id']}:context"
    cached = r.get(key)
    if cached:
        ctx = json.loads(cached)
        state["note"] = ctx.get("note")
        state["triage_level"] = ctx.get("triage_level")
    return state

def triage_node(state: HealthcareState):
    symptoms = state["symptoms"].lower()
    if any(term in symptoms for term in ["chest pain", "shortness of breath", "unconscious"]):
        level = "urgent"
        route = "escalation"
    else:
        level = "routine"
        route = "documentation"
    state["triage_level"] = level
    state["routed_to"] = route
    r.set(
        f"patient:{state['patient_id']}:context",
        json.dumps({"triage_level": level, "note": state.get("note")}),
    )
    return state

def documentation_node(state: HealthcareState):
    note = f"Triage completed for {state['patient_id']} with level {state['triage_level']}."
    state["note"] = note
    r.set(f"patient:{state['patient_id']}:note", note)
    return state
```
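Because these nodes only call `get` and `set` on the client, you can unit-test them against an in-memory stand-in instead of a live server. A sketch of that idea, with the keyword rule from `triage_node` isolated into its own function (`FakeRedis` and `classify` are illustrative names, not library APIs):

```python
import json

class FakeRedis:
    """In-memory stand-in implementing just the get/set calls the nodes use."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

URGENT_TERMS = ["chest pain", "shortness of breath", "unconscious"]

def classify(symptoms: str) -> str:
    """Same keyword rule as triage_node, isolated for testing."""
    s = symptoms.lower()
    return "urgent" if any(term in s for term in URGENT_TERMS) else "routine"

fake = FakeRedis()
fake.set("patient:p-1:context", json.dumps({"triage_level": classify("Chest pain at rest")}))
print(json.loads(fake.get("patient:p-1:context")))  # {'triage_level': 'urgent'}
```

Injecting the client (rather than reading a module-level `r`) would make the real nodes swappable the same way.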
4. Wire the nodes into a LangGraph workflow.
Use StateGraph to define the execution path. For healthcare agents, keep escalation logic explicit instead of burying it inside prompts.
```python
workflow = StateGraph(HealthcareState)
workflow.add_node("load_context", load_context)
workflow.add_node("triage", triage_node)
workflow.add_node("documentation", documentation_node)

workflow.add_edge(START, "load_context")
workflow.add_edge("load_context", "triage")

def route_after_triage(state: HealthcareState):
    return state["routed_to"]

workflow.add_conditional_edges(
    "triage",
    route_after_triage,
    {
        "documentation": "documentation",
        "escalation": END,
    },
)

workflow.add_edge("documentation", END)
app = workflow.compile()
```
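Because the routing function is plain Python, the escalation table can be checked in isolation before the graph ever runs. A minimal sketch of that check; the route labels mirror the conditional-edge mapping above, but `ROUTE_TABLE` and `next_node` are illustrative, not LangGraph API:

```python
ROUTE_TABLE = {"documentation": "documentation", "escalation": "END"}

def next_node(routed_to: str) -> str:
    """Resolve a route label, failing loudly on anything unmapped."""
    if routed_to not in ROUTE_TABLE:
        raise ValueError(f"unmapped route: {routed_to}")
    return ROUTE_TABLE[routed_to]
```

Failing loudly on an unknown label matters in healthcare workflows: a silent default route is exactly the kind of implicit escalation logic this step warns against.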
5. Run the graph and persist execution metadata in Redis.
If you need replayability or audit trails, store run IDs and timestamps alongside clinical workflow outputs.
```python
result = app.invoke({
    "patient_id": "p-1042",
    "symptoms": "mild headache and fatigue",
    "triage_level": None,
    "note": None,
    "routed_to": None,
})

r.hset(
    f"patient:{result['patient_id']}:audit",
    mapping={
        "last_triage_level": result["triage_level"],
        "last_note": result["note"],
        "last_route": result["routed_to"],
    },
)
print(result)
```
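One hedged sketch of the run-ID-and-timestamp idea is to build the audit mapping in a helper before calling `hset`. `audit_fields` and its field names are assumptions introduced here, not a LangGraph or Redis convention. Note that redis-py's `hset` rejects `None` values, so unset fields (e.g. `note` on an escalated run) are coerced to empty strings:

```python
import time
import uuid

def audit_fields(result: dict) -> dict:
    """Flatten a workflow result into string-only fields safe for r.hset(...)."""
    return {
        "run_id": str(uuid.uuid4()),
        "ts": str(int(time.time())),
        "last_triage_level": result.get("triage_level") or "",
        "last_note": result.get("note") or "",
        "last_route": result.get("routed_to") or "",
    }
```

Usage would then be `r.hset(f"patient:{result['patient_id']}:audit", mapping=audit_fields(result))`.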
Testing the Integration
Run a simple verification that checks both graph execution and Redis persistence:
```python
test_state = {
    "patient_id": "p-9001",
    "symptoms": "shortness of breath after walking upstairs",
    "triage_level": None,
    "note": None,
    "routed_to": None,
}

out = app.invoke(test_state)
print("GRAPH OUTPUT:", out)
print("REDIS CONTEXT:", r.get("patient:p-9001:context"))
print("REDIS NOTE:", r.get("patient:p-9001:note"))
```
Expected output:
```text
GRAPH OUTPUT: {'patient_id': 'p-9001', 'symptoms': 'shortness of breath after walking upstairs', 'triage_level': 'urgent', 'note': None, 'routed_to': 'escalation'}
REDIS CONTEXT: {"triage_level": "urgent", "note": null}
REDIS NOTE: None
```
For a routine case, you should see a note written to Redis after the documentation node runs.
Real-World Use Cases
Clinical intake routing
- Triage incoming patient messages into urgent vs. routine queues.
- Use Redis to keep conversation context across multiple agent turns.

Care coordination assistants
- Orchestrate steps like symptom capture, provider lookup, appointment scheduling, and follow-up reminders.
- Store intermediate workflow artifacts in Redis so any worker can resume the session.

Audit-friendly documentation agents
- Generate structured visit summaries from agent steps.
- Persist each stage of the graph in Redis for traceability and replay during review.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.