How to Integrate LangChain with PostgreSQL for Healthcare Multi-Agent Systems
Combining LangChain for healthcare with PostgreSQL gives you a clean way to build agent systems that can reason over clinical workflows while keeping state, audit trails, and patient-context data in a relational store. In practice, this is what you want for triage assistants, care coordination agents, prior-auth automation, and multi-agent systems where one agent retrieves policy or patient context and another agent drafts the next action.
Prerequisites
- Python 3.10+
- A running PostgreSQL 14+ instance
- psycopg2-binary or psycopg installed
- LangChain packages for your healthcare stack:
  - langchain
  - langchain-openai or your model provider
  - any healthcare-specific LangChain integration you use for clinical tools/retrieval
- A valid LLM API key in environment variables
- A PostgreSQL database/schema ready for:
  - agent state
  - conversation history
  - patient-safe metadata
  - audit logs
Install the core dependencies:
pip install langchain langchain-openai psycopg2-binary sqlalchemy python-dotenv
Integration Steps
1) Set up PostgreSQL connection details
Start by centralizing your database config. For multi-agent systems, keep separate tables for conversations, tasks, and tool outputs so agents don’t overwrite each other’s state.
import os
from dotenv import load_dotenv

load_dotenv()

POSTGRES_URI = os.getenv(
    "POSTGRES_URI",
    "postgresql+psycopg2://agent_user:agent_pass@localhost:5432/health_agents"
)
If you’re using SQLAlchemy-backed persistence, this URI is enough to create engines and session factories across your agents.
2) Create the PostgreSQL schema for agent memory and audit logs
For healthcare workflows, don’t store raw PHI unless you have a clear compliance story. Store structured references, timestamps, agent names, and redacted summaries.
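One lightweight guard is to scrub identifier-shaped text before anything is written. A minimal sketch, with the caveat that these patterns are illustrative only and nowhere near a complete de-identification pass:

```python
import re

# Illustrative patterns only -- a production system should use a dedicated
# PHI de-identification service; these catch a few obvious identifier shapes.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace identifier-shaped substrings with placeholder tokens."""
    for pattern, token in REDACTION_PATTERNS:
        text = pattern.sub(token, text)
    return text
```

Calling a scrubber like this at every write path keeps redaction policy in one place instead of trusting each agent to remember it.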
from sqlalchemy import create_engine, text

engine = create_engine(POSTGRES_URI)

schema_sql = """
CREATE TABLE IF NOT EXISTS agent_messages (
    id SERIAL PRIMARY KEY,
    session_id VARCHAR(64) NOT NULL,
    agent_name VARCHAR(64) NOT NULL,
    role VARCHAR(32) NOT NULL,
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS agent_tasks (
    id SERIAL PRIMARY KEY,
    session_id VARCHAR(64) NOT NULL,
    task_name VARCHAR(128) NOT NULL,
    status VARCHAR(32) NOT NULL DEFAULT 'pending',
    payload JSONB,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);
"""

with engine.begin() as conn:
    conn.execute(text(schema_sql))
This gives each agent a durable trail. In production, add row-level security and encryption at rest if the table contains regulated data.
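As a sketch of what row-level security could look like here, the policy below restricts reads to the session a connection has declared. The policy name and the `app.current_session_id` setting are illustrative; the application would set that value per connection with `SET`:

```sql
-- Illustrative: scope each connection to one session's rows.
ALTER TABLE agent_messages ENABLE ROW LEVEL SECURITY;

CREATE POLICY agent_session_isolation ON agent_messages
    USING (session_id = current_setting('app.current_session_id'));
```

Note that RLS does not apply to superusers or the table owner by default, so run agents under a dedicated, unprivileged role.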
3) Wire LangChain into the workflow and persist messages to PostgreSQL
Use LangChain’s chat model interface to generate responses, then write both the user input and model output into PostgreSQL. For multi-agent systems, this pattern becomes the shared memory layer between agents.
from langchain_openai import ChatOpenAI
from sqlalchemy import text

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def save_message(session_id: str, agent_name: str, role: str, content: str):
    with engine.begin() as conn:
        conn.execute(
            text("""
                INSERT INTO agent_messages (session_id, agent_name, role, content)
                VALUES (:session_id, :agent_name, :role, :content)
            """),
            {
                "session_id": session_id,
                "agent_name": agent_name,
                "role": role,
                "content": content,
            }
        )

def get_session_messages(session_id: str):
    with engine.begin() as conn:
        rows = conn.execute(
            text("""
                SELECT role, content
                FROM agent_messages
                WHERE session_id = :session_id
                ORDER BY created_at ASC
            """),
            {"session_id": session_id}
        ).fetchall()
    return rows

session_id = "triage-001"
user_input = "Patient reports chest pain and shortness of breath."

save_message(session_id, "triage_agent", "user", user_input)

messages = get_session_messages(session_id)
prompt = "\n".join(f"{r.role}: {r.content}" for r in messages)

response = llm.invoke(prompt)
save_message(session_id, "triage_agent", "assistant", response.content)
print(response.content)
This is the simplest reliable pattern: LangChain handles generation; PostgreSQL handles persistence. It scales better than keeping state in process memory when multiple agents need the same context.
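If you would rather hand the model structured chat messages than one joined string, the stored rows map directly onto the (role, content) tuples LangChain chat models accept. A small sketch, assuming the DB roles are 'user' and 'assistant' as above:

```python
# Map stored DB roles onto role strings LangChain chat models accept.
ROLE_MAP = {"user": "human", "assistant": "ai", "system": "system"}

def rows_to_chat_messages(rows):
    """Convert (role, content) DB rows into (role, content) tuples
    suitable for passing to chat_model.invoke(...)."""
    return [(ROLE_MAP.get(role, "human"), content) for role, content in rows]
```

With this in place, `llm.invoke(rows_to_chat_messages(messages))` replaces the string-joined prompt.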
4) Add a second agent that reads shared state from PostgreSQL
A multi-agent system usually needs one agent to classify intent and another to take action. Here’s a second agent that reads task state from Postgres and produces an escalation recommendation.
import json

def create_task(session_id: str, task_name: str, payload: dict):
    with engine.begin() as conn:
        conn.execute(
            text("""
                INSERT INTO agent_tasks (session_id, task_name, status, payload)
                VALUES (:session_id, :task_name, 'pending', CAST(:payload AS JSONB))
            """),
            {
                "session_id": session_id,
                "task_name": task_name,
                "payload": json.dumps(payload),
            }
        )

def fetch_pending_tasks(session_id: str):
    with engine.begin() as conn:
        return conn.execute(
            text("""
                SELECT id, task_name, payload
                FROM agent_tasks
                WHERE session_id = :session_id AND status = 'pending'
                ORDER BY created_at ASC
            """),
            {"session_id": session_id}
        ).fetchall()

create_task(
    session_id="triage-001",
    task_name="escalate_to_nurse",
    payload={"symptoms": ["chest pain", "shortness of breath"], "priority": "high"}
)

tasks = fetch_pending_tasks("triage-001")
for task in tasks:
    task_prompt = f"""
    You are a healthcare escalation agent.
    Task: {task.task_name}
    Payload: {task.payload}
    Return a concise recommendation.
    """
    result = llm.invoke(task_prompt)
    print(result.content)
That pattern keeps each agent focused on one job while PostgreSQL acts as the shared coordination layer.
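One piece the step above leaves open is closing the loop: marking a task done once an agent has acted on it, so other agents stop picking it up. The pattern is a single status UPDATE keyed by task id. A standalone sketch using stdlib sqlite3 so it runs without a Postgres instance (in your system the same statement would go through the SQLAlchemy engine, and would also bump updated_at):

```python
import sqlite3

# In-memory stand-in for the agent_tasks table (JSONB and timestamps
# omitted for brevity -- this only demonstrates the status transition).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE agent_tasks (
        id INTEGER PRIMARY KEY,
        session_id TEXT NOT NULL,
        task_name TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'pending'
    )
""")
conn.execute(
    "INSERT INTO agent_tasks (session_id, task_name) VALUES (?, ?)",
    ("triage-001", "escalate_to_nurse"),
)

def complete_task(conn, task_id: int):
    """Mark a task done so it drops out of fetch_pending_tasks results."""
    conn.execute(
        "UPDATE agent_tasks SET status = 'done' WHERE id = ?",
        (task_id,),
    )

complete_task(conn, 1)
status = conn.execute(
    "SELECT status FROM agent_tasks WHERE id = 1"
).fetchone()[0]
```

In the Postgres version, set `updated_at = NOW()` in the same UPDATE so the audit trail reflects when the task was resolved.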
5) Use LangChain tool calling with PostgreSQL-backed tools
If your workflow uses tool-calling agents, wrap Postgres operations as tools so the model can query or update state explicitly.
from langchain_core.tools import tool

@tool
def get_latest_patient_summary(session_id: str) -> str:
    """Fetch the latest redacted patient summary from PostgreSQL."""
    with engine.begin() as conn:
        row = conn.execute(
            text("""
                SELECT content
                FROM agent_messages
                WHERE session_id = :session_id AND role IN ('user', 'assistant')
                ORDER BY created_at DESC
                LIMIT 1
            """),
            {"session_id": session_id}
        ).fetchone()
    return row[0] if row else "No summary found."

tools = [get_latest_patient_summary]
agent_llm = llm.bind_tools(tools)  # the bound model can now request this tool

# Direct invocation, useful for testing the tool outside an agent loop
tool_result = get_latest_patient_summary.invoke({"session_id": "triage-001"})
print(tool_result)
In a real deployment you’d pair this with an AgentExecutor or graph-based orchestration layer. The important part is that all shared state remains in Postgres instead of scattered across services.
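When you do call the bound model, its response carries a tool_calls list of dicts with "name" and "args" keys, and the orchestration layer's job is to map each call back onto your tool functions. A minimal sketch of that dispatch plumbing, with a plain function standing in for the real Postgres-backed tool so it runs without a model:

```python
def dispatch_tool_calls(tool_calls, registry):
    """Execute each requested tool call against a name -> callable registry.

    tool_calls follows LangChain's dict shape: {"name": ..., "args": {...}}
    (real messages also carry an "id" used to correlate tool results).
    """
    results = []
    for call in tool_calls:
        fn = registry[call["name"]]
        results.append(fn(**call["args"]))
    return results

# Stand-in for get_latest_patient_summary so the sketch is self-contained.
def fake_summary(session_id: str) -> str:
    return f"summary for {session_id}"

calls = [{"name": "get_latest_patient_summary",
          "args": {"session_id": "triage-001"}}]
outputs = dispatch_tool_calls(calls, {"get_latest_patient_summary": fake_summary})
```

A graph-based orchestrator does essentially this, plus feeding each result back to the model as a tool message keyed by the call id.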
Testing the Integration
Run a basic end-to-end check: write a message to Postgres, read it back through your workflow code, and confirm LangChain returns a response.
test_session = "test-123"
save_message(test_session, "tester", "user", "Generate a brief triage note for headache and nausea.")
history = get_session_messages(test_session)
assert len(history) > 0
prompt_text = "\n".join([f"{r.role}: {r.content}" for r in history])
result = llm.invoke(prompt_text)
print("DB rows:", len(history))
print("LLM output:", result.content[:120])
Expected output:
DB rows: 1
LLM output: ...
If that passes, your database persistence and LangChain invocation path are both working.
Real-World Use Cases
- Clinical triage routing
  - One agent collects symptoms.
  - Another applies routing rules.
  - PostgreSQL stores the decision trail for review and auditing.
- Prior authorization assistants
  - One agent extracts procedure codes.
  - Another checks payer requirements stored in Postgres.
  - A third drafts the submission packet.
- Care coordination systems
  - Agents share patient-safe summaries through PostgreSQL.
  - One schedules follow-up tasks.
  - Another monitors unresolved items across sessions.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit