How to Integrate LangChain for healthcare with PostgreSQL for AI agents

By Cyprian Aarons · Updated 2026-04-21
Tags: langchain-for-healthcare · postgresql · ai-agents

Combining LangChain for healthcare with PostgreSQL gives you a practical pattern for agent systems that need both clinical context and durable state. You use LangChain to interpret patient-facing or clinician-facing requests, then PostgreSQL to persist conversations, audit trails, retrieval metadata, and structured clinical data.

That matters when your agent needs to answer from governed data, remember prior interactions, and keep a traceable record of what it did.

Prerequisites

  • Python 3.10+
  • A running PostgreSQL instance
  • psycopg2-binary or psycopg installed
  • LangChain packages installed:
    • langchain
    • langchain-community
    • langchain-openai or your preferred model provider
  • Access to your healthcare data source or document store
  • Environment variables configured:
    • OPENAI_API_KEY or equivalent LLM key
    • DATABASE_URL for PostgreSQL
  • Basic understanding of:
    • LangChain chains/agents
    • SQL schema design
    • PHI handling and access controls
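Before running anything, it helps to fail fast when configuration is missing. A minimal sketch (the helper name is illustrative, not from any library):

```python
import os

def missing_config(required=("OPENAI_API_KEY", "DATABASE_URL")):
    """Return the names of any required environment variables that are not set."""
    return [name for name in required if name not in os.environ]

missing = missing_config()
if missing:
    print(f"Set these before continuing: {', '.join(missing)}")
```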

Integration Steps

  1. Install the dependencies

    Start with the packages you actually need for a production integration.

    pip install langchain langchain-community langchain-openai psycopg2-binary sqlalchemy
    

    If you plan to use vector search in PostgreSQL, add pgvector support too.

    pip install pgvector
    
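Installing the Python package is not enough on its own: the pgvector extension must also be enabled in the database itself. A sketch of the server-side setup, assuming you have rights to create extensions; the table name and the 1536-dimension size are illustrative (1536 matches OpenAI's text-embedding-3-small, so adjust it to your embedding model):

```python
# Run these once against the database, e.g. via psql or with
# conn.execute(text(...)) on the SQLAlchemy engine created in the next step.
ENABLE_PGVECTOR = "CREATE EXTENSION IF NOT EXISTS vector"

CREATE_EMBEDDINGS_TABLE = """
CREATE TABLE IF NOT EXISTS document_embeddings (
    document_id TEXT PRIMARY KEY,
    embedding vector(1536)
)
"""
```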
  2. Create a PostgreSQL connection and schema

    Use SQLAlchemy for connection management and create tables for agent state and healthcare records metadata.

    import os
    from sqlalchemy import create_engine, text
    
    DATABASE_URL = os.environ["DATABASE_URL"]
    engine = create_engine(DATABASE_URL)
    
    with engine.begin() as conn:
        conn.execute(text("""
            CREATE TABLE IF NOT EXISTS patient_interactions (
                id SERIAL PRIMARY KEY,
                patient_id TEXT NOT NULL,
                user_query TEXT NOT NULL,
                agent_response TEXT NOT NULL,
                created_at TIMESTAMP DEFAULT NOW()
            )
        """))
    
        conn.execute(text("""
            CREATE TABLE IF NOT EXISTS clinical_documents (
                id SERIAL PRIMARY KEY,
                document_id TEXT UNIQUE NOT NULL,
                patient_id TEXT NOT NULL,
                source TEXT NOT NULL,
                content TEXT NOT NULL,
                created_at TIMESTAMP DEFAULT NOW()
            )
        """))
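Later steps filter clinical_documents by patient_id and sort by created_at, so an index on those columns keeps retrieval fast as the table grows. An illustrative statement, run the same way as the DDL above:

```python
# Execute via conn.execute(text(CREATE_PATIENT_INDEX)) inside engine.begin(),
# exactly like the CREATE TABLE statements above.
CREATE_PATIENT_INDEX = """
CREATE INDEX IF NOT EXISTS idx_clinical_documents_patient
ON clinical_documents (patient_id, created_at DESC)
"""
```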
    
  3. Load healthcare documents into LangChain and store metadata in PostgreSQL

    In healthcare systems, you usually separate raw content from retrieval metadata. LangChain handles the document pipeline; PostgreSQL stores what was ingested and who it belongs to.

    from langchain_core.documents import Document
    
    docs = [
        Document(
            page_content="Patient has hypertension controlled with lisinopril.",
            metadata={"document_id": "doc_001", "patient_id": "p123", "source": "discharge_summary"}
        ),
        Document(
            page_content="Follow-up recommended in 30 days after medication review.",
            metadata={"document_id": "doc_002", "patient_id": "p123", "source": "clinic_note"}
        ),
    ]
    
    with engine.begin() as conn:
        for doc in docs:
            conn.execute(
                text("""
                    INSERT INTO clinical_documents (document_id, patient_id, source, content)
                    VALUES (:document_id, :patient_id, :source, :content)
                    ON CONFLICT (document_id) DO NOTHING
                """),
                {
                    "document_id": doc.metadata["document_id"],
                    "patient_id": doc.metadata["patient_id"],
                    "source": doc.metadata["source"],
                    "content": doc.page_content,
                },
            )
    
  4. Build a LangChain chat workflow that reads from PostgreSQL

    Pull the relevant records from PostgreSQL, pass them into the prompt, and generate a response with your model provider.

    import os
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate
    
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    
    def fetch_patient_context(patient_id: str):
        with engine.begin() as conn:
            rows = conn.execute(
                text("""
                    SELECT source, content
                    FROM clinical_documents
                    WHERE patient_id = :patient_id
                    ORDER BY created_at DESC
                    LIMIT 5
                """),
                {"patient_id": patient_id},
            ).fetchall()
    
        return "\n".join([f"[{row.source}] {row.content}" for row in rows])
    
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a healthcare assistant. Use only the provided context."),
        ("human", "Patient context:\n{context}\n\nQuestion: {question}")
    ])
    
    def answer_question(patient_id: str, question: str):
        context = fetch_patient_context(patient_id)
        chain = prompt | llm
        response = chain.invoke({"context": context, "question": question})
        return response.content
    
    print(answer_question("p123", "When should follow-up happen?"))
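One hedge worth adding: fetch_patient_context returns an empty string when a patient has no documents, and the prompt above would then ask the model to answer without any context. A defensive wrapper sketch (the function name and fallback message are illustrative; fetch and answer are injectable so the guard can be exercised without a database or a model):

```python
def answer_question_guarded(patient_id: str, question: str,
                            fetch=None, answer=None) -> str:
    """Refuse to call the model when no governed context exists."""
    fetch = fetch or fetch_patient_context   # defined earlier in this step
    context = fetch(patient_id)
    if not context.strip():
        return "No clinical records are available for this patient."
    answer = answer or answer_question       # defined earlier in this step
    return answer(patient_id, question)
```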
    
  5. Persist agent responses back into PostgreSQL

    This gives you auditability and lets downstream systems inspect prior outputs.

    def save_interaction(patient_id: str, user_query: str, agent_response: str):
        with engine.begin() as conn:
            conn.execute(
                text("""
                    INSERT INTO patient_interactions (patient_id, user_query, agent_response)
                    VALUES (:patient_id, :user_query, :agent_response)
                """),
                {
                    "patient_id": patient_id,
                    "user_query": user_query,
                    "agent_response": agent_response,
                },
            )
    
    question = "When should follow-up happen?"
    answer = answer_question("p123", question)
    save_interaction("p123", question, answer)
    

Testing the Integration

Run a simple end-to-end check: insert one document, query it through LangChain, then verify the response is stored in PostgreSQL.

test_question = "What medication is mentioned?"
test_answer = answer_question("p123", test_question)
save_interaction("p123", test_question, test_answer)

with engine.begin() as conn:
    result = conn.execute(
        text("""
            SELECT patient_id, user_query, agent_response
            FROM patient_interactions
            WHERE patient_id = :patient_id
            ORDER BY created_at DESC
            LIMIT 1
        """),
        {"patient_id": "p123"},
    ).fetchone()

print(result)

Expected output (the third field is generated by the model, so its exact wording varies between runs; it should reference lisinopril):

('p123', 'What medication is mentioned?', 'The documents mention lisinopril.')
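Because the response text is model-generated, exact-string comparisons are brittle. A sketch of a more tolerant check that asserts on the stable fields plus a keyword (the helper name is illustrative):

```python
def check_latest_interaction(row, patient_id, question, keyword):
    """Assert on stable fields and a keyword instead of exact model wording."""
    assert row is not None, "no interaction stored for this patient"
    assert row[0] == patient_id
    assert row[1] == question
    assert keyword.lower() in row[2].lower(), f"response does not mention {keyword}"

# Example against a representative stored row:
check_latest_interaction(
    ("p123", "What medication is mentioned?", "The notes mention lisinopril."),
    "p123", "What medication is mentioned?", "lisinopril",
)
```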

Real-World Use Cases

  • Clinical triage assistant

    • Reads recent notes from PostgreSQL-backed records.
    • Uses LangChain to summarize symptoms and route cases based on policy.
  • Prior authorization copilot

    • Pulls procedure history and supporting documentation.
    • Generates insurer-ready summaries with an auditable interaction log.
  • Care navigation agent

    • Remembers previous conversations in PostgreSQL.
    • Answers follow-up questions about appointments, medications, and discharge instructions using retrieved context.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
