LangGraph Tutorial (Python): connecting to PostgreSQL for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to wire a LangGraph workflow to PostgreSQL so your agent can persist state, load memory, and store structured outputs across runs. You need this when a graph stops being a demo and starts handling real users, where in-memory state is not enough.

What You'll Need

  • Python 3.10+
  • A running PostgreSQL instance
  • A PostgreSQL connection string exported as DATABASE_URL
  • langgraph
  • langchain-core
  • psycopg v3 with binary extras
  • Optional: python-dotenv for local environment variables
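The DATABASE_URL value is a standard libpq-style connection URL. As a quick sanity check, you can pull it apart with the standard library (the credentials below are placeholders, not real values):

```python
from urllib.parse import urlparse

# Placeholder URL; substitute your own user, password, host, and database name.
url = "postgresql://app_user:secret@localhost:5432/langgraph_demo"

parts = urlparse(url)
print(parts.scheme)                # postgresql
print(parts.hostname, parts.port)  # localhost 5432
print(parts.path.lstrip("/"))      # langgraph_demo
```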

Install the packages:

pip install langgraph langchain-core "psycopg[binary]" python-dotenv

Step-by-Step

  1. Start by creating a PostgreSQL connection string and a simple graph state. For production systems, keep the state small and explicit; don’t dump arbitrary objects into the checkpoint store.
import os
from typing import TypedDict, Annotated

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage, AIMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

load_dotenv()

DATABASE_URL = os.environ["DATABASE_URL"]

class State(TypedDict):
    messages: Annotated[list, add_messages]

def assistant_node(state: State):
    last_message = state["messages"][-1].content
    reply = AIMessage(content=f"Stored in graph state. You said: {last_message}")
    return {"messages": [reply]}
  2. Next, create the PostgreSQL-backed checkpointer. This is what gives LangGraph durable execution and lets you resume threads by thread_id. One caveat: PostgresSaver.from_conn_string returns a context manager, and the connection closes as soon as its with block exits, so any invocations after the block will fail. Since the rest of this tutorial invokes the graph at top level, open the connection explicitly and keep it alive.
from psycopg import Connection
from psycopg.rows import dict_row

from langgraph.checkpoint.postgres import PostgresSaver

builder = StateGraph(State)
builder.add_node("assistant", assistant_node)
builder.add_edge(START, "assistant")
builder.add_edge("assistant", END)

# The checkpointer needs dict rows and autocommit on a direct connection.
conn = Connection.connect(DATABASE_URL, autocommit=True, row_factory=dict_row)
checkpointer = PostgresSaver(conn)
checkpointer.setup()  # creates the checkpoint tables on first run
app = builder.compile(checkpointer=checkpointer)
  3. Now invoke the graph with a stable thread ID. The thread ID is the key that tells LangGraph which conversation or workflow instance should be loaded from PostgreSQL.
config = {
    "configurable": {
        "thread_id": "customer-1234"
    }
}

result = app.invoke(
    {"messages": [HumanMessage(content="Hello from PostgreSQL")]},
    config=config,
)

for message in result["messages"]:
    print(f"{message.type}: {message.content}")
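Thread IDs only need to be stable and unique per conversation. One pattern (an illustration, not a LangGraph requirement) is deriving them deterministically from identifiers you already have, so every request for the same customer lands on the same thread:

```python
import hashlib

def thread_id_for(customer_id: str, channel: str = "support") -> str:
    """Deterministic thread key: same customer + channel always maps to the same thread."""
    digest = hashlib.sha256(f"{channel}:{customer_id}".encode()).hexdigest()[:16]
    return f"{channel}-{digest}"

# Same inputs, same thread — safe to recompute on every request.
print(thread_id_for("customer-1234"))
```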
  4. Run it a second time with the same thread_id to confirm persistence. You should see the previous messages loaded from PostgreSQL before the new turn is appended.
second_result = app.invoke(
    {"messages": [HumanMessage(content="What did I say before?")]},
    config=config,
)

for message in second_result["messages"]:
    print(f"{message.type}: {message.content}")
  5. If you want to inspect checkpoint data directly, query the tables LangGraph created (names such as checkpoints and checkpoint_writes). This is useful when debugging stuck threads, missing state, or schema issues in staging.
import psycopg

with psycopg.connect(DATABASE_URL) as conn:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT table_name
            FROM information_schema.tables
            WHERE table_schema = 'public'
            ORDER BY table_name;
        """)
        for row in cur.fetchall():
            print(row[0])

Testing It

Run the script once and confirm that the first invocation prints an AI response based on your input. Then run it again with the same thread_id; if PostgreSQL persistence is working, LangGraph will restore prior state instead of starting fresh.

If you want stronger verification, inspect the database tables after each run and confirm rows are being added for checkpoints. In a real deployment, also test concurrent threads with different thread_id values so one customer’s state never bleeds into another’s.

Next Steps

  • Add a real LLM node using ChatOpenAI or another model provider instead of the echo-style node above.
  • Store tool results and structured outputs in separate typed fields so your checkpoint data stays clean.
  • Move from single-node graphs to multi-node workflows with branching, retries, and human-in-the-loop approval steps.


By Cyprian Aarons, AI Consultant at Topiax.
