LangGraph Tutorial (Python): connecting to PostgreSQL for intermediate developers

By Cyprian Aarons. Updated 2026-04-22.

This tutorial shows you how to persist LangGraph state in PostgreSQL using Python, so your agent can resume conversations, survive restarts, and keep durable checkpoints. You’d use this when building anything stateful: customer support agents, claims workflows, underwriting assistants, or any system where losing graph state is not acceptable.

What You'll Need

  • Python 3.10+
  • PostgreSQL 14+
  • A running PostgreSQL database with a connection string
  • langgraph
  • langgraph-checkpoint-postgres (provides the PostgresSaver used below)
  • langchain-core
  • psycopg v3 (the psycopg[binary] extra is the simplest install)
  • Optional: python-dotenv if you want to load env vars from a .env file

Install the packages:

pip install langgraph langgraph-checkpoint-postgres langchain-core "psycopg[binary]" python-dotenv

Set a PostgreSQL connection string like this:

export POSTGRES_URI="postgresql://postgres:postgres@localhost:5432/langgraph_demo"
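Before wiring anything up, it can help to sanity-check the connection string. Here is a minimal stdlib sketch using the demo URI above; `urlsplit` handles any `scheme://` URI, so nothing LangGraph-specific is involved:

```python
from urllib.parse import urlsplit

# Break the connection string into its parts so you can sanity-check
# host, port, and database name before handing it to LangGraph.
uri = "postgresql://postgres:postgres@localhost:5432/langgraph_demo"
parts = urlsplit(uri)

print(parts.scheme)            # postgresql
print(parts.hostname)          # localhost
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # langgraph_demo
```

If any of these come out wrong, fix the URI before debugging LangGraph itself.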

Step-by-Step

  1. Start by defining a minimal LangGraph workflow. The graph below increments a counter and appends messages to state, which makes it easy to verify that persistence is working across runs.
from typing import Annotated, TypedDict

from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]
    count: int


def increment(state: State):
    return {
        "count": state.get("count", 0) + 1,
        "messages": [HumanMessage(content=f"Run number {state.get('count', 0) + 1}")]
    }


builder = StateGraph(State)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_edge("increment", END)
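To build intuition for how the updates returned by increment land in State: keys annotated with a reducer such as add_messages accumulate across updates, while plain keys such as count are overwritten by the latest value. A simplified toy model of that merge behavior (this is an illustration, not LangGraph's actual implementation):

```python
# Toy illustration of channel-merge semantics (NOT LangGraph internals):
# a reducer-annotated key accumulates updates; a plain key is overwritten.

def merge_state(state: dict, update: dict) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key == "messages":  # reducer-style channel: append
            merged[key] = state.get(key, []) + value
        else:                  # plain channel: last write wins
            merged[key] = value
    return merged

s = {"messages": [], "count": 0}
s = merge_state(s, {"messages": ["Run number 1"], "count": 1})
s = merge_state(s, {"messages": ["Run number 2"], "count": 2})

print(s["count"])          # 2 (overwritten)
print(len(s["messages"]))  # 2 (accumulated)
```

This is why the graph's message history grows across runs while count simply reflects the latest increment.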
  2. Next, connect the graph to PostgreSQL using the built-in checkpoint saver. This is the key piece: LangGraph stores thread state in Postgres so the same thread_id can resume from where it left off.
import os
from langgraph.checkpoint.postgres import PostgresSaver

POSTGRES_URI = os.environ["POSTGRES_URI"]

with PostgresSaver.from_conn_string(POSTGRES_URI) as checkpointer:
    checkpointer.setup()
    graph = builder.compile(checkpointer=checkpointer)

    config = {"configurable": {"thread_id": "customer-123"}}
    result_1 = graph.invoke({"messages": [], "count": 0}, config=config)
    result_2 = graph.invoke({}, config=config)

    print(result_1)
    print(result_2)
  3. If you want this to behave like a real application, keep the same thread_id for one conversation or workflow instance. That ID is what ties a user session, claim case, or ticket to its persisted checkpoint.
from langchain_core.messages import HumanMessage

config_a = {"configurable": {"thread_id": "case-001"}}
config_b = {"configurable": {"thread_id": "case-002"}}

state_a_1 = graph.invoke(
    {"messages": [HumanMessage(content="Start case A")], "count": 0},
    config=config_a,
)

state_a_2 = graph.invoke({}, config=config_a)

state_b_1 = graph.invoke(
    {"messages": [HumanMessage(content="Start case B")], "count": 0},
    config=config_b,
)

print(state_a_1["count"], state_a_2["count"], state_b_1["count"])
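The thread isolation above boils down to checkpoints being keyed by thread_id. Here is a hypothetical in-memory sketch of that idea; the real PostgresSaver does the same thing durably in Postgres, and ToyCheckpointStore is purely illustrative:

```python
# Hypothetical in-memory sketch of thread-keyed checkpoints -- NOT a
# LangGraph API, just the concept behind per-thread resume.

class ToyCheckpointStore:
    def __init__(self):
        self._checkpoints: dict[str, dict] = {}

    def load(self, thread_id: str) -> dict:
        # An unseen thread_id starts from empty state.
        return self._checkpoints.get(thread_id, {"count": 0, "messages": []})

    def save(self, thread_id: str, state: dict) -> None:
        self._checkpoints[thread_id] = state

def run_increment(store: ToyCheckpointStore, thread_id: str) -> dict:
    state = store.load(thread_id)
    state = {
        "count": state["count"] + 1,
        "messages": state["messages"] + [f"Run number {state['count'] + 1}"],
    }
    store.save(thread_id, state)
    return state

store = ToyCheckpointStore()
a1 = run_increment(store, "case-001")
a2 = run_increment(store, "case-001")  # resumes case A where it left off
b1 = run_increment(store, "case-002")  # independent thread

print(a1["count"], a2["count"], b1["count"])  # 1 2 1
```

Case A advances to 2 while case B starts fresh at 1, mirroring the behavior you should see from the Postgres-backed graph.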
  4. For production code, open the checkpointer once at startup and keep its underlying connection alive for the life of the process. Building the graph once avoids reconnecting on every request and keeps your service behavior predictable.
import os

from psycopg import Connection
from psycopg.rows import dict_row
from langgraph.checkpoint.postgres import PostgresSaver

POSTGRES_URI = os.environ["POSTGRES_URI"]

def build_graph():
    # Don't use `with PostgresSaver.from_conn_string(...)` here: the
    # connection would close as soon as this function returned, and every
    # later invoke would fail. Open a long-lived connection instead.
    conn = Connection.connect(
        POSTGRES_URI, autocommit=True, prepare_threshold=0, row_factory=dict_row
    )
    checkpointer = PostgresSaver(conn)
    checkpointer.setup()
    return builder.compile(checkpointer=checkpointer)

# In a real app, call build_graph() once during startup and reuse the
# compiled graph for every request.
  5. If you need to inspect stored state manually, connect to Postgres and look at the checkpoint tables created by LangGraph. This is useful when debugging why a thread resumed with unexpected data or when validating retention policies.
SELECT *
FROM checkpoint_migrations;

-- checkpoints has no created_at column; checkpoint_id is a time-ordered
-- UUID, so sorting on it returns the most recent checkpoints first.
SELECT thread_id, checkpoint_ns, checkpoint_id
FROM checkpoints
ORDER BY checkpoint_id DESC
LIMIT 5;

Testing It

Run the Python script twice with the same thread_id. The first invocation should create initial state; the second should load prior state from PostgreSQL and continue from there instead of starting fresh.

You should see count increase across invocations for the same thread. If you change thread_id, you should get an independent conversation or workflow history.

If nothing persists, check these first:

  • POSTGRES_URI points to the right database
  • checkpointer.setup() ran successfully
  • The process has permission to create tables
  • You are reusing the same thread_id

For deeper verification, query Postgres directly and confirm new rows are being written after each run.

Next Steps

  • Add branching logic with conditional edges so your persisted workflow does more than increment counters.
  • Store structured business data in state, then validate it with Pydantic before writing checkpoints.
  • Combine PostgreSQL persistence with streaming so your agent can emit partial results while still keeping durable state.


By Cyprian Aarons, AI Consultant at Topiax.
