LangGraph Tutorial (Python): persisting agent state for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to persist LangGraph agent state in Python so conversations survive process restarts, retries, and multi-step workflows. You need this when your agent is doing real work: support cases, insurance claims, bank onboarding, or any flow where losing state means losing context and money.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-openai
  • python-dotenv
  • An OpenAI API key in OPENAI_API_KEY
  • A local Postgres instance if you want durable persistence across restarts
  • Optional but recommended:
    • psycopg[binary]
    • langgraph-checkpoint-postgres

Step-by-Step

  1. Install the packages and set up your environment.
    For local development, start with a SQLite-backed checkpointer if you want zero infrastructure, then move to Postgres for production durability.
pip install langgraph langchain-openai python-dotenv
  2. Build a graph with explicit state and a checkpoint saver.
    The important part is the checkpointer: without it, LangGraph runs statelessly and every invocation starts fresh.
import os
from typing import Annotated

from dotenv import load_dotenv
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages

load_dotenv()

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
checkpointer = MemorySaver()
  3. Define a node that reads the current state and appends the model response.
    This is standard LangGraph: the node returns a partial state update, and LangGraph merges it into the persisted thread state.
def chat_node(state: AgentState):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

builder = StateGraph(AgentState)
builder.add_node("chat", chat_node)
builder.add_edge(START, "chat")
graph = builder.compile(checkpointer=checkpointer)
  4. Invoke the graph with a stable thread_id.
    This is the piece most people miss. The same thread_id tells LangGraph which conversation to resume from when you call the graph again.
config = {"configurable": {"thread_id": "customer-123"}}

result1 = graph.invoke(
    {"messages": [("user", "My policy number is 77821. Save that.")]},
    config=config,
)

result2 = graph.invoke(
    {"messages": [("user", "What policy number did I give you?")]},
    config=config,
)

print(result2["messages"][-1].content)
  5. Inspect persisted state directly before building more advanced flows.
    In production, this is how you debug what the agent knows at each step, especially when multiple tools or human approvals are involved.
snapshot = graph.get_state(config)
print(snapshot.values["messages"][-1].content)

for msg in snapshot.values["messages"]:
    print(f"{msg.type}: {msg.content}")
  6. Swap memory persistence for durable persistence when you move beyond local testing.
    MemorySaver is fine for demos and unit tests, but it disappears on restart. For real systems, use a database-backed checkpointer.
# pip install psycopg[binary] langgraph-checkpoint-postgres

import os
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = os.environ["POSTGRES_URI"]

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()

    graph = builder.compile(checkpointer=checkpointer)

    config = {"configurable": {"thread_id": "claim-8891"}}
    out = graph.invoke(
        {"messages": [("user", "Start my claim and remember my address as 12 King Street.")]},
        config=config,
    )
    print(out["messages"][-1].content)

Testing It

Run the script twice with the same thread_id. On the second run, ask something that depends on prior context and confirm the model can answer from persisted state instead of acting like it’s a new session.

If you’re using MemorySaver, restart the Python process and verify that state is gone. Then switch to Postgres and confirm the same thread survives process restarts.

A good sanity check is to print graph.get_state(config) after each invocation. If messages accumulate as expected, your checkpointing setup is working.

Next Steps

  • Add tool nodes and persist intermediate tool results across turns.
  • Move from message-only state to structured state fields like customer_id, case_status, and risk_flags.
  • Learn LangGraph interrupts so humans can approve or edit persisted agent state before execution continues.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
