LangGraph Tutorial (Python): persisting agent state for intermediate developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to persist LangGraph agent state in Python so a conversation can stop, restart, and continue without losing context. You need this when your agent runs across multiple requests, background jobs, or user sessions and you cannot keep everything in memory.

What You'll Need

  • Python 3.10+
  • langgraph
  • langchain-openai
  • python-dotenv
  • An OpenAI API key in OPENAI_API_KEY
  • A basic understanding of LangGraph nodes, edges, and state
  • A terminal and a Python virtual environment

Install the packages (the SQLite checkpointer used in step 5 ships as a separate package):

pip install langgraph langchain-openai python-dotenv langgraph-checkpoint-sqlite

Step-by-Step

  1. Start with a simple state schema and a graph that uses a checkpointer.

    The key idea is that LangGraph stores state by thread_id. If you pass the same thread_id later, the graph reloads the previous state instead of starting over.

from typing import Annotated, TypedDict
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver

load_dotenv()

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

llm = ChatOpenAI(model="gpt-4o-mini")
checkpointer = MemorySaver()
graph_builder = StateGraph(AgentState)
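Under the hood, a checkpointer is essentially a store keyed by thread_id. As a mental model only (this toy class is not the real MemorySaver API), the behavior can be sketched in plain Python:

```python
class ToyCheckpointer:
    """Toy sketch of thread-keyed persistence -- NOT the real MemorySaver API."""

    def __init__(self):
        self._threads = {}  # thread_id -> accumulated message list

    def load(self, thread_id):
        # Same thread_id -> same prior state; unknown id -> fresh state.
        return self._threads.get(thread_id, [])

    def save(self, thread_id, messages):
        self._threads[thread_id] = list(messages)

store = ToyCheckpointer()
store.save("customer-123", ["My name is Sam."])
resumed = store.load("customer-123") + ["What is my name?"]
```

This is the whole trick: the graph doesn't remember anything itself; the checkpointer reloads prior state whenever the same thread_id comes back.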
  2. Add a node that calls the model and returns the next message.

    This node reads the current message history from state, sends it to the model, and appends the response back into the same state. That gives you a persistent chat loop once the checkpointer is attached.

def chat_node(state: AgentState):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

graph_builder.add_node("chat", chat_node)
graph_builder.add_edge(START, "chat")
graph_builder.add_edge("chat", END)
app = graph_builder.compile(checkpointer=checkpointer)
  3. Run the graph with a thread_id, then run it again with the same thread_id.

    The first call creates persisted state for that thread. The second call picks up where the first one left off because LangGraph restores messages from the checkpointer.

config = {"configurable": {"thread_id": "customer-123"}}

result1 = app.invoke(
    {"messages": [("user", "My name is Sam. Remember that.")]},
    config=config,
)

result2 = app.invoke(
    {"messages": [("user", "What is my name?")]},
    config=config,
)

print(result1["messages"][-1].content)
print(result2["messages"][-1].content)
  4. Inspect persisted state directly before and after each turn.

    This is useful when debugging production flows. You can confirm that the graph is storing exactly what you expect between requests.

state_before = app.get_state(config)
print("Before:", len(state_before.values.get("messages", [])))

app.invoke(
    {"messages": [("user", "I also work at Acme Bank.")]},
    config=config,
)

state_after = app.get_state(config)
print("After:", len(state_after.values.get("messages", [])))

for msg in state_after.values["messages"]:
    print(type(msg).__name__, "=>", msg.content)
  5. Replace in-memory storage with a durable backend when you move beyond local testing.

    MemorySaver is fine for demos and unit tests, but it disappears when your process exits. For real deployments, use a persistent checkpointer such as SQLite or Postgres so agent state survives restarts.

from langgraph.checkpoint.sqlite import SqliteSaver

with SqliteSaver.from_conn_string("checkpoints.sqlite") as checkpointer:
    graph_builder = StateGraph(AgentState)
    
    def chat_node(state: AgentState):
        response = llm.invoke(state["messages"])
        return {"messages": [response]}

    graph_builder.add_node("chat", chat_node)
    graph_builder.add_edge(START, "chat")
    graph_builder.add_edge("chat", END)

    app = graph_builder.compile(checkpointer=checkpointer)

    config = {"configurable": {"thread_id": "policy-8842"}}
    out = app.invoke({"messages": [("user", "Persist this conversation.")]}, config=config)
    print(out["messages"][-1].content)

Testing It

Run the script once with a thread ID like customer-123, then run it again with a new user message using the same ID. If persistence is working, app.get_state(config) will show the full message history from both turns instead of only the latest input.

A good test is to stop the Python process entirely and restart it with the same durable backend. If you use SQLite or another persistent saver, the conversation should still be there after restart.
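For the restart test, you can also verify the SQLite file from outside the agent by counting checkpoint rows with the sqlite3 standard library. The table name "checkpoints" is an assumption about the langgraph-checkpoint-sqlite schema; verify it against your installed version:

```python
import sqlite3

def checkpoint_row_count(db_path: str, table: str = "checkpoints") -> int:
    """Count persisted checkpoint rows; returns 0 if the table doesn't exist yet.

    The default table name is an assumption -- check your checkpointer's schema.
    """
    with sqlite3.connect(db_path) as conn:
        try:
            return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        except sqlite3.OperationalError:  # table not created yet
            return 0

# e.g. checkpoint_row_count("checkpoints.sqlite") before and after restarting
```

If the count is unchanged after a restart, persistence survived; if it resets to zero, you are still writing to an in-memory or freshly created store.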

Also test isolation by changing only the thread_id. You should get a separate conversation history for each thread, which is what you want for multi-user systems.

Next Steps

  • Add structured state fields like customer_profile, risk_flags, or workflow_step alongside messages
  • Learn how to use conditional edges so persisted state can drive multi-step workflows
  • Swap SQLite for Postgres if you need shared persistence across multiple application instances
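The first two bullets can be sketched together: a widened state schema plus a routing function that a conditional edge could call. The field names (customer_profile, risk_flags, workflow_step) and the route targets are illustrative assumptions, not LangGraph requirements:

```python
from typing import TypedDict

class ExtendedAgentState(TypedDict, total=False):
    messages: list            # chat history, as before
    customer_profile: dict    # structured fields persisted alongside messages
    risk_flags: list
    workflow_step: str

def route_after_chat(state: ExtendedAgentState) -> str:
    """Hypothetical router for a conditional edge: escalate on any risk flag."""
    if state.get("risk_flags"):
        return "escalate"
    return "end"
```

You would wire this in with something like graph_builder.add_conditional_edges("chat", route_after_chat, {...}), mapping each returned string to a node name or END; because the flags live in checkpointed state, the routing decision survives restarts too.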


By Cyprian Aarons, AI Consultant at Topiax.
