LangChain Tutorial (Python): persisting agent state for beginners
This tutorial shows you how to persist LangChain agent state in Python so a conversation can survive process restarts. You need this when your agent is handling multi-turn support, claims intake, or any workflow where losing memory means losing context.
What You'll Need
- Python 3.10+
- langchain
- langchain-openai
- langgraph
- langgraph-checkpoint-sqlite (provides the SQLite checkpointer)
- An OpenAI API key set as OPENAI_API_KEY
- A local SQLite database file for persistence (created automatically on first run)
- Basic familiarity with LangChain agents and tool calling
Install the packages:
pip install langchain langchain-openai langgraph langgraph-checkpoint-sqlite
Set your API key:
export OPENAI_API_KEY="your-key-here"
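Before wiring up the agent, it can help to fail fast when the key is missing rather than getting an opaque error mid-conversation. A minimal sketch (the helper name `require_api_key` is mine, not part of LangChain):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named environment variable or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; run `export {name}=...` first")
    return value
```

Call it once at startup so a missing key stops the script before any agent code runs.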
Step-by-Step
- Create a simple agent with a tool and a persistent checkpointer.
The important part here is the checkpointer. It stores the graph state after each step, so when you call the agent again with the same thread ID, it resumes from the previous conversation instead of starting over.
import sqlite3

from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.prebuilt import create_react_agent

@tool
def lookup_policy(policy_id: str) -> str:
    """Return mock policy details for a given policy ID."""
    return f"Policy {policy_id}: active, premium paid, renewal due in 30 days"

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [lookup_policy]

# SqliteSaver.from_conn_string is a context manager in recent langgraph
# versions, so for a long-lived saver build one from a connection directly.
conn = sqlite3.connect("agent_state.db", check_same_thread=False)
checkpointer = SqliteSaver(conn)

agent = create_react_agent(llm, tools, checkpointer=checkpointer)
- Run the agent with a thread ID so LangGraph knows which conversation to resume.
The thread_id is the key that ties all turns together. If you reuse it later, the agent loads prior state from SQLite and continues from there.
config = {"configurable": {"thread_id": "customer-123"}}

result1 = agent.invoke(
    {"messages": [("user", "Hi, I need help with policy 98765")]},
    config=config,
)
print(result1["messages"][-1].content)
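To build intuition for what the checkpointer is doing with that thread ID, here is a dict-backed sketch of thread-keyed memory. This is a conceptual model only, not LangGraph's actual storage format:

```python
# Conceptual model: each thread_id maps to its own accumulated history,
# which is roughly what the checkpointer provides, persisted to disk.
store: dict[str, list] = {}

def invoke_with_memory(thread_id: str, message: str) -> list:
    history = store.setdefault(thread_id, [])
    history.append(("user", message))
    return history

invoke_with_memory("customer-123", "Hi, I need help with policy 98765")
invoke_with_memory("customer-123", "What is its renewal status?")
```

Every call with `"customer-123"` lands in the same history; a different thread ID would get its own empty list.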
- Send a follow-up message using the same thread ID.
This is where persistence becomes obvious. The second call does not need you to resend the full history; it picks up from stored state and keeps the context intact.
result2 = agent.invoke(
    {"messages": [("user", "What is its renewal status?")]},
    config=config,
)
print(result2["messages"][-1].content)
- Inspect the stored state directly to confirm it was written to disk.
A good habit in production is verifying persistence at the storage layer, not just trusting the chat output. Here we query SQLite and confirm that LangGraph wrote checkpoint rows for our thread.
import sqlite3

conn = sqlite3.connect("agent_state.db")
cursor = conn.cursor()

# List the tables LangGraph created for checkpoint storage
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
tables = cursor.fetchall()
print("Tables:", tables)

# Count checkpoint rows; the agent writes one after each step
cursor.execute("SELECT COUNT(*) FROM checkpoints;")
count = cursor.fetchone()[0]
print("Checkpoint rows:", count)

conn.close()
- Simulate a restart and load the same conversation again.
If persistence works, recreating the agent should not lose memory as long as you keep the same database file and thread ID. This is the pattern you want in real services where workers restart or scale horizontally.
import sqlite3

from langchain_openai import ChatOpenAI
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.prebuilt import create_react_agent

# In a real restart this runs in a fresh process, so the lookup_policy tool
# from step 1 must be defined (or imported) here as well.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
conn = sqlite3.connect("agent_state.db", check_same_thread=False)
checkpointer = SqliteSaver(conn)
agent_after_restart = create_react_agent(llm, [lookup_policy], checkpointer=checkpointer)

result3 = agent_after_restart.invoke(
    {"messages": [("user", "Summarize everything you know about my policy")]},
    config={"configurable": {"thread_id": "customer-123"}},
)
print(result3["messages"][-1].content)
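The restart guarantee itself is ordinary SQLite behavior: anything committed before a process exits is there when a new process opens the same file. A stdlib sketch of that property, independent of LangGraph (the `demo_state` table is illustrative, not LangGraph's schema):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "agent_state_demo.db")

# First "process": write a turn, commit, and exit.
conn1 = sqlite3.connect(path)
conn1.execute("CREATE TABLE demo_state (thread_id TEXT, turn TEXT)")
conn1.execute("INSERT INTO demo_state VALUES ('customer-123', 'looked up policy 98765')")
conn1.commit()
conn1.close()

# Second "process" after a restart: same file, the state is still there.
conn2 = sqlite3.connect(path)
rows = conn2.execute(
    "SELECT turn FROM demo_state WHERE thread_id = 'customer-123'"
).fetchall()
conn2.close()
```

The checkpointer layers a schema and serialization on top of this, but the durability comes from the database file, which is why keeping the same file path across restarts is the whole trick.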
Testing It
Run the script once and make sure the first response mentions your policy lookup tool output. Then run it again with the same thread_id; the follow-up question should behave like part of the same conversation instead of a fresh chat.
Next, stop the Python process entirely and rerun the script against the same agent_state.db file. If persistence is working, the final summary request should still have access to the earlier turns.
If you want a stronger test, change the thread_id to something new like customer-456. You should see a clean conversation with no prior context carried over.
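That isolation expectation is easy to express at the storage layer too. A stdlib sketch using an illustrative table, not LangGraph's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints_demo (thread_id TEXT, payload TEXT)")
conn.execute("INSERT INTO checkpoints_demo VALUES ('customer-123', 'turn 1')")
conn.execute("INSERT INTO checkpoints_demo VALUES ('customer-123', 'turn 2')")

# A brand-new thread_id has no rows, i.e. no carried-over context.
rows_new = conn.execute(
    "SELECT COUNT(*) FROM checkpoints_demo WHERE thread_id = 'customer-456'"
).fetchone()[0]
rows_old = conn.execute(
    "SELECT COUNT(*) FROM checkpoints_demo WHERE thread_id = 'customer-123'"
).fetchone()[0]
conn.close()
```

If a "fresh" thread ever shows prior context, something is reusing a thread ID, not leaking through the database.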
Next Steps
- Add structured state for business fields like customer_id, policy_number, and case_status
- Swap SQLite for Postgres when you need multi-instance production deployments
- Learn how to use LangGraph reducers so agents can merge state safely across steps
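On that last bullet: a reducer is just a function that merges an update into existing state, so updates accumulate instead of overwriting each other. A toy sketch of the idea, not LangGraph's API:

```python
def merge_messages(existing: list, update: list) -> list:
    """Toy reducer: append new messages rather than replacing the state."""
    return existing + update

# Each step's output is folded into the running state.
state = ["turn 1"]
state = merge_messages(state, ["turn 2"])
state = merge_messages(state, ["turn 3"])
```

In LangGraph you attach a reducer to a state field, and the framework applies it whenever a node returns an update for that field.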
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.