LangGraph Tutorial (Python): adding memory to agents for advanced developers
This tutorial shows you how to add persistent memory to a LangGraph agent in Python using a real checkpointer and thread-scoped state. You need this when your agent must remember prior turns across requests, not just within a single in-memory run.
What You'll Need
- Python 3.10+
- langgraph
- langchain-openai
- python-dotenv
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with LangGraph state graphs and tool calling

Install the packages:

```
pip install langgraph langchain-openai python-dotenv
```
Step-by-Step
- Start by defining the agent state and a simple chat model. For memory, the key idea is that the graph state includes `messages`, and we'll later attach a checkpointer so those messages persist by thread.

```python
from typing import Annotated, Sequence, TypedDict

from dotenv import load_dotenv
from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

load_dotenv()

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
```
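Conceptually, the `Annotated[..., add_messages]` reducer tells LangGraph how to merge a node's partial return value into existing state. Here is a simplified, stdlib-only sketch of that append-style merge; the names `add_messages_sketch` and `merge_state` are hypothetical, and the real `add_messages` also upserts by message `id` and coerces message formats:

```python
from typing import Any, Callable, Dict, List

# Simplified reducer: append new messages to the existing list.
# (The real langgraph add_messages also deduplicates by message id.)
def add_messages_sketch(existing: List[Any], new: List[Any]) -> List[Any]:
    return list(existing) + list(new)

# Hypothetical merge step mimicking what the graph runtime does with
# a node's partial state update.
def merge_state(state: Dict[str, Any], update: Dict[str, Any],
                reducers: Dict[str, Callable]) -> Dict[str, Any]:
    merged = dict(state)
    for key, value in update.items():
        reducer = reducers.get(key)
        merged[key] = reducer(merged.get(key, []), value) if reducer else value
    return merged

state = {"messages": ["user: hi"]}
update = {"messages": ["assistant: hello"]}
state = merge_state(state, update, {"messages": add_messages_sketch})
print(state["messages"])  # ['user: hi', 'assistant: hello']
```

This is why nodes return only the new messages rather than the whole history: the reducer, not the node, owns the merge.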
- Build the node that calls the model. This node reads the current message history from state and returns one new assistant message, which LangGraph merges back into `messages`.

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def assistant_node(state: AgentState):
    # Invoke the model on the full history; the returned message is
    # appended to state["messages"] by the add_messages reducer.
    response = model.invoke(state["messages"])
    return {"messages": [response]}
```
- Create the graph and compile it with an in-memory checkpointer. The checkpointer is what gives the graph persistence across invocations when you reuse the same `thread_id`.

```python
from langgraph.checkpoint.memory import MemorySaver

builder = StateGraph(AgentState)
builder.add_node("assistant", assistant_node)
builder.add_edge(START, "assistant")
builder.add_edge("assistant", END)

checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
```
- Invoke the graph twice with the same thread ID. The first call creates state, and the second call rehydrates that same conversation so the model sees prior context.

```python
from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "customer-123"}}

result1 = graph.invoke(
    {"messages": [HumanMessage(content="My name is Amina. Remember it.")]},
    config=config,
)
result2 = graph.invoke(
    {"messages": [HumanMessage(content="What is my name?")]},
    config=config,
)
print(result2["messages"][-1].content)
```
- Inspect stored state directly when you need debugging or auditability. In production systems, this is how you confirm that persistence is working before wiring it into an API or webhook handler.

```python
snapshot = graph.get_state(config)
print("Values keys:", snapshot.values.keys())
print("Message count:", len(snapshot.values["messages"]))
for msg in snapshot.values["messages"]:
    print(type(msg).__name__, "=>", msg.content)
```
Testing It
Run the script once and confirm that the second response reflects information from the first turn. If it answers with something like “Your name is Amina,” then your thread-scoped memory is working.
Now change `thread_id` to a different value and run again. You should get a fresh conversation with no prior context, which proves memory is isolated per thread.
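The isolation you just observed can be illustrated with a plain-Python sketch of a thread-keyed store. The names here (`checkpoints`, `invoke_sketch`) are hypothetical stand-ins, but the keying principle is the same one the checkpointer uses with `thread_id`:

```python
from collections import defaultdict

# Hypothetical in-memory store keyed by thread_id, mirroring how a
# checkpointer scopes conversation state per thread.
checkpoints = defaultdict(list)

def invoke_sketch(thread_id: str, user_message: str) -> list:
    history = checkpoints[thread_id]  # rehydrate prior turns for this thread
    history.append(user_message)      # merge the new input into the history
    return history

invoke_sketch("customer-123", "My name is Amina.")
history_a = invoke_sketch("customer-123", "What is my name?")
history_b = invoke_sketch("customer-456", "Hello!")

print(len(history_a))  # 2: the same thread accumulates context
print(len(history_b))  # 1: a different thread starts fresh
```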
If you want a stronger test, restart your Python process and rerun both calls with the same `thread_id`. With `MemorySaver`, persistence only lasts for the lifetime of the process, so this also shows why production deployments usually swap in SQLite, Postgres, or another durable checkpointer.
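To see what durability buys you, here is a stdlib-only sketch of persisting thread state in SQLite. This is illustrative, not a real checkpointer: in practice you would swap `MemorySaver` for a purpose-built backend (for example, the SQLite or Postgres checkpointers shipped as separate langgraph packages) rather than rolling your own schema like the one assumed below:

```python
import json
import sqlite3

# Illustrative schema: one JSON blob of messages per thread_id.
conn = sqlite3.connect(":memory:")  # use a file path for real durability
conn.execute(
    "CREATE TABLE IF NOT EXISTS threads (thread_id TEXT PRIMARY KEY, messages TEXT)"
)

def save_thread(thread_id: str, messages: list) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO threads VALUES (?, ?)",
        (thread_id, json.dumps(messages)),
    )
    conn.commit()

def load_thread(thread_id: str) -> list:
    row = conn.execute(
        "SELECT messages FROM threads WHERE thread_id = ?", (thread_id,)
    ).fetchone()
    return json.loads(row[0]) if row else []

save_thread("customer-123", ["My name is Amina.", "Your name is Amina."])
print(load_thread("customer-123"))  # with a file-backed DB, this survives restarts
```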
Next Steps
- Replace `MemorySaver` with a durable checkpoint backend for real applications.
- Add tools to the agent and store tool outputs in state for richer conversational memory.
- Learn how to trim or summarize long message histories before they hit token limits.
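As a starting point for history trimming, here is a hedged, stdlib-only sketch that keeps the system prompt plus the last N messages. The `trim_history` helper and the `(role, content)` tuple format are hypothetical simplifications; for token-aware budgets you would likely reach for LangChain's message-trimming utilities instead:

```python
from typing import List, Tuple

# Each message is a (role, content) pair in this simplified sketch.
Message = Tuple[str, str]

def trim_history(messages: List[Message], keep_last: int) -> List[Message]:
    """Keep a leading system message (if any) plus the last keep_last messages."""
    system = [m for m in messages[:1] if m[0] == "system"]
    rest = messages[len(system):]
    tail = rest[-keep_last:] if keep_last > 0 else []
    return system + tail

history = [
    ("system", "You are a helpful agent."),
    ("human", "My name is Amina."),
    ("ai", "Nice to meet you, Amina."),
    ("human", "What is my name?"),
]
print(trim_history(history, keep_last=2))
# keeps the system message plus the last two turns
```

The design point is that the system prompt is pinned while conversational turns roll off the front, which keeps behavior stable as the window shrinks.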
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.