LangGraph Tutorial (Python): adding memory to agents for beginners

By Cyprian Aarons. Updated 2026-04-21.
Tags: langgraph, adding-memory-to-agents-for-beginners, python

This tutorial shows you how to add persistent memory to a LangGraph agent in Python using a checkpointer and a thread_id. You need this when you want an agent to remember prior turns across requests instead of treating every message as the first one.

What You'll Need

  • Python 3.10+
  • A virtual environment
  • langgraph
  • langchain-openai
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with LangGraph state graphs and nodes

Install the packages:

pip install langgraph langchain-openai

Step-by-Step

  1. Start with a minimal graph state that stores chat messages.
    For memory, the important part is that your state includes a messages field annotated with add_messages, so LangGraph knows how to append new turns instead of replacing them.
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI


class State(TypedDict):
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(model="gpt-4o-mini")
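To see why the add_messages annotation matters, the reducer semantics can be sketched in plain Python. This toy version is my own illustration of the append-vs-replace behavior, not LangGraph's actual implementation:

```python
# Toy illustration of reducer semantics (not LangGraph's real add_messages).
def append_reducer(existing: list, update: list) -> list:
    """Merge a state update by appending, the way add_messages does."""
    return list(existing) + list(update)


def replace_reducer(existing: list, update: list) -> list:
    """Default merge for an un-annotated field: the update simply wins."""
    return list(update)


history = [("user", "hi")]
update = [("assistant", "hello!")]

appended = append_reducer(history, update)
replaced = replace_reducer(history, update)

print(len(appended))  # 2: both turns kept
print(len(replaced))  # 1: the old turn is lost
```

Without the annotation, each node's return value would overwrite the conversation rather than extend it.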
  2. Create a node that calls the model with the current conversation.
    The node takes the full state, sends the messages to the LLM, and returns the assistant response in the same message format LangGraph expects.
def chat_node(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


graph = StateGraph(State)
graph.add_node("chat", chat_node)
graph.add_edge(START, "chat")
graph.add_edge("chat", END)
  3. Add a checkpointer so LangGraph can persist state between runs.
    This is the actual memory layer. For beginners, an in-memory checkpointer is enough to understand the pattern before moving to Redis or a database-backed store.
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)
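Conceptually, the checkpointer is a store of state snapshots keyed by thread. This toy class is a hypothetical sketch of that idea, with illustrative names rather than LangGraph's internal API:

```python
# Toy sketch of what a checkpointer does conceptually: persist state
# per thread_id so a later run can resume it. Illustrative only.
class ToyCheckpointer:
    def __init__(self):
        self._store: dict[str, dict] = {}

    def save(self, thread_id: str, state: dict) -> None:
        # Store a snapshot of the state for this conversation thread.
        self._store[thread_id] = dict(state)

    def load(self, thread_id: str) -> dict:
        # Return the last snapshot, or a fresh state for new threads.
        return dict(self._store.get(thread_id, {"messages": []}))


cp = ToyCheckpointer()
state = cp.load("user-123")
state["messages"].append(("user", "My name is Priya."))
cp.save("user-123", state)

resumed = cp.load("user-123")
print(len(resumed["messages"]))  # 1: the earlier turn came back
```

MemorySaver plays exactly this role in the compiled graph, and swapping in a database-backed checkpointer changes only where the snapshots live.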
  4. Run the graph with a stable thread_id.
    The thread_id is how LangGraph knows which conversation to resume. If you reuse the same ID, the agent gets its previous messages back automatically.
config = {"configurable": {"thread_id": "user-123"}}

result1 = app.invoke(
    {"messages": [("user", "My name is Priya and I work in claims.")]},
    config=config,
)

result2 = app.invoke(
    {"messages": [("user", "What do I do for work?")]},
    config=config,
)

print(result2["messages"][-1].content)
  5. Inspect what LangGraph stored for that thread.
    This helps confirm that memory is really being persisted and not just appearing by chance inside one function call.
snapshot = app.get_state(config)

for message in snapshot.values["messages"]:
    role = getattr(message, "type", "unknown")
    content = getattr(message, "content", str(message))
    print(f"{role}: {content}")

Testing It

Run the script twice with the same thread_id and ask follow-up questions on the second turn. The assistant should answer using facts from the first turn, like your name or job role.

If you change the thread_id, you should get a fresh conversation with no prior context. That’s your proof that memory is scoped per thread, not global across all users.
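A common convention, my suggestion rather than a LangGraph requirement, is to derive the thread_id from your own user and session identifiers, which makes the per-thread scoping explicit:

```python
def make_thread_id(user_id: str, session_id: str) -> str:
    # One conversation thread per (user, session) pair; reusing the same
    # pair resumes the same history, a new pair starts fresh.
    return f"user-{user_id}-session-{session_id}"


config_a = {"configurable": {"thread_id": make_thread_id("priya", "s1")}}
config_b = {"configurable": {"thread_id": make_thread_id("priya", "s2")}}

# Same user, different sessions: distinct threads, so no shared memory.
print(config_a["configurable"]["thread_id"])  # user-priya-session-s1
```

Any deterministic scheme works, as long as the same conversation always maps to the same ID.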

If you want a quick sanity check, print the stored state after each invocation and confirm both user and assistant messages are present. In production, this same pattern maps cleanly to per-user or per-session conversation history.

Next Steps

  • Replace MemorySaver with a persistent backend such as SQLite or Postgres for real applications.
  • Add summarization logic so long conversations don’t grow without bound.
  • Store structured business data in state alongside messages, such as customer ID or claim status.
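The summarization bullet above can start as simple trimming. This hypothetical helper, not part of LangGraph, keeps a rolling window of recent turns so state stays bounded:

```python
def trim_history(messages: list, max_turns: int = 20) -> list:
    """Keep only the most recent messages so state stays bounded.

    A real version might summarize the dropped turns instead of
    discarding them outright.
    """
    if len(messages) <= max_turns:
        return messages
    return messages[-max_turns:]


history = [("user", f"turn {i}") for i in range(50)]
trimmed = trim_history(history, max_turns=20)
print(len(trimmed))  # 20
print(trimmed[0])    # ('user', 'turn 30')
```

You would call something like this inside a node before invoking the model, so the checkpointer persists the trimmed history rather than the full transcript.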

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
