LangChain Tutorial (Python): adding memory to agents for intermediate developers
This tutorial shows you how to give a LangChain agent short-term memory in Python so it can remember earlier turns in the same conversation. You need this when your agent has to answer follow-up questions, keep context across tool calls, or avoid asking the user for the same information twice.
What You'll Need
- Python 3.10+
- langchain
- langchain-openai
- openai
- An OpenAI API key set as OPENAI_API_KEY
- A basic understanding of LangChain agents and tools
- A terminal and a virtual environment
Install the packages:
```bash
pip install langchain langchain-openai openai
```
Set your API key:
```bash
export OPENAI_API_KEY="your-key-here"
```
Step-by-Step
- Start with a simple tool and model setup. The agent will use a calculator tool so you can see memory working across multiple turns, not just plain chat.

```python
import os

from langchain_openai import ChatOpenAI
from langchain.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [multiply]
```
- Add conversation memory with ConversationBufferMemory. This stores prior messages in a buffer and exposes them to the agent through a memory key.

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
)
```
- Build an agent that can read that memory. The important part is wiring the prompt, memory, and executor together so each turn includes previous messages.

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import AgentExecutor, create_tool_calling_agent

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
)
```
- Run multiple turns against the same executor. If memory is wired correctly, the second question can refer to information from the first question without restating it.

```python
response1 = executor.invoke({"input": "My name is Jordan. Remember that."})
print(response1["output"])

response2 = executor.invoke({"input": "What is my name?"})
print(response2["output"])

response3 = executor.invoke({"input": "What is 12 times 8?"})
print(response3["output"])
```
- If you want to inspect what got stored, print the memory variables directly. This is useful when debugging why an agent seems to forget context.

```python
history = memory.load_memory_variables({})
print(history["chat_history"])
```
Testing It
Run the script and watch the second response. If memory is working, the agent should answer that your name is Jordan without you repeating it. Then ask a follow-up like “What did I tell you before?” to confirm it can retrieve earlier context from the same session.
If the agent forgets everything between calls, check three things first: return_messages=True, memory_key="chat_history", and that your prompt includes MessagesPlaceholder(variable_name="chat_history"). Those three pieces have to line up exactly or LangChain won’t inject history into the prompt.
If you get tool-related errors, make sure your tool function has type hints and a docstring. LangChain uses both to build tool schemas for tool-calling agents.
Next Steps
- Swap ConversationBufferMemory for ConversationSummaryMemory when conversations get long.
- Persist memory per user by storing message history in Redis or a database.
- Add structured tools and test how memory behaves across multi-step workflows with external APIs.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.