LangChain Tutorial (Python): adding memory to agents for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add durable conversation memory to a LangChain agent in Python, so the agent can remember prior turns and use that context in later tool calls. You need this when a stateless agent keeps forgetting user preferences, prior decisions, or case details between messages.

What You'll Need

  • Python 3.10+
  • langchain
  • langchain-openai
  • langchain-community
  • An OpenAI API key set as OPENAI_API_KEY
  • A shell with pip available
  • Basic familiarity with LangChain agents and tools

Install the packages:

pip install -U langchain langchain-openai langchain-community

Step-by-Step

  1. Start with a simple tool-enabled agent.
    We’ll use a calculator-style tool so you can see how memory affects follow-up questions instead of just single-turn answers.
import os
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

# Fail fast if the key isn't set, rather than erroring mid-run.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [multiply]
  2. Add conversation state using ConversationBufferMemory.
    This stores the running chat history in memory and lets the agent see prior user messages when generating the next response.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
)
  3. Build an agent prompt that includes history explicitly.
    For advanced work, don’t rely on magic defaults; wire the memory into the prompt so the model gets the exact context you expect.
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a precise assistant that uses tools when needed."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
  4. Create the agent and wrap it in an executor with memory integration.
    The executor handles tool calls, stores messages, and feeds history back into later turns.
from langchain.agents import create_tool_calling_agent, AgentExecutor

agent = create_tool_calling_agent(llm, tools, prompt)

executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
)
  5. Run multiple turns and watch the agent remember context.
    The second question depends on the first one, which is where memory becomes useful.
response1 = executor.invoke({"input": "My order number is 42."})
print(response1["output"])

response2 = executor.invoke({"input": "Multiply it by 3."})
print(response2["output"])
  6. Inspect what was stored in memory after execution.
    In production, this is useful for debugging missing context, prompt drift, or unexpected overwrites.
for message in memory.chat_memory.messages:
    print(f"{message.__class__.__name__}: {message.content}")

Testing It

Run the script and make sure the first response acknowledges the order number or at least preserves it in chat history. Then send the follow-up prompt “Multiply it by 3” and confirm the agent uses 42 from prior context instead of asking you to repeat it.

If you want a stronger test, replace the second prompt with something ambiguous like “What’s that times 3?” and verify the answer still resolves to 126. If it doesn’t, your prompt is not carrying history correctly or your model is not seeing chat_history.

For debugging, keep verbose=True on AgentExecutor so you can inspect tool calls and intermediate reasoning steps. That’s usually enough to catch broken message wiring before you ship.

Next Steps

  • Replace ConversationBufferMemory with ConversationSummaryMemory when chat logs get too large for your context window.
  • Persist memory to Redis or Postgres if you need session continuity across processes or deployments.
  • Add structured session IDs so each customer conversation gets isolated memory instead of one global buffer.
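The session-isolation idea in the last bullet can be sketched without any LangChain machinery: keep one message list per session ID and only ever read or write through that ID. The store shape, helper names, and session IDs below are illustrative assumptions; in production you would back this with RunnableWithMessageHistory or a database, but the isolation logic is the same:

```python
from collections import defaultdict

# Hypothetical in-process store: session_id -> list of (role, text) pairs.
_sessions: dict[str, list[tuple[str, str]]] = defaultdict(list)

def get_history(session_id: str) -> list[tuple[str, str]]:
    """Return the message list for one session, creating it on first use."""
    return _sessions[session_id]

def add_turn(session_id: str, user: str, ai: str) -> None:
    """Record one user/assistant exchange under a specific session."""
    history = get_history(session_id)
    history.append(("human", user))
    history.append(("ai", ai))

# Two customers, two isolated histories: neither sees the other's order number.
add_turn("customer-1", "My order number is 42.", "Got it: order 42.")
add_turn("customer-2", "My order number is 7.", "Got it: order 7.")
```

The key design choice is that every read and write is keyed by session ID, so a bug can leak context between customers only if two calls pass the same ID.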

By Cyprian Aarons, AI Consultant at Topiax.