AutoGen Tutorial (Python): Adding Memory to Agents for Beginners
This tutorial shows you how to give an AutoGen agent persistent memory in Python using a simple file-backed store. You need this when your agent has to remember user preferences, prior decisions, or case context across multiple runs instead of starting from zero every time.
What You'll Need
- Python 3.10+
- `pyautogen` installed
- An OpenAI API key
- A local project folder where the agent can write memory files
- Basic familiarity with AutoGen's `AssistantAgent` and `UserProxyAgent`
Install the package:

```shell
pip install pyautogen
```

Set your API key:

```shell
export OPENAI_API_KEY="your-key-here"
```
Step-by-Step
- Start by creating a small memory store. For beginners, a JSON file is enough because it is easy to inspect and debug. In production, you would replace this with Redis, Postgres, or a vector store.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")

def load_memory() -> dict:
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}

def save_memory(memory: dict) -> None:
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))
```
- Next, load your AutoGen config and create the assistant. This uses the standard `config_list_from_json` helper so your model settings stay outside your code.

```python
import autogen

config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
)
llm_config = {
    "config_list": config_list,
    "temperature": 0,
}
assistant = autogen.AssistantAgent(
    name="memory_assistant",
    llm_config=llm_config,
)
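If you have not set up an `OAI_CONFIG_LIST` file yet, it is a JSON list of model configurations that `config_list_from_json` reads from the named file or environment variable. A minimal sketch (the model name and key are placeholders for your own values):

```json
[
  {
    "model": "gpt-4",
    "api_key": "your-key-here"
  }
]
```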
- Now add a function that injects memory into the prompt before each run. This is the simplest form of memory: retrieve stored facts, then prepend them to the user request so the model can use them.

```python
def build_prompt(user_message: str, memory: dict) -> str:
    lines = ["You are a helpful assistant."]
    if memory:
        lines.append("Known memory:")
        for key, value in memory.items():
            lines.append(f"- {key}: {value}")
        lines.append("")
    lines.append(f"User message: {user_message}")
    return "\n".join(lines)
```
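To see exactly what the model will receive, you can render the prompt with sample memory. This sketch repeats `build_prompt` so it runs on its own; the stored fact matches the one used later in the tutorial:

```python
def build_prompt(user_message: str, memory: dict) -> str:
    lines = ["You are a helpful assistant."]
    if memory:
        lines.append("Known memory:")
        for key, value in memory.items():
            lines.append(f"- {key}: {value}")
        lines.append("")
    lines.append(f"User message: {user_message}")
    return "\n".join(lines)

# Render the prompt the model would see on a second run.
prompt = build_prompt("What is my name?", {"preferred_name": "Sam"})
print(prompt)
# You are a helpful assistant.
# Known memory:
# - preferred_name: Sam
#
# User message: What is my name?
```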
- Wire the memory into a chat loop and persist updates after each turn. Here we store one simple fact from the conversation: the user's preferred name.

```python
memory = load_memory()
user_message = "My name is Sam. Remember that for next time."
prompt = build_prompt(user_message, memory)

response = assistant.generate_reply(messages=[{"role": "user", "content": prompt}])
print(response)

memory["preferred_name"] = "Sam"
save_memory(memory)
```
- To make this useful across sessions, reload the file on startup and reuse it in later prompts. On the second run, the agent will see the saved context without you manually re-entering it.

```python
memory = load_memory()
user_message = "What is my name?"
prompt = build_prompt(user_message, memory)

response = assistant.generate_reply(messages=[{"role": "user", "content": prompt}])
print(response)
```
- If you want a slightly better beginner setup, wrap this into a small helper class. That keeps retrieval and persistence in one place and makes it easier to swap out JSON later.

```python
import json
from pathlib import Path

class SimpleMemoryStore:
    def __init__(self, path: str = "agent_memory.json"):
        self.path = Path(path)

    def load(self) -> dict:
        if self.path.exists():
            return json.loads(self.path.read_text())
        return {}

    def save(self, data: dict) -> None:
        self.path.write_text(json.dumps(data, indent=2))

store = SimpleMemoryStore()
memory = store.load()
memory["last_topic"] = "insurance claims"
store.save(memory)
```
Testing It
Run the script once with a message like “My name is Sam.” The code should create agent_memory.json and store that fact.
Run it again with “What is my name?” and confirm the prompt includes the saved memory before calling AutoGen. If your model is configured correctly, it should answer using the stored value instead of guessing.
Check the JSON file directly to confirm persistence works across process restarts. If you want to test failure cases, delete the file and rerun; the code should fall back to an empty memory dictionary without crashing.
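These checks can be scripted rather than done by hand. A minimal sketch, using a throwaway copy of the load/save logic so it runs standalone; `check_persistence` and the `test_memory.json` filename are illustrative, not part of the tutorial code:

```python
import json
from pathlib import Path

def check_persistence(path_name: str) -> bool:
    path = Path(path_name)

    # Failure case: no file yet should mean empty memory, not a crash.
    if path.exists():
        path.unlink()
    memory = json.loads(path.read_text()) if path.exists() else {}
    assert memory == {}

    # First run stores a fact; a "restart" reloads it from disk.
    memory["preferred_name"] = "Sam"
    path.write_text(json.dumps(memory, indent=2))
    restored = json.loads(path.read_text())
    assert restored["preferred_name"] == "Sam"

    path.unlink()  # clean up the test file
    return True

print(check_persistence("test_memory.json"))  # True
```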
Next Steps
- Replace the JSON file with Redis or Postgres for multi-user applications.
- Add extraction logic so the agent automatically decides what to store as memory.
- Move from prompt injection to retrieval-based memory using embeddings and a vector database.
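The extraction step can start very simply. A minimal sketch, assuming a hypothetical `extract_memory` helper that pattern-matches one kind of fact with a regex; a real version would typically ask the model itself what is worth storing:

```python
import re

def extract_memory(user_message: str) -> dict:
    """Hypothetical extraction rule: capture 'my name is X' style statements."""
    facts = {}
    match = re.search(r"[Mm]y name is (\w+)", user_message)
    if match:
        facts["preferred_name"] = match.group(1)
    return facts

print(extract_memory("My name is Sam. Remember that for next time."))
# {'preferred_name': 'Sam'}
```

The returned dictionary can be merged into the stored memory before calling `save_memory`, so facts accumulate without hard-coding them.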
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.