AutoGen Tutorial (TypeScript): adding memory to agents for beginners
This tutorial shows how to give an AutoGen TypeScript agent a simple persistent memory layer, so it can remember user preferences and prior facts across turns. You need this when a stateless assistant keeps repeating questions or forgetting context, or when it must carry customer-specific details across a session.
What You'll Need
- Node.js 18+
- TypeScript 5+
- npm or pnpm
- An OpenAI API key
- AutoGen for TypeScript installed from npm
- A project with "type": "module" in package.json
Install the packages:
npm install @autogenai/autogen openai dotenv
npm install -D typescript tsx @types/node
Set your API key in .env:
OPENAI_API_KEY=your_key_here
Step-by-Step
- Create a tiny memory store first.
For beginners, start with an in-process map keyed by user ID. This is not your final production store, but it makes the pattern obvious and keeps the example executable.
// memory.ts
export type MemoryRecord = {
  facts: string[];
};

const memory = new Map<string, MemoryRecord>();

export function getMemory(userId: string): MemoryRecord {
  if (!memory.has(userId)) {
    memory.set(userId, { facts: [] });
  }
  return memory.get(userId)!;
}

export function addFact(userId: string, fact: string) {
  const record = getMemory(userId);
  if (!record.facts.includes(fact)) {
    record.facts.push(fact);
  }
}
- Build the agent and inject memory into every prompt.
The simplest useful pattern is: load stored facts before each response, then ask the model to use them as durable context. That gives you “memory” without needing a custom vector database on day one.
// agent.ts
import "dotenv/config";
import OpenAI from "openai";
import { getMemory } from "./memory.js";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function chat(userId: string, message: string) {
  const memory = getMemory(userId);
  const systemPrompt = [
    "You are a helpful assistant.",
    "Use the remembered facts below when relevant.",
    `Remembered facts: ${memory.facts.length ? memory.facts.join(" | ") : "none"}`
  ].join("\n");

  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: message }
    ]
  });

  return response.choices[0]?.message?.content ?? "";
}
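If you want to see exactly what context the agent injects, the prompt-assembly step can be pulled out as a pure function and run without an API key. This is a sketch for inspection only; `buildSystemPrompt` is not one of the tutorial's files, just the same template isolated so it is testable:

```typescript
// Sketch: the prompt-assembly logic from agent.ts as a standalone pure function,
// so you can inspect the injected context without calling the API.
type MemoryRecord = { facts: string[] };

function buildSystemPrompt(memory: MemoryRecord): string {
  return [
    "You are a helpful assistant.",
    "Use the remembered facts below when relevant.",
    `Remembered facts: ${memory.facts.length ? memory.facts.join(" | ") : "none"}`,
  ].join("\n");
}

// With no stored facts, the prompt falls back to "none".
console.log(buildSystemPrompt({ facts: [] }));

// With stored facts, they are joined into one line of durable context.
console.log(buildSystemPrompt({ facts: ["My name is Sam", "I prefer short answers"] }));
```

Printing the prompt like this is also a quick way to debug a "forgetful" agent: if the facts never appear here, the problem is the store, not the model.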
- Add a small extractor that stores new facts after each turn.
You do not want the model to guess what should be remembered. Instead, use explicit rules so only stable user preferences and profile data get saved.
// extract.ts
import { addFact } from "./memory.js";

export function storeFactsFromMessage(userId: string, message: string) {
  const lower = message.toLowerCase();
  if (lower.includes("my name is")) {
    addFact(userId, message.trim());
  }
  if (lower.includes("i prefer")) {
    addFact(userId, message.trim());
  }
  if (lower.includes("call me")) {
    addFact(userId, message.trim());
  }
}
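These rules are easy to test in isolation. Here is a condensed, self-contained version of the same keyword matching, written as a data-driven trigger list so adding rules means adding strings (the `TRIGGERS` array and `extractFacts` name are illustrative, not part of the tutorial's files):

```typescript
// Sketch: the same keyword rules as a data-driven trigger list, runnable on its own.
const TRIGGERS = ["my name is", "i prefer", "call me"];

function extractFacts(message: string): string[] {
  const lower = message.toLowerCase();
  // Store the whole message once if any trigger phrase appears in it.
  return TRIGGERS.some((t) => lower.includes(t)) ? [message.trim()] : [];
}

console.log(extractFacts("Hi, my name is Sam."));  // matched: stored verbatim
console.log(extractFacts("What's the weather?"));  // no trigger: nothing stored
```

Note that matching on the lowercased message but storing the original preserves the user's own wording in memory.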
- Wire everything together in a runnable script.
This script simulates a conversation and shows that the second turn can use information learned earlier. In production, you would call storeFactsFromMessage() after each user message in your chat loop.
// index.ts
import "dotenv/config";
import { chat } from "./agent.js";
import { storeFactsFromMessage } from "./extract.js";

async function main() {
  const userId = "user-123";

  const firstMessage = "Hi, my name is Sam and I prefer short answers.";
  storeFactsFromMessage(userId, firstMessage);
  console.log("User:", firstMessage);
  console.log("Assistant:", await chat(userId, firstMessage));

  const secondMessage = "What do you remember about me?";
  console.log("\nUser:", secondMessage);
  console.log("Assistant:", await chat(userId, secondMessage));
}

main().catch(console.error);
- Make sure TypeScript runs cleanly with ESM imports.
If your project does not already have an ESM setup, use this minimal package.json so the .js import paths work correctly at runtime after TypeScript compilation or with tsx.
{
  "name": "autogen-memory-tutorial",
  "type": "module",
  "scripts": {
    "dev": "tsx index.ts"
  },
  "dependencies": {
    "@autogenai/autogen": "^0.0.0",
    "dotenv": "^16.4.5",
    "openai": "^4.0.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.19.0",
    "typescript": "^5.0.0"
  }
}
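If you later compile with tsc instead of running through tsx, you will also need a tsconfig.json. A minimal one along these lines (my assumption, not shown in the tutorial) keeps the explicit .js import specifiers resolving under Node's ESM rules:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "outDir": "dist"
  }
}
```

The "NodeNext" module settings are what make TypeScript accept and preserve the `./memory.js`-style specifiers the source files use.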
Testing It
Run npm run dev and check that the assistant mentions remembered details like your name or preference on the second prompt. If it ignores them, inspect whether storeFactsFromMessage() actually saved anything for that user ID.
Try two separate user IDs and confirm their memories do not mix. That matters in real systems because cross-user leakage is a hard failure.
Then test a third turn like “Do you remember my preference?” and make sure the response uses the stored fact rather than asking again.
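You can check the isolation property without any API calls by exercising the store pattern directly. A self-contained sketch (it reimplements the Map-keyed-by-user-ID idea inline so it runs on its own):

```typescript
// Sketch: two user IDs must never see each other's facts,
// and duplicate facts for one user must be stored only once.
const memory = new Map<string, string[]>();

function addFact(userId: string, fact: string) {
  const facts = memory.get(userId) ?? [];
  if (!facts.includes(fact)) facts.push(fact);  // dedup, as in memory.ts
  memory.set(userId, facts);
}

addFact("user-a", "My name is Sam");
addFact("user-b", "My name is Alex");
addFact("user-a", "My name is Sam");  // duplicate: ignored

console.log(memory.get("user-a"));  // only Sam's fact, stored once
console.log(memory.get("user-b"));  // only Alex's fact
```

If this invariant ever fails in a real store (a shared key prefix, a missing WHERE clause), you have the cross-user leakage failure described above.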
Next Steps
- Replace the in-memory Map with Redis or Postgres so memory survives process restarts.
- Add a proper fact extraction step using an LLM with schema-constrained output instead of simple keyword rules.
- Store different memory types separately:
  - profile facts
  - conversation summaries
  - task state
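As a stepping stone toward Redis or Postgres, the first bullet can be prototyped with a JSON file so memory survives restarts. A sketch under my own assumptions: the `memory.json` path and the flat user-to-facts record shape are choices of this example, not part of the tutorial:

```typescript
// Sketch: file-backed memory so facts survive a process restart.
// Swapping load/save for Redis or Postgres calls keeps the same interface.
import { existsSync, readFileSync, writeFileSync } from "node:fs";

type Store = Record<string, string[]>;
const FILE = "memory.json";  // assumed path for this example

function load(): Store {
  return existsSync(FILE) ? JSON.parse(readFileSync(FILE, "utf8")) : {};
}

function save(store: Store) {
  writeFileSync(FILE, JSON.stringify(store, null, 2));
}

function addFact(userId: string, fact: string) {
  const store = load();
  const facts = store[userId] ?? [];
  if (!facts.includes(fact)) facts.push(fact);
  store[userId] = facts;
  save(store);
}

function getFacts(userId: string): string[] {
  return load()[userId] ?? [];
}

addFact("user-123", "I prefer short answers");
console.log(getFacts("user-123"));
```

Reading and rewriting the whole file on every call is fine at tutorial scale; a real backend replaces load/save with per-user queries.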
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.