LangChain Tutorial (TypeScript): adding memory to agents for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add conversational memory to a LangChain agent in TypeScript so it can remember prior turns, keep context across messages, and behave like a real assistant instead of a stateless API call. You need this when your agent must answer follow-up questions, carry user preferences forward, or handle multi-step workflows without asking the same thing twice.

What You'll Need

  • Node.js 18+
  • TypeScript project with ts-node or tsx
  • An OpenAI API key in OPENAI_API_KEY
  • These packages:
    • langchain
    • @langchain/openai
    • @langchain/core
    • dotenv
    • typescript
    • tsx or ts-node

Install them like this:

npm install langchain @langchain/openai @langchain/core dotenv
npm install -D typescript tsx @types/node

Step-by-Step

  1. Start with a basic TypeScript setup and load your API key from .env. This keeps the example production-friendly and avoids hardcoding secrets.
// src/index.ts
import "dotenv/config";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is missing");
}

console.log("Environment ready");
  2. Build a simple chat model first. We’ll use it later inside an agent, but it helps to confirm the model works before wiring memory into the agent loop.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function main() {
  const res = await model.invoke("Say hello in one short sentence.");
  console.log(res.content);
}

main();
  3. Add memory using BufferMemory and connect it to a conversation chain. This is the core pattern: every turn gets saved under a memory key, then injected back into the next prompt.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const memory = new BufferMemory({
  memoryKey: "history",
  returnMessages: true,
});

const chain = new ConversationChain({
  llm: model,
  memory,
});

async function main() {
  console.log(await chain.invoke({ input: "My name is Ada." }));
  console.log(await chain.invoke({ input: "What is my name?" }));
}

main();
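Each invoke call here resolves to a plain object; ConversationChain puts the model's reply under the response key, so the second log should include the name Ada.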
  4. Use that same memory pattern inside an agent with tools. For intermediate developers, this is the important part: agents need both tool access and conversation state; otherwise they forget prior context between tool calls and follow-up questions.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { BufferMemory } from "langchain/memory";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { DynamicTool } from "@langchain/core/tools";

const tools = [
  new DynamicTool({
    name: "lookup_policy_status",
    description: "Returns the status of an insurance policy by policy number.",
    func: async (policyNumber: string) => {
      return `Policy ${policyNumber} is active and paid through December.`;
    },
  }),
];

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const memory = new BufferMemory({
  memoryKey: "chat_history",
  returnMessages: true,
});

async function main() {
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
    memory,
    verbose: true,
  });

  const first = await executor.invoke({ input: "Check policy A12345." });
  console.log(first.output);

  const second = await executor.invoke({
    input: "What did you just check for me?",
  });
  console.log(second.output);
}

main();
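If memory is wired correctly, the answer to the second question should come straight out of chat_history; the agent shouldn't need to call the tool again just to repeat what it already checked.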
  5. If you want better control over what gets stored, lean on message-based history: with returnMessages: true, BufferMemory stores structured chat messages instead of one flattened string. This is the form you’ll usually want when building production agents because it maps cleanly to chat transcripts and avoids prompt-format surprises.
import { BufferMemory } from "langchain/memory";

const memory = new BufferMemory({
  memoryKey: "chat_history",
  returnMessages: true,
});

// Optional helper if you want to inspect state during debugging
async function dumpHistory(memoryInstance: BufferMemory) {
  const vars = await memoryInstance.loadMemoryVariables({});
  console.log(JSON.stringify(vars.chat_history, null, 2));
}
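If you need to seed or assert on state in tests, BufferMemory's saveContext method writes one exchange directly. A small sketch using the memory instance and dumpHistory helper from above:

async function seedAndInspect() {
  // Record one prior exchange by hand. With a single key on each side,
  // BufferMemory infers which value to store as the human and AI turn.
  await memory.saveContext(
    { input: "My policy number is A12345." },
    { output: "Got it. I'll remember policy A12345." }
  );
  // Should print one HumanMessage and one AIMessage.
  await dumpHistory(memory);
}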

Testing It

Run the file with tsx or your preferred TypeScript runner:

npx tsx src/index.ts

Then ask two related questions in sequence. The second response should reflect earlier context, such as your name or the last policy number checked.
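For longer sessions, a tiny read-eval loop makes this easier. A sketch, assuming the executor from step 4 is in scope:

import * as readline from "node:readline/promises";

// Each turn goes through the same executor (and therefore the same memory
// instance), so follow-up questions can see earlier turns.
async function repl(): Promise<void> {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });
  for (;;) {
    const input = await rl.question("> ");
    if (input === "exit") break;
    const result = await executor.invoke({ input });
    console.log(result.output);
  }
  rl.close();
}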

If the agent forgets everything between calls, check these three things first:

  • memoryKey matches what the chain or agent expects
  • returnMessages is set correctly for chat-based prompts
  • You are reusing the same memory instance across invocations (see the sketch after this list)
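That last point is the most common failure, so here is the shape of the bug and the fix. The handleTurn functions are hypothetical request handlers; the only thing that matters is where the BufferMemory gets constructed:

import { BufferMemory } from "langchain/memory";

// Broken: a fresh BufferMemory per call means every turn starts blank.
async function handleTurnBroken(input: string) {
  const memory = new BufferMemory({
    memoryKey: "chat_history",
    returnMessages: true,
  });
  // ...build the executor with this `memory` and invoke, as in step 4...
}

// Fixed: construct the memory once and reuse the same instance every turn.
const sharedMemory = new BufferMemory({
  memoryKey: "chat_history",
  returnMessages: true,
});

async function handleTurn(input: string) {
  // ...build or cache an executor that holds `sharedMemory`, then invoke...
}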

For agents specifically, keep verbose: true on while testing so you can see whether history is being injected before each tool decision.

Next Steps

  • Replace BufferMemory with persistent storage like Redis or Postgres for multi-session apps.
  • Learn about RunnableWithMessageHistory for newer LangChain patterns; a minimal sketch follows this list.
  • Add summarization memory once your transcript gets too large for the model context window.
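As a taste of that newer pattern, here's a minimal RunnableWithMessageHistory sketch. The session map, prompt, and demo function are illustrative choices, not the only way to wire it:

import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

// One in-memory history per session id; swap this map for a persistent
// store (Redis, Postgres) in multi-session apps.
const sessions = new Map<string, ChatMessageHistory>();

const chainWithHistory = new RunnableWithMessageHistory({
  runnable: prompt.pipe(model),
  getMessageHistory: (sessionId: string) => {
    if (!sessions.has(sessionId)) {
      sessions.set(sessionId, new ChatMessageHistory());
    }
    return sessions.get(sessionId)!;
  },
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

async function demo() {
  const config = { configurable: { sessionId: "user-1" } };
  await chainWithHistory.invoke({ input: "My name is Ada." }, config);
  const res = await chainWithHistory.invoke({ input: "What is my name?" }, config);
  console.log(res.content);
}

demo();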

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

