LangChain Tutorial (TypeScript): Adding Memory to Agents for Beginners

By Cyprian Aarons | Updated 2026-04-21

This tutorial shows you how to give a LangChain TypeScript agent short-term memory so it can remember earlier turns in the same conversation. You need this whenever your agent should stop being stateless and start carrying context across turns, like a real support bot, intake assistant, or workflow helper.

What You'll Need

  • Node.js 18+
  • A TypeScript project
  • langchain
  • @langchain/openai
  • @langchain/core (the snippets import tools and prompt helpers from it directly)
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with LangChain agents and tools

Install the packages:

npm install langchain @langchain/openai @langchain/core
npm install -D typescript tsx @types/node

Set your environment variable:

export OPENAI_API_KEY="your_api_key_here"

Step-by-Step

  1. Start with a basic agent that can use tools. Memory in LangChain is usually attached to the agent executor, not the model itself, so the first step is to build a normal tool-using agent.
import { ChatOpenAI } from "@langchain/openai";
import { DynamicTool } from "@langchain/core/tools";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const lookupPolicy = new DynamicTool({
  name: "lookup_policy",
  description: "Look up a policy by policy number",
  func: async (policyNumber: string) => {
    return `Policy ${policyNumber}: active, premium paid`;
  },
});

const tools = [lookupPolicy];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
});
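
At this point the agent runs, but every call starts from a blank slate. You can check the stateless baseline yourself; nothing connects the two calls below, so the second question cannot be answered from the first:

// No memory attached yet: each invoke() starts a fresh conversation.
const res1 = await executor.invoke({
  input: "My policy number is POL123. Check it for me.",
});
console.log(res1.output);

const res2 = await executor.invoke({
  input: "What policy number did I just give you?",
});
// The agent cannot recall POL123 here: it never saw the first turn.
console.log(res2.output);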
  2. Add conversational memory using BufferMemory, and switch the agent type to chat-conversational-react-description. The zero-shot agent's prompt has no slot for prior messages, so attaching a chat_history memory to it fails; the conversational agent type expects that key. BufferMemory keeps prior messages in the session and makes them available to the agent on later calls.
import { BufferMemory } from "langchain/memory";

const memory = new BufferMemory({
  memoryKey: "chat_history",
  returnMessages: true,
});

const executorWithMemory = await initializeAgentExecutorWithOptions(
  tools,
  model,
  {
    agentType: "chat-conversational-react-description",
    memory,
    verbose: true,
  }
);
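
To confirm what the memory holds at any point, call loadMemoryVariables; it returns the stored messages under your memoryKey:

// Debug check: logs the messages currently stored under "chat_history".
const vars = await memory.loadMemoryVariables({});
console.log(vars.chat_history);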
  3. Run two turns against the same executor instance. The key point is that you reuse the same memory object across calls; if you create a new one each time, the agent forgets everything.
const first = await executorWithMemory.invoke({
  input: "My policy number is POL123. Check it for me.",
});

console.log("Turn 1:", first.output);

const second = await executorWithMemory.invoke({
  input: "What policy number did I just give you?",
});

console.log("Turn 2:", second.output);
  4. If you want explicit control over how history is presented to the model, use a custom prompt with a MessagesPlaceholder. This is useful when you want the agent to see history in a prompt you own instead of relying only on executor defaults.
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful insurance assistant."],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

// The placeholder name must match the memory's memoryKey ("chat_history" here).
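
The executor API above does not accept this prompt directly. One way to drive a custom prompt with per-session history is the RunnableWithMessageHistory wrapper from @langchain/core. The sketch below chains the prompt into the model without tools; the Map-based session store and the "demo-session" ID are illustrative choices, not LangChain APIs:

import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

// One in-memory history per session ID; a Map stands in for real storage.
const histories = new Map<string, ChatMessageHistory>();

const chainWithHistory = new RunnableWithMessageHistory({
  runnable: prompt.pipe(model),
  getMessageHistory: (sessionId) => {
    if (!histories.has(sessionId)) {
      histories.set(sessionId, new ChatMessageHistory());
    }
    return histories.get(sessionId)!;
  },
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

const reply = await chainWithHistory.invoke(
  { input: "My policy number is POL123." },
  { configurable: { sessionId: "demo-session" } }
);
console.log(reply.content);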
  5. Put it together in one runnable file. This version is simple enough for beginners but still matches how you would wire memory into a real TypeScript service.
import { ChatOpenAI } from "@langchain/openai";
import { DynamicTool } from "@langchain/core/tools";
import { BufferMemory } from "langchain/memory";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

async function main() {
  const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

  const lookupPolicy = new DynamicTool({
    name: "lookup_policy",
    description: "Look up a policy by policy number",
    func: async (policyNumber: string) =>
      `Policy ${policyNumber}: active, premium paid`,
  });

  const memory = new BufferMemory({
    memoryKey: "chat_history",
    returnMessages: true,
  });

  const executor = await initializeAgentExecutorWithOptions([lookupPolicy], model, {
    agentType: "chat-conversational-react-description",
    memory,
    verbose: true,
  });

  console.log(
    await executor.invoke({ input: "My policy number is POL123. Check it." })
  );
  console.log(
    await executor.invoke({ input: "What policy number did I give you?" })
  );
}

main();

Testing It

Run the script with tsx so TypeScript executes directly without a build step:

npx tsx src/agent-memory.ts

On the first turn, the agent should respond to your policy lookup request and may call the tool. On the second turn, it should answer with POL123 or otherwise reference that earlier message instead of asking again.

If it forgets, check these three things:

  • You reused the same memory instance across both calls
  • You set returnMessages: true
  • You are calling .invoke() on the same executor object

For multi-user apps, don’t share one global memory object across all users. Create one memory instance per conversation session and store it by session ID.
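
A minimal sketch of that pattern, using a plain Map as the session store (getMemoryForSession is a hypothetical helper, not a LangChain API):

import { BufferMemory } from "langchain/memory";

// One BufferMemory per conversation, keyed by session ID.
const sessionMemories = new Map<string, BufferMemory>();

function getMemoryForSession(sessionId: string): BufferMemory {
  let memory = sessionMemories.get(sessionId);
  if (!memory) {
    memory = new BufferMemory({
      memoryKey: "chat_history",
      returnMessages: true,
    });
    sessionMemories.set(sessionId, memory);
  }
  return memory;
}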

Next Steps

  • Try ConversationSummaryMemory when chats get too long for raw buffer history (a sketch follows this list).
  • Add persistent storage for chat history so memory survives process restarts.
  • Wire memory into an Express or Fastify route and scope it per user session.
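
For the first bullet, ConversationSummaryMemory is a near drop-in replacement for BufferMemory that asks the model to maintain a rolling summary instead of replaying every message. A minimal sketch:

import { ConversationSummaryMemory } from "langchain/memory";
import { ChatOpenAI } from "@langchain/openai";

// Uses the LLM to summarize prior turns instead of storing them verbatim.
const summaryMemory = new ConversationSummaryMemory({
  llm: new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 }),
  memoryKey: "chat_history",
  returnMessages: true,
});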

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
