LangGraph Tutorial (TypeScript): adding memory to agents for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add persistent memory to a LangGraph agent in TypeScript using a checkpointer and thread IDs. You need this when you want an agent to remember prior turns across multiple requests instead of treating every message like a brand-new conversation.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • An OpenAI API key
  • Packages:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • dotenv
  • A project set up with "type": "module" or equivalent ESM support
  • A .env file with OPENAI_API_KEY=...

Step-by-Step

  1. First, install the dependencies and create a small TypeScript project. This tutorial uses the built-in LangGraph memory pattern: a graph, a checkpointer, and a stable thread_id.
npm init -y
npm install @langchain/langgraph @langchain/openai @langchain/core dotenv
npm install -D typescript tsx @types/node
  2. Create your agent graph with a single model node. The important part is that the graph accepts and returns messages, because the checkpointer will store those messages between runs.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { MessagesAnnotation, StateGraph, MemorySaver } from "@langchain/langgraph";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");
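Under the hood, MessagesAnnotation merges a node's return value into state with an append-style reducer: the messages you return are concatenated onto the stored history instead of replacing it (the real reducer also handles message IDs and deduplication). A toy sketch of that merge behavior with plain objects, not the actual LangGraph implementation:

```typescript
type Msg = { role: string; content: string };

// Append-style reducer: a simplified picture of how MessagesAnnotation
// combines existing state with what a node returns.
function addMessages(existing: Msg[], update: Msg[]): Msg[] {
  return [...existing, ...update];
}

const history: Msg[] = [{ role: "user", content: "My name is Priya." }];
const afterNode = addMessages(history, [
  { role: "assistant", content: "Nice to meet you, Priya!" },
]);

console.log(afterNode.length); // 2: the original turn plus the node's reply
```

This is why the node returns { messages: [response] } rather than the full history: the reducer takes care of the concatenation.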
  3. Add a checkpointer and compile the graph. MemorySaver keeps conversation state in process memory, which is enough for local development and beginner testing.
const checkpointer = new MemorySaver();
const app = workflow.compile({ checkpointer });
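Conceptually, a checkpointer is a store keyed by thread_id: before each run LangGraph loads that thread's saved state, and after the run it writes the updated state back. A toy illustration of the idea (not the real MemorySaver API, which also versions checkpoints):

```typescript
type Msg = { role: string; content: string };

// Toy stand-in for a checkpointer: persists per-thread message state
// between invocations, in memory.
class ToyCheckpointer {
  private store = new Map<string, Msg[]>();

  load(threadId: string): Msg[] {
    return this.store.get(threadId) ?? [];
  }

  save(threadId: string, messages: Msg[]): void {
    this.store.set(threadId, messages);
  }
}

const cp = new ToyCheckpointer();
cp.save("customer-123", [{ role: "user", content: "My name is Priya." }]);

console.log(cp.load("customer-123").length); // 1: the saved turn
console.log(cp.load("customer-456").length); // 0: a different thread starts empty
```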
  4. Run the same graph twice with the same thread_id. On the second call, LangGraph loads the previous messages automatically, so the model can answer with context from earlier turns.
async function main() {
  const config = {
    configurable: {
      thread_id: "customer-123",
    },
  };

  const first = await app.invoke(
    { messages: [{ role: "user", content: "My name is Priya." }] },
    config
  );

  console.log("First response:", first.messages.at(-1)?.content);

  const second = await app.invoke(
    { messages: [{ role: "user", content: "What is my name?" }] },
    config
  );

  console.log("Second response:", second.messages.at(-1)?.content);
}

// Top-level await works here because the project uses "type": "module".
await main();
  5. If you want to inspect what LangGraph stored, fetch the saved state directly. This is useful when debugging memory bugs, especially when agents seem to forget earlier turns or duplicate messages.
async function inspectState() {
  const config = {
    configurable: {
      thread_id: "customer-123",
    },
  };

  const state = await app.getState(config);
  console.log(
    state.values.messages.map((m) => ({
      role: m._getType(), // "human", "ai", etc.
      content: m.content,
    }))
  );
}

await inspectState();

Testing It

Run the script and confirm the first response answers only from the current input, while the second call in the same run asks a follow-up that depends on prior context, like “What is my name?”

If memory is working, the second response should reference “Priya” without you sending that name again. If it does not, check three things: you used the same thread_id on both calls, you compiled the graph with a checkpointer, and your node returns { messages: [...] }. Also remember that MemorySaver keeps state in process memory only, so restarting the script starts a fresh history; persistence across restarts requires a durable checkpointer.

For a quick sanity check, change thread_id to something else like "customer-456" and rerun. That new thread should have no access to the earlier conversation.
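Because the thread_id is the only thing tying calls together, it helps to build the config in one place. A tiny helper sketch (threadConfig is a hypothetical name, not a LangGraph API):

```typescript
// Hypothetical helper: builds the invoke config for one conversation.
// Every distinct thread_id maps to its own checkpoint history.
function threadConfig(threadId: string) {
  return { configurable: { thread_id: threadId } };
}

const priya = threadConfig("customer-123");
const fresh = threadConfig("customer-456");

// Passing `fresh` to app.invoke would start an empty conversation,
// while `priya` keeps resuming the original one.
console.log(priya.configurable.thread_id); // "customer-123"
console.log(fresh.configurable.thread_id); // "customer-456"
```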

Next Steps

  • Replace MemorySaver with a durable checkpointer backed by Postgres or another persistent store.
  • Add tool calling so your agent can remember facts and also take actions.
  • Learn how to trim or summarize message history so long conversations do not blow up token usage.
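The last item can be sketched as a small trimming step that keeps the system prompt plus the most recent turns; this hand-rolled trimHistory is an illustration only (@langchain/core also ships a trimMessages helper for the real thing):

```typescript
type Msg = { role: string; content: string };

// Keep any system messages plus the last `keep` conversational turns,
// so long threads stay under the model's context budget.
function trimHistory(messages: Msg[], keep: number): Msg[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-keep)];
}

const history: Msg[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "My name is Priya." },
  { role: "assistant", content: "Hi Priya!" },
  { role: "user", content: "What is my name?" },
  { role: "assistant", content: "Your name is Priya." },
];

const trimmed = trimHistory(history, 2);
console.log(trimmed.length); // 3: the system message plus the last two turns
```

A production trimming step would count tokens rather than messages, but the shape is the same: reduce what you feed the model while preserving the instructions it must not lose.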


By Cyprian Aarons, AI Consultant at Topiax.
