LangGraph Tutorial (TypeScript): adding memory to agents for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add short-term memory to a LangGraph agent in TypeScript so it can remember prior turns across a conversation. You need this when your agent must keep context between messages instead of treating every request like a fresh session.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with tsconfig.json
  • @langchain/langgraph
  • @langchain/openai
  • @langchain/core
  • An OpenAI API key set as OPENAI_API_KEY
  • A terminal and a text editor

Step-by-Step

  1. Start by installing the packages and setting up your environment. This example uses OpenAI chat models and LangGraph’s built-in checkpointing to persist state between turns.
npm install @langchain/langgraph @langchain/openai @langchain/core
  2. Define the graph state, model, and checkpointer. The important part is the MemorySaver, which stores the conversation state keyed by thread_id.
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph, MemorySaver, MessagesAnnotation } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const checkpointer = new MemorySaver();

const graph = new StateGraph(MessagesAnnotation)
  .addNode("assistant", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge("__start__", "assistant")
  .addEdge("assistant", "__end__")
  .compile({ checkpointer });
  3. Invoke the graph with a stable thread_id. If you reuse the same thread ID, LangGraph loads prior messages from memory and appends the new turn to that conversation.
async function run() {
  const config = { configurable: { thread_id: "customer-123" } };

  const first = await graph.invoke(
    { messages: [new HumanMessage("My name is Sam. I handle claims for Acme Insurance.")] },
    config
  );

  console.log("First reply:", first.messages.at(-1)?.content);

  const second = await graph.invoke(
    { messages: [new HumanMessage("What is my name and company?")] },
    config
  );

  console.log("Second reply:", second.messages.at(-1)?.content);
}

run().catch(console.error);
  4. If you want to inspect what LangGraph is storing, read the thread state directly. This is useful when debugging why an agent forgot something or repeated itself.
async function inspect() {
  const config = { configurable: { thread_id: "customer-123" } };
  const state = await graph.getState(config);

  console.log(
    state.values.messages.map((m) => ({
      type: m._getType(),
      content: m.content,
    }))
  );
}

inspect().catch(console.error);
  5. Wrap the graph in a small reusable function so your application code stays clean. In production, this is where you’d attach user IDs, tenant IDs, or case IDs to the thread identifier.
export async function askAgent(threadId: string, input: string) {
  const result = await graph.invoke(
    { messages: [new HumanMessage(input)] },
    { configurable: { thread_id: threadId } }
  );

  return result.messages.at(-1)?.content;
}
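In production you’ll typically compose the thread identifier from values you already track. Here is a minimal sketch; the buildThreadId helper and its tenant/user/case naming are illustrative assumptions, not part of LangGraph:

```typescript
// Hypothetical helper: compose a stable thread ID from business identifiers.
// Reusing the same inputs yields the same thread, so the conversation resumes.
export function buildThreadId(
  tenantId: string,
  userId: string,
  caseId?: string
): string {
  const parts = [tenantId, userId];
  if (caseId) parts.push(caseId); // optional scoping for per-case threads
  return parts.join(":");
}

// e.g. askAgent(buildThreadId("acme", "sam", "claim-77"), "Summarize this claim");
```

Deterministic IDs like this also make it easy to find and purge a customer’s conversation state later.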

Testing It

Run the script once with a prompt like “My name is Sam.” Then run it again with “What is my name?” using the same thread_id. If memory is wired correctly, the second response should reference Sam without you resending that detail.

Change the thread_id to something else like "customer-456" and repeat the test. The agent should no longer know about Sam because it’s now reading from a different conversation thread.

If you’re getting fresh answers every time, check two things first: whether you’re reusing the same thread_id, and whether MemorySaver is passed into .compile({ checkpointer }). Those two pieces are what make persistence work.
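To see why the thread ID does the heavy lifting, here is a toy in-memory illustration of the keying behavior. This is not LangGraph’s actual MemorySaver implementation, just a sketch of the idea: state lives in a map keyed by thread_id, so the same ID accumulates history while a new ID starts empty.

```typescript
type Msg = { role: "human" | "ai"; content: string };

// Toy stand-in for a checkpointer — NOT LangGraph's MemorySaver,
// only an illustration of state keyed by thread_id.
class ToyCheckpointer {
  private store = new Map<string, Msg[]>();

  // Return the saved history for a thread, or an empty one for a new thread.
  load(threadId: string): Msg[] {
    return this.store.get(threadId) ?? [];
  }

  // Append a turn to the thread's history and persist it.
  append(threadId: string, turn: Msg): Msg[] {
    const history = [...this.load(threadId), turn];
    this.store.set(threadId, history);
    return history;
  }
}

const toy = new ToyCheckpointer();
toy.append("customer-123", { role: "human", content: "My name is Sam." });
toy.append("customer-123", { role: "ai", content: "Hi Sam!" });

console.log(toy.load("customer-123").length); // 2: same thread accumulates
console.log(toy.load("customer-456").length); // 0: new thread starts empty
```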

Next Steps

  • Add summarization for long conversations so your message history does not grow forever.
  • Replace MemorySaver with a persistent store backed by Redis or Postgres for multi-instance deployments.
  • Add tool calling so your memory-aware agent can combine conversation history with live system data.
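On the first point, a simple precursor to full summarization is trimming: keep any system prompt plus the last N turns before invoking the model. A minimal sketch with plain message objects follows; the trimHistory helper is an assumption for illustration, not a LangGraph API:

```typescript
type ChatMsg = { role: "system" | "human" | "ai"; content: string };

// Keep the system prompt (if any) plus the most recent `keepLast` turns.
// A real setup would summarize the dropped turns instead of discarding them.
function trimHistory(messages: ChatMsg[], keepLast: number): ChatMsg[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-keepLast)];
}

const history: ChatMsg[] = [
  { role: "system", content: "You are a claims assistant." },
  { role: "human", content: "My name is Sam." },
  { role: "ai", content: "Hi Sam!" },
  { role: "human", content: "What is my name?" },
];

console.log(trimHistory(history, 2).length); // 3: system + last two turns
```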

By Cyprian Aarons, AI Consultant at Topiax.
