How to Fix 'memory not persisting' in LangGraph (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
memory-not-persisting · langgraph · typescript

When LangGraph “memory is not persisting,” it usually means your graph is running fine, but its state is never saved under a stable identity it can resume from. In TypeScript, this shows up most often when you invoke a graph without a thread_id, forget to compile with a checkpointer, or accidentally recreate the memory layer on every request.

The symptom is simple: the first call works, but the second call behaves like nothing was remembered. You’ll often see the messages array in state coming back empty (or a MessagesPlaceholder in a prompt rendering with no history), or other state values resetting between invocations.

The Most Common Cause

The #1 cause is missing or unstable thread identity when using a checkpointer. LangGraph persists state per thread, so if you don’t pass configurable.thread_id, or you generate a new one every time, there is nothing to resume from.

Here’s the broken pattern:

import { StateGraph, MemorySaver, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

const graph = new StateGraph(MessagesAnnotation)
  .addNode("chat", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge("__start__", "chat")
  .addEdge("chat", "__end__");

const app = graph.compile({ checkpointer: new MemorySaver() });

// Broken: no thread_id
await app.invoke({
  messages: [{ role: "user", content: "My name is Sam" }],
});

await app.invoke({
  messages: [{ role: "user", content: "What is my name?" }],
});

And here’s the fixed version:

import { StateGraph, MemorySaver, MessagesAnnotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const checkpointer = new MemorySaver();

const graph = new StateGraph(MessagesAnnotation)
  .addNode("chat", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge("__start__", "chat")
  .addEdge("chat", "__end__");

const app = graph.compile({ checkpointer });

// Fixed: stable thread_id across calls
const config = {
  configurable: {
    thread_id: "customer-123",
  },
};

await app.invoke(
  {
    messages: [{ role: "user", content: "My name is Sam" }],
  },
  config
);

await app.invoke(
  {
    messages: [{ role: "user", content: "What is my name?" }],
  },
  config
);

If you are using MessagesAnnotation or any custom state schema, the rule is the same: persistence happens per thread. No stable thread_id, no memory.
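
To see the per-thread rule concretely, here is a minimal sketch reusing the app from the fixed example above: two different thread_id values get completely separate histories.

// Threads are isolated. This thread remembers Sam...
await app.invoke(
  { messages: [{ role: "user", content: "My name is Sam" }] },
  { configurable: { thread_id: "customer-123" } }
);

// ...but a different thread starts from an empty history,
// so the model cannot answer this from saved state
await app.invoke(
  { messages: [{ role: "user", content: "What is my name?" }] },
  { configurable: { thread_id: "customer-456" } }
);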

Other Possible Causes

1. You compiled without a checkpointer

A graph can run perfectly without persistence support. In that case, each invocation is stateless.

// Broken
const app = graph.compile();

// Fixed
const app = graph.compile({
  checkpointer: new MemorySaver(),
});

If you see behavior like:

  • first request succeeds
  • second request starts from scratch

this is usually the reason.

2. You are creating a new MemorySaver on every request

This looks fine in local tests but fails in real apps, because every request gets a brand-new, empty store.

// Broken
export async function handleRequest() {
  const app = graph.compile({ checkpointer: new MemorySaver() });
  return app.invoke(input, config);
}

Fix it by creating one shared instance:

// Fixed
const checkpointer = new MemorySaver();
const app = graph.compile({ checkpointer });

export async function handleRequest() {
  return app.invoke(input, config);
}

If you instantiate the saver inside a serverless handler, every invocation starts from an empty store; even a module-level MemorySaver only lives as long as the warm instance, so its contents are lost on every cold start.
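
Because MemorySaver keeps everything in process memory, no amount of sharing survives a cold start. For serverless deployments, the real fix is a durable checkpointer. Here is a sketch, assuming you have the @langchain/langgraph-checkpoint-postgres package installed and a DATABASE_URL environment variable pointing at your database:

import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

// Created once per warm instance, but the state itself lives in Postgres,
// so it survives cold starts and is shared across instances
const checkpointer = PostgresSaver.fromConnString(process.env.DATABASE_URL!);
await checkpointer.setup(); // run once to create the checkpoint tables

const app = graph.compile({ checkpointer });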

3. Your thread_id changes between calls

A common mistake is generating a fresh ID on every call instead of once per conversation.

// Broken
await app.invoke(input1, {
  configurable: { thread_id: crypto.randomUUID() },
});

await app.invoke(input2, {
  configurable: { thread_id: crypto.randomUUID() },
});

Use one ID per conversation/session:

// Fixed
const threadId = "session-abc-001";

await app.invoke(input1, {
  configurable: { thread_id: threadId },
});

await app.invoke(input2, {
  configurable: { thread_id: threadId },
});

If you’re behind an API gateway or frontend session layer, map that session key to thread_id.
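
One way to keep that mapping honest is to derive thread_id from the session key in a single place at the edge of your app. A sketch, with handleChat as a hypothetical entry point that receives the session identifier from your auth layer:

// sessionId comes from your session layer (cookie, JWT claim, etc.)
export async function handleChat(sessionId: string, text: string) {
  return app.invoke(
    { messages: [{ role: "user", content: text }] },
    // Same session, same thread: every call resumes the same saved state
    { configurable: { thread_id: `session-${sessionId}` } }
  );
}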

4. Your node returns state in the wrong shape

LangGraph will not persist what it cannot merge into the schema. A node that returns raw text instead of updating the expected field can look like “memory loss.”

// Broken
.addNode("chat", async (state) => {
  const response = await model.invoke(state.messages);
  return response; // wrong shape
})

Return partial state that matches your annotation:

// Fixed
.addNode("chat", async (state) => {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
})

If you use a custom reducer or annotation, make sure every node returns compatible keys.
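
As a concrete sketch, here is a custom annotation with a hypothetical notes field and its own reducer; updates only get merged and persisted when their keys match the schema:

import { Annotation, MessagesAnnotation, StateGraph } from "@langchain/langgraph";

const State = Annotation.Root({
  ...MessagesAnnotation.spec,
  // Custom field: the reducer appends incoming notes to the saved list
  notes: Annotation<string[]>({
    reducer: (existing, update) => existing.concat(update),
    default: () => [],
  }),
});

const graph = new StateGraph(State)
  .addNode("chat", async (state) => {
    const response = await model.invoke(state.messages);
    // Keys must match the annotation for the reducers to merge them
    return { messages: [response], notes: ["replied to user"] };
  })
  .addEdge("__start__", "chat")
  .addEdge("chat", "__end__");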

How to Debug It

  1. Confirm you compiled with persistence

    • Look for compile({ checkpointer }).
    • If it’s missing, persistence will never happen.
    • If you are using MemorySaver, verify it exists outside request scope.
  2. Log the exact thread_id on every call

    • Print it before each invoke.
    • If it changes between requests, you found the bug.
    • For HTTP apps, inspect whether your session cookie/user ID maps consistently.
  3. Check whether state actually comes back from the second invoke

    • Use await app.getState(config) to read back the saved snapshot (see the sketch after this list).
    • If retrieved state is empty after a successful first call, your saver/config path is broken.
    • If state exists but your node ignores it, the issue is in your reducer/node logic.
  4. Reduce to one node and one field

    • Strip out tools, branching, and extra nodes.
    • Persist only a single message array or counter.
    • If that works, reintroduce complexity until it breaks.
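
Steps 2 and 3 can be checked together in a few lines, assuming the app and config from the fixed example above:

// Step 2: log the thread identity you are actually sending
console.log("thread_id:", config.configurable.thread_id);

await app.invoke(
  { messages: [{ role: "user", content: "My name is Sam" }] },
  config
);

// Step 3: read back what the checkpointer saved for this thread
const snapshot = await app.getState(config);
console.log("saved messages:", snapshot.values.messages?.length ?? 0);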

Prevention

  • Create one shared checkpointer per process and compile once at startup.
  • Treat thread_id as part of your API contract; never generate it ad hoc inside nodes.
  • Keep node outputs aligned with your state schema so reducers can merge updates correctly.
  • In production, prefer a durable saver (like the Postgres sketch above) over MemorySaver if you need persistence across restarts.

The fastest way to fix “memory not persisting” in LangGraph TypeScript is to verify three things in order:

  • compile({ checkpointer })
  • stable configurable.thread_id
  • correct state shape coming out of nodes

If those are right and memory still resets, the bug is almost always in how your application creates sessions around the graph — not in LangGraph itself.

