How to Fix 'memory not persisting during development' in LangGraph (TypeScript)
What this error actually means
If your LangGraph app “forgets” conversation state during development, the graph is usually running without a durable checkpointer, or it’s getting a new thread_id on every request. In practice, you’ll see behavior like this: the first message is stored, the next request starts from scratch, and your agent acts like it has no memory.
The most common symptom is not a crash. It’s silent state loss, often while using createReactAgent, StateGraph, or any custom graph that expects checkpointed state across turns.
The Most Common Cause
The #1 cause is creating a graph without a checkpointer, or creating one but not passing a stable thread_id in configurable.
LangGraph memory persistence depends on both:
- a compiled graph with a checkpointer
- a consistent thread identifier for the same conversation
Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| No checkpointer passed to compile() | Pass a checkpointer to compile() |
| New random thread id per request | Reuse the same thread_id for the conversation |
| Expecting memory from plain in-memory variables | Persist state through LangGraph checkpointing |
```typescript
// ❌ Broken: no checkpointer, so state won't persist between requests
import { StateGraph, START, END, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const State = Annotation.Root({
  messages: Annotation<any[]>({
    reducer: (x, y) => x.concat(y),
    default: () => [],
  }),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

const graph = new StateGraph(State)
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge(START, "agent")
  .addEdge("agent", END)
  .compile(); // ❌ no checkpointer

// thread_id alone does nothing here: with no checkpointer, nothing is saved
await graph.invoke(
  { messages: [{ role: "user", content: "My name is Sam" }] },
  { configurable: { thread_id: "dev-thread" } }
);
```
```typescript
// ✅ Fixed: compile with a checkpointer and reuse the same thread_id
import { StateGraph, START, END, Annotation, MemorySaver } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const State = Annotation.Root({
  messages: Annotation<any[]>({
    reducer: (x, y) => x.concat(y),
    default: () => [],
  }),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const checkpointer = new MemorySaver();

const graph = new StateGraph(State)
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge(START, "agent")
  .addEdge("agent", END)
  .compile({ checkpointer });

await graph.invoke(
  { messages: [{ role: "user", content: "My name is Sam" }] },
  { configurable: { thread_id: "dev-thread" } }
);

// Same thread_id, so this turn sees the earlier message
await graph.invoke(
  { messages: [{ role: "user", content: "What is my name?" }] },
  { configurable: { thread_id: "dev-thread" } }
);
```
If you’re using an agent helper instead of raw graphs, the same rule applies. For example, with createReactAgent, you still need checkpointing support and a stable thread config.
```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { MemorySaver } from "@langchain/langgraph";

// `llm` and `tools` are assumed to be defined elsewhere in your app
const agent = createReactAgent({
  llm,
  tools,
  checkpointSaver: new MemorySaver(),
});

await agent.invoke(
  { messages: [{ role: "user", content: "Remember that I work in claims." }] },
  { configurable: { thread_id: "claims-dev-1" } }
);

// Same thread_id on the next turn, so the agent can recall the earlier message
await agent.invoke(
  { messages: [{ role: "user", content: "What area do I work in?" }] },
  { configurable: { thread_id: "claims-dev-1" } }
);
```
Other Possible Causes
1) You are generating a new thread_id every time
If your code does something like crypto.randomUUID() on each request, LangGraph sees each call as a brand-new conversation.
```typescript
// ❌ broken
const config = {
  configurable: {
    thread_id: crypto.randomUUID(),
  },
};
```

```typescript
// ✅ fixed
const config = {
  configurable: {
    thread_id: req.headers["x-session-id"] as string,
  },
};
```
Use something stable:

- a session id
- a user id + workspace id combination
- a cookie-backed conversation id
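One way to make that concrete is a tiny helper (hypothetical, names are illustrative) that derives a stable id from request data instead of generating a fresh one per call:

```typescript
// Hypothetical helper: prefer an explicit session id, otherwise fall back to a
// deterministic user + workspace key, so repeat requests map to the same thread.
function resolveThreadId(
  sessionId: string | undefined,
  userId: string,
  workspaceId: string
): string {
  return sessionId ?? `${userId}:${workspaceId}`;
}

// Same inputs always yield the same thread_id:
// resolveThreadId(undefined, "user-1", "ws-1") === "user-1:ws-1"
```

Because the fallback is deterministic, a dropped session header degrades gracefully instead of silently starting a new conversation.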
2) Your reducer is overwriting state instead of merging it
If your state field doesn’t merge correctly, it can look like memory disappeared even though checkpointing works.
```typescript
// ❌ broken
messages: Annotation<any[]>({
  reducer: (_prev, next) => next,
  default: () => [],
});
```

```typescript
// ✅ fixed
messages: Annotation<any[]>({
  reducer: (prev, next) => prev.concat(next),
  default: () => [],
});
```
This matters when you expect prior messages to remain available across turns.
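You can see the difference with plain arrays, no LangGraph required. This sketch applies the two reducer shapes above to two consecutive turns:

```typescript
// Standalone sketch of the two reducer shapes above, applied to plain arrays.
const overwrite = (_prev: string[], next: string[]) => next;          // drops history
const append = (prev: string[], next: string[]) => prev.concat(next); // keeps history

const turn1 = ["user: My name is Sam"];
const turn2 = ["user: What is my name?"];

const lost = overwrite(turn1, turn2); // only turn2 remains
const kept = append(turn1, turn2);    // both turns remain
```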
3) You are using an ephemeral store during development and restarting the process
MemorySaver only persists for the life of the Node process. If your dev server restarts on file change, memory disappears.
```typescript
// ❌ fine for tests, bad for persistence across restarts
const checkpointer = new MemorySaver();
```
For real persistence during development:
```typescript
// ✅ use a durable backend if you need persistence across restarts
// Example shape depends on your chosen storage implementation.
const checkpointer = /* Postgres / Redis / SQLite-backed saver */;
```
If you’re running Next.js or nodemon and saving files often, this is usually what’s happening.
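As one concrete option, assuming you have installed the optional `@langchain/langgraph-checkpoint-sqlite` package, a SQLite-backed saver writes checkpoints to a file on disk, so a dev-server restart no longer wipes conversation history:

```typescript
// Sketch: SQLite-backed checkpointer so checkpoints survive process restarts.
// Assumes @langchain/langgraph-checkpoint-sqlite is installed.
import { SqliteSaver } from "@langchain/langgraph-checkpoint-sqlite";

// State lives in a file instead of process memory, so nodemon /
// Next.js hot-reload restarts no longer lose conversation history.
const checkpointer = SqliteSaver.fromConnString("./dev-checkpoints.db");

// Then compile as before:
// const graph = builder.compile({ checkpointer });
```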
4) You are invoking the graph without passing config on every turn
The first turn may work if you set config once in one place. The second turn fails if another code path omits it.
```typescript
// ❌ broken
await graph.invoke(input); // no configurable.thread_id here
```

```typescript
// ✅ fixed
await graph.invoke(input, {
  configurable: {
    thread_id,
  },
});
```
This shows up in API handlers where one route includes config and another route doesn’t.
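To keep every code path consistent, you can route all invocations through one small wrapper. This is a sketch; `Invokable` is an illustrative type, not a LangGraph export:

```typescript
// Sketch: one wrapper every route calls, so no code path can forget the config.
// `Invokable` is an illustrative type describing anything with invoke(input, config).
type Invokable<I, O> = {
  invoke(input: I, config: { configurable: { thread_id: string } }): Promise<O>;
};

function invokeWithThread<I, O>(
  graph: Invokable<I, O>,
  input: I,
  threadId: string
): Promise<O> {
  return graph.invoke(input, { configurable: { thread_id: threadId } });
}
```

Since the wrapper's signature requires a thread id, forgetting it becomes a compile-time error instead of silent state loss.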
How to Debug It
- Check whether your graph was compiled with a checkpointer
  - Look for `.compile({ checkpointer })`
  - If you only see `.compile()`, that's your first bug
- Log the exact config being sent
  - Print `configurable.thread_id` before every invoke
  - Confirm it stays identical across multiple requests in the same conversation
- Inspect whether your dev server is restarting
  - If you use `nodemod`-style watchers (`nodemon`, `tsx watch`) or hot reload in Next.js, `MemorySaver` resets on restart
  - If memory disappears after a file save, this is likely the issue
- Verify your state reducer behavior
  - If message history exists briefly but later turns act empty, inspect your reducer logic
  - A bad reducer can wipe prior values even with correct checkpointing
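While debugging, a small guard (hypothetical helper, names are illustrative) can turn the silent missing-config case into a loud error:

```typescript
// Hypothetical guard: fail fast when an invoke is about to run without a thread_id.
type ThreadConfig = { configurable?: { thread_id?: string } };

function assertThreadId(config: ThreadConfig | undefined): string {
  const id = config?.configurable?.thread_id;
  if (!id) {
    throw new Error(
      "Missing configurable.thread_id: this call will not hit checkpointed state"
    );
  }
  return id;
}

// Call this right before graph.invoke(input, config) while debugging.
```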
Prevention
- Always compile graphs with an explicit checkpointer when state must survive multiple turns.
- Treat `thread_id` like session identity data:
  - stable per conversation
  - never randomly regenerated per request
- Use durable storage early if you need persistence across restarts:
  - Redis-backed checkpointing for local dev parity
  - Postgres-backed checkpointing for production-style workflows

If you see "memory not persisting during development" in LangGraph TypeScript, start with these two checks:

- Is there a real checkpointer?
- Is the same `thread_id` being reused?
Nine times out of ten, that’s the fix.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.