LangGraph Tutorial (TypeScript): persisting agent state for beginners
This tutorial shows you how to persist LangGraph agent state in TypeScript using a checkpointer, so your agent can resume conversations after a process restart. You need this when you want thread memory that survives server restarts, deployments, or multiple requests hitting the same user session.
What You'll Need
- Node.js 18+
- TypeScript 5+
- `@langchain/langgraph`
- `@langchain/openai`
- `@langchain/core`
- An OpenAI API key in `OPENAI_API_KEY`
- A place to store checkpoints:
  - For local development: an in-memory checkpointer
  - For production: PostgreSQL or another durable store via a supported saver
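Assuming an npm-based project (adjust for pnpm or yarn), the packages above can be installed in one step:

```shell
npm install @langchain/langgraph @langchain/openai @langchain/core
```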
Step-by-Step
- Start with a graph that uses a real state shape and a single agent node. The key idea is that the graph state includes messages, and each run is tied to a `thread_id`.
```typescript
import { Annotation, StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { BaseMessage, HumanMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// State shape: a messages channel whose reducer appends each update
// to the existing list instead of overwriting it.
const State = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

const graph = new StateGraph(State)
  .addNode("agent", async (state) => {
    const response = await llm.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");
```
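The `reducer` is what makes the messages channel accumulate: LangGraph folds each node's partial return value into the existing state with it. Here is that merge step sketched in plain TypeScript, outside LangGraph, using strings in place of message objects:

```typescript
// How a concat reducer folds each update into channel state.
// Standalone sketch of the merge step; LangGraph applies this
// per annotated channel every time a node returns an update.
const reducer = (left: string[], right: string[]) => left.concat(right);

let messages: string[] = []; // default: () => []
messages = reducer(messages, ["My name is Ada."]); // incoming user turn
messages = reducer(messages, ["Nice to meet you, Ada."]); // node returns { messages: [response] }

console.log(messages.length); // 2: updates accumulate instead of overwriting
```

Without the reducer, each node's return value would replace the channel wholesale and the history would be lost.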
- Add a checkpointer. For beginners, `MemorySaver` is the simplest way to see persistence behavior without setting up a database.
```typescript
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();
const app = graph.compile({ checkpointer });
```
- Run the graph with a stable `thread_id`. That ID is what tells LangGraph which saved state to load on the next call.
```typescript
async function main() {
  const config = {
    configurable: {
      thread_id: "user-123",
    },
  };

  const first = await app.invoke(
    { messages: [new HumanMessage("My name is Ada. Remember it.")] },
    config
  );
  console.log("First run:", first.messages.at(-1)?.content);
}

main().catch(console.error);
```
- Invoke the same thread again with no explicit memory payload. Because the state was checkpointed, LangGraph reloads the prior messages before running the next step.
```typescript
async function continueConversation() {
  const config = {
    configurable: {
      thread_id: "user-123",
    },
  };

  const second = await app.invoke(
    { messages: [new HumanMessage("What is my name?")] },
    config
  );
  console.log("Second run:", second.messages.at(-1)?.content);
}

continueConversation().catch(console.error);
```
- Put both calls together so you can see persistence end to end. In production you would usually create one app instance at startup and reuse it for every request.
```typescript
import { Annotation, StateGraph, MemorySaver } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { BaseMessage, HumanMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const State = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

const app = new StateGraph(State)
  .addNode("agent", async (state) => ({
    messages: [await llm.invoke(state.messages)],
  }))
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__")
  .compile({ checkpointer: new MemorySaver() });

async function run() {
  const config = { configurable: { thread_id: "user-123" } };

  await app.invoke(
    { messages: [new HumanMessage("My name is Ada. Remember it.")] },
    config
  );

  const result = await app.invoke(
    { messages: [new HumanMessage("What is my name?")] },
    config
  );
  console.log(result.messages.at(-1)?.content);
}

run().catch(console.error);
```
Testing It
Run the combined script once. On the second `app.invoke` call, the model should answer using context from the first call instead of acting like it has never seen the user before.

If you want to inspect the saved state, reuse the same config object and keep invoking steps on that thread: the checkpointed message history should keep growing. The important thing to verify is that changing `thread_id` gives you isolated conversations, while reusing it restores prior state.
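That isolation contract can be stated as a standalone sketch (not LangGraph code, just the observable behavior your tests should confirm): the same `thread_id` accumulates history, a different one starts clean.

```typescript
// Observable contract of thread-scoped persistence:
// per-thread accumulation, cross-thread isolation.
// Toy sketch, not LangGraph internals.
const threads = new Map<string, string[]>();

function invokeThread(threadId: string, message: string): string[] {
  const history = threads.get(threadId) ?? []; // load prior state, or start fresh
  const updated = [...history, message];
  threads.set(threadId, updated); // checkpoint under the same thread_id
  return updated;
}

invokeThread("user-123", "My name is Ada.");
invokeThread("user-123", "What is my name?"); // same thread: sees the first message
invokeThread("user-456", "Hello"); // different thread: sees nothing prior

console.log(threads.get("user-123")?.length); // 2
console.log(threads.get("user-456")?.length); // 1
```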
Be clear about what `MemorySaver` guarantees: state survives across invocations within a single running process, but not across process restarts. If you restart your Node process between calls, the second call starts from a blank thread; for state that survives restarts and deployments you need a durable saver.
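To see why a durable store changes that, here is a toy file-backed version of the same idea using plain Node `fs` (an illustration only, not a real LangGraph saver): because checkpoints live on disk, a fresh "process", simulated here by reloading from the file with nothing held in memory, still finds the thread.

```typescript
import { existsSync, mkdtempSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Toy durable checkpointer: one JSON file, keyed by thread_id.
// Illustrates durability only; production would use a Postgres-backed saver.
const file = join(mkdtempSync(join(tmpdir(), "ckpt-")), "state.json");

function save(threadId: string, messages: string[]): void {
  const all = existsSync(file)
    ? (JSON.parse(readFileSync(file, "utf8")) as Record<string, string[]>)
    : {};
  all[threadId] = messages;
  writeFileSync(file, JSON.stringify(all));
}

function load(threadId: string): string[] {
  if (!existsSync(file)) return [];
  const all = JSON.parse(readFileSync(file, "utf8")) as Record<string, string[]>;
  return all[threadId] ?? [];
}

// "First process": write a checkpoint, then hold nothing in memory.
save("user-123", ["My name is Ada."]);

// "Second process": reload purely from disk.
const restored = load("user-123");
console.log(restored.length); // 1: state survived the "restart"
```

With `MemorySaver`, the equivalent of this file is a plain in-process map, which is exactly why a restart wipes it.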
Next Steps
- Replace `MemorySaver` with a durable saver for Postgres-backed persistence.
- Add more state fields like `userProfile`, `toolResults`, or `riskFlags`.
- Learn how to stream updates with `app.stream()` so you can build responsive chat UIs.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.