LangGraph Tutorial (TypeScript): persisting agent state for intermediate developers
This tutorial shows you how to persist LangGraph agent state in TypeScript so a conversation can stop, restart, and continue without losing context. You need this when your agent runs across multiple requests, serverless invocations, or long-lived workflows where memory has to survive process restarts.
What You'll Need
- Node.js 18+ and npm
- A TypeScript project with a `tsconfig.json`
- These packages: `@langchain/langgraph`, `@langchain/openai`, `@langchain/core`, `typescript`, and `tsx` (for local execution)
- An OpenAI API key in `OPENAI_API_KEY`
- Basic familiarity with LangGraph nodes, edges, and state schemas
Step-by-Step
- Start with a graph state that can hold messages and a stable thread identifier. The important part here is not the model call yet; it is making sure the graph knows how to merge message history across turns.
```typescript
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const GraphState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

type GraphStateType = typeof GraphState.State;
```
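To see what the reducer actually does, here is a standalone sketch of the same merge logic with no LangGraph dependency: every object a node returns has its `messages` array concatenated onto the existing history, starting from the annotation's empty default.

```typescript
// Standalone sketch of the reducer semantics above: each partial state
// a node returns is merged by concatenating onto the existing history.
type Msg = { role: string; content: string };

const reducer = (left: Msg[], right: Msg[]) => left.concat(right);

let history: Msg[] = []; // the annotation's default: () => []
history = reducer(history, [{ role: "user", content: "Hi" }]);
history = reducer(history, [{ role: "assistant", content: "Hello!" }]);
// history now contains both turns, in order
```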
- Build a node that calls the model and appends the response to state. This is the standard LangGraph pattern: take existing messages, pass them to the model, then return only the new assistant message.
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function assistantNode(state: GraphStateType) {
  // invoke() already returns an AIMessage, so append it directly rather
  // than re-wrapping its content (which is not always a plain string).
  const response = await llm.invoke(state.messages);
  return { messages: [response] };
}
```
- Add a checkpointer so state survives between invocations. Without one, every `invoke` starts from a blank state; with `MemorySaver`, history accumulates per `thread_id` for the life of the process, and you resume a conversation by reusing the same `thread_id`.
```typescript
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();

const graph = new StateGraph(GraphState)
  .addNode("assistant", assistantNode)
  .addEdge(START, "assistant")
  .addEdge("assistant", END)
  .compile({ checkpointer });
```
- Invoke the graph with a thread ID in `configurable`. That ID is what ties one conversation together across multiple calls, so treat it like a session key in production.
```typescript
async function run() {
  const config = {
    configurable: {
      thread_id: "customer-123",
    },
  };

  const first = await graph.invoke(
    { messages: [{ role: "user", content: "My policy number is P-20491." }] },
    config
  );

  console.log("First turn:", first.messages.at(-1)?.content);
}
```
- Call it again with the same thread ID and inspect persisted state before sending another user message. This is where you confirm that the prior exchange is still there and being merged into the next model call.
```typescript
async function continueConversation() {
  const config = {
    configurable: {
      thread_id: "customer-123",
    },
  };

  const second = await graph.invoke(
    { messages: [{ role: "user", content: "What did I just tell you?" }] },
    config
  );

  console.log("Second turn:", second.messages.at(-1)?.content);

  const saved = await graph.getState(config);
  console.log("Saved message count:", saved.values.messages.length);
}
```
- Put it together in one executable file and run both turns back-to-back. The output should show that the second call has access to everything stored under the same thread.
```typescript
async function main() {
  await run();
  await continueConversation();
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```
Testing It
Run both turns in a single process and confirm that the second call, using the same `thread_id`, still sees the full conversation history. Note that `MemorySaver` holds checkpoints in process memory, so restarting the file starts from scratch; to survive a true restart you need a durable checkpointer, as described under Next Steps.
To verify persistence more directly, log `await graph.getState(config)` after each turn and compare message counts. You should see the list grow as you add more user inputs.
For a real test, change the user input on the second call to something dependent on earlier context, like an account number or claim reference. If the model answers correctly without being re-prompted with that data manually, your checkpointing is working.
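To build intuition for why these checks work, the checkpointer can be modeled as a map from `thread_id` to saved state. This toy sketch (plain TypeScript, no LangGraph) shows why reusing an ID resumes a conversation while a new ID starts empty:

```typescript
// Toy model of per-thread checkpointing: state is stored under a
// thread_id, so the same ID resumes history and a new ID starts fresh.
type Msg = { role: string; content: string };
const store = new Map<string, Msg[]>();

function invoke(threadId: string, input: Msg): Msg[] {
  const history = store.get(threadId) ?? []; // load the checkpoint
  const next = history.concat(input);        // merge the new input
  store.set(threadId, next);                 // save the checkpoint
  return next;
}

invoke("customer-123", { role: "user", content: "My policy is P-20491." });
const resumed = invoke("customer-123", { role: "user", content: "What did I tell you?" });
const fresh = invoke("customer-999", { role: "user", content: "Hello" });
// resumed carries both turns; fresh has only its own message
```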
Next Steps
- Swap `MemorySaver` for a durable checkpointer backed by Postgres or another database.
- Add structured state fields for things like `customerId`, `policyId`, or `handoffRequired`.
- Learn how to stream LangGraph updates so you can persist intermediate steps as they happen.
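As a starting point for the durable-checkpointer swap, here is a wiring sketch assuming the optional `@langchain/langgraph-checkpoint-sqlite` package is installed; the graph definition stays the same, only the checkpointer changes:

```typescript
// Sketch: replacing MemorySaver with a SQLite-backed checkpointer so
// checkpoints live in a file and survive process restarts.
// Assumes: npm install @langchain/langgraph-checkpoint-sqlite
import { SqliteSaver } from "@langchain/langgraph-checkpoint-sqlite";

const checkpointer = SqliteSaver.fromConnString("checkpoints.db");

// Compile the same graph as before, passing the new checkpointer:
// const graph = new StateGraph(GraphState)
//   .addNode("assistant", assistantNode)
//   .addEdge(START, "assistant")
//   .addEdge("assistant", END)
//   .compile({ checkpointer });
```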
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.