LangChain Tutorial (TypeScript): persisting agent state for beginners
This tutorial shows you how to persist agent state in a TypeScript LangChain app so a conversation can survive across requests, restarts, and multiple turns. You need this when your agent must remember prior messages, tool outputs, or user context instead of starting from zero every time.
What You'll Need
- Node.js 18+
- A TypeScript project with ts-node or tsx
- langchain
- @langchain/openai
- @langchain/core
- @langchain/langgraph
- An OpenAI API key in OPENAI_API_KEY
- Basic familiarity with LangChain agents and chat models

Install the packages (note that @langchain/langgraph is required for the checkpointer and state graph used below):

```bash
npm install langchain @langchain/openai @langchain/core @langchain/langgraph
npm install -D typescript tsx @types/node
```
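If you are starting from an empty project, you will also want a tsconfig. The exact settings below are an assumption, not a requirement; tsx runs most modern configs, but `target: "ES2022"` matters here because the examples use `Array.prototype.at`:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true
  }
}
```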
Step-by-Step
1. Create a simple TypeScript project and set your environment variable.

We’ll keep this example small enough to run locally, but the same pattern works in production with Redis, Postgres, or any durable store.

```
# .env.example
OPENAI_API_KEY=your_api_key_here
```
2. Build a persistent checkpointer with an in-memory store first.

This is the easiest way to understand the mechanism: the agent saves its state after each turn, then reloads it when the same thread ID is used again.

```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import {
  Annotation,
  MessagesAnnotation,
  StateGraph,
} from "@langchain/langgraph";

// Spread the built-in messages channel into the state so new
// messages are appended to the existing history on each update.
const State = Annotation.Root({
  ...MessagesAnnotation.spec,
});

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const graph = new StateGraph(State)
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__");
```
3. Add a checkpointer and pass a stable thread ID at runtime.

The checkpointer is what persists state between calls. If you reuse the same thread_id, LangGraph restores the previous message history automatically.

```typescript
import { MemorySaver } from "@langchain/langgraph";

const app = graph.compile({
  checkpointer: new MemorySaver(),
});

const config = {
  configurable: {
    thread_id: "user-123",
  },
};

async function main() {
  const first = await app.invoke(
    { messages: [new HumanMessage("My name is Amina. Remember it.")] },
    config
  );
  console.log(first.messages.at(-1)?.content);
}

main();
```
4. Send a second message using the same thread ID.

This is the actual persistence test. The agent should now have access to the previous turn because the stored state was restored before generating the next response.

```typescript
import { HumanMessage } from "@langchain/core/messages";

async function continueConversation() {
  const second = await app.invoke(
    { messages: [new HumanMessage("What is my name?")] },
    config
  );
  console.log(second.messages.at(-1)?.content);
}

continueConversation();
```
5. Inspect the full message history to confirm state was saved correctly.

In real systems, this is useful for debugging memory bugs and verifying that each user session has isolated state.

```typescript
async function showHistory() {
  const history = await app.getState(config);
  console.log(
    history.values.messages.map((m) => ({
      type: m._getType(),
      content: m.content,
    }))
  );
}

showHistory();
```
Testing It
Run the file twice or keep both invocations in one script. The first call should establish context, and the second call should answer using that context instead of acting like a fresh chat.
If you see the model forget earlier messages, check two things first:

- You reused the same thread_id
- You compiled the graph with a checkpointer
For production-style validation, create two different thread IDs and confirm they do not share memory. That tells you your persistence boundary is correct.
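Conceptually, that persistence boundary behaves like a store keyed by thread ID: load prior state for a thread, append the new turn, save it back. The dependency-free sketch below illustrates the isolation property you are validating; ThreadStore is a hypothetical name for illustration, not a LangGraph API:

```typescript
// Minimal sketch of the persistence boundary: state is keyed by
// thread ID, so two threads never see each other's messages.
// ThreadStore is a hypothetical helper, not part of LangGraph.
type Message = { role: "human" | "ai"; content: string };

class ThreadStore {
  private threads = new Map<string, Message[]>();

  // Restore prior state for a thread, or start fresh.
  load(threadId: string): Message[] {
    return this.threads.get(threadId) ?? [];
  }

  // Persist the updated state after a turn.
  save(threadId: string, messages: Message[]): void {
    this.threads.set(threadId, messages);
  }
}

const store = new ThreadStore();

// Turn 1 on thread "user-123": load, append, save.
const prior = store.load("user-123");
store.save("user-123", [
  ...prior,
  { role: "human", content: "My name is Amina." },
]);

// A different thread starts empty; the original keeps its history.
console.log(store.load("user-456").length); // 0
console.log(store.load("user-123").length); // 1
```

Swapping MemorySaver for a durable backend changes where the map lives, not this load-append-save shape.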
Next Steps
- Swap MemorySaver for a durable backend like Redis or Postgres
- Add tool-calling nodes so your persisted agent can use external systems
- Store metadata alongside messages for tenant isolation and audit logging
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.