LangChain Tutorial (TypeScript): persisting agent state for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to persist LangChain agent state in TypeScript so an agent can stop, restart, and continue with the same conversation and memory. You need this when your agent handles real users, long-running tasks, or any workflow where losing context means losing money or delivering a bad user experience.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • A LangChain-compatible LLM API key:
    • OPENAI_API_KEY
  • Packages:
    • langchain
    • @langchain/openai
    • @langchain/core
  • A place to persist state:
    • This tutorial uses a simple JSON file for local development
    • In production, swap this for Postgres, Redis, DynamoDB, or MongoDB
  • A terminal and a TypeScript runtime setup:
    • tsx or ts-node

Step-by-Step

  1. Start with a project that can load and save agent state by session ID.
    The main idea is simple: every user gets a stable key, and you serialize the messages tied to that key after each turn.
npm init -y
npm i langchain @langchain/openai @langchain/core
npm i -D typescript tsx @types/node
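The files in the following steps use ESM-style imports (note the `.js` extensions on local imports) and top-level await, which assumes `"type": "module"` is set in package.json and a module-aware tsconfig. A minimal tsconfig sketch; these compiler options are a reasonable starting point, not the only valid choice:

```json
{
  "compilerOptions": {
    "target": "es2022",
    "module": "nodenext",
    "moduleResolution": "nodenext",
    "strict": true,
    "skipLibCheck": true
  }
}
```

With this in place, npx tsx index.ts runs the files directly without a separate compile step.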
  2. Create a small persistence layer for conversation history.
    For advanced systems, this is where you’d plug in durable storage. For now, a JSON file is enough to prove the pattern end-to-end.
// persistence.ts
import { promises as fs } from "node:fs";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const FILE = "./agent-state.json";

export type StoredMessage =
  | { role: "human"; content: string }
  | { role: "ai"; content: string };

export async function loadMessages(sessionId: string) {
  const raw = await fs.readFile(FILE, "utf8").catch(() => "{}");
  const db = JSON.parse(raw) as Record<string, StoredMessage[]>;
  return (db[sessionId] ?? []).map((m) =>
    m.role === "human" ? new HumanMessage(m.content) : new AIMessage(m.content)
  );
}
  3. Add the matching save function and keep the format explicit.
    Storing plain role/content pairs makes migrations easier later because you are not serializing framework internals.
// persistence.ts
import type { BaseMessage } from "@langchain/core/messages";

export async function saveMessages(
  sessionId: string,
  messages: BaseMessage[]
) {
  const raw = await fs.readFile(FILE, "utf8").catch(() => "{}");
  const db = JSON.parse(raw) as Record<string, StoredMessage[]>;
  db[sessionId] = messages.map((m) => ({
    // Only human and AI turns flow through this tutorial, so anything
    // that is not a human message is recorded as "ai".
    role: m._getType() === "human" ? "human" : "ai",
    content: String(m.content),
  }));
  await fs.writeFile(FILE, JSON.stringify(db, null, 2), "utf8");
}
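One caveat with this pattern: the load/save pair does a read-modify-write on a single JSON file, so a crash mid-write can leave the file corrupted. Below is a framework-free sketch of the same round trip that writes atomically via a temp-file rename; the file path, function names, and message shape here are illustrative, not part of the LangChain API:

```typescript
// Framework-free sketch of the persistence round trip with an
// atomic write. All names here are illustrative.
import { promises as fs } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

type StoredMessage = { role: "human" | "ai"; content: string };

const FILE = join(tmpdir(), "agent-state-demo.json");

async function save(sessionId: string, messages: StoredMessage[]) {
  const raw = await fs.readFile(FILE, "utf8").catch(() => "{}");
  const db = JSON.parse(raw) as Record<string, StoredMessage[]>;
  db[sessionId] = messages;
  // Write to a temp file and rename it into place, so a crash
  // mid-write cannot leave a half-written JSON document behind.
  const tmp = FILE + ".tmp";
  await fs.writeFile(tmp, JSON.stringify(db, null, 2), "utf8");
  await fs.rename(tmp, FILE);
}

async function load(sessionId: string): Promise<StoredMessage[]> {
  const raw = await fs.readFile(FILE, "utf8").catch(() => "{}");
  const db = JSON.parse(raw) as Record<string, StoredMessage[]>;
  return db[sessionId] ?? [];
}
```

The rename is atomic on a single filesystem, which is enough for a single-process local setup; it does not make concurrent writers safe, which is one more reason to move to a real database in production.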
  4. Build an agent that reads prior turns before generating a response.
    Here we use a chat model directly with message history. This is the cleanest way to persist state when you want deterministic control over what gets stored.
// agent.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { AIMessage, HumanMessage } from "@langchain/core/messages";
import { loadMessages, saveMessages } from "./persistence.js";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

export async function runTurn(sessionId: string, userInput: string) {
  const history = await loadMessages(sessionId);
  const messages = [...history, new HumanMessage(userInput)];
  const response = await model.invoke(messages);

  await saveMessages(sessionId, [...messages, response]);
  return response.content;
}
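One subtlety before persisting: response.content is not guaranteed to be a plain string; for some models and multimodal responses it can be an array of content parts. The String(...) coercion above is fine for plain-text chat, but a small normalizer is safer. The helper and its ContentPart shape below are an illustrative sketch, not the exact LangChain type:

```typescript
// Illustrative sketch: flatten a string-or-parts content value
// to plain text before persisting. The ContentPart shape is an
// assumption for demonstration, not the exact LangChain type.
type ContentPart = { type: string; text?: string };

export function contentToString(content: string | ContentPart[]): string {
  if (typeof content === "string") return content;
  return content
    // Keep only parts that actually carry text.
    .filter((part) => part.type === "text" && typeof part.text === "string")
    .map((part) => part.text ?? "")
    .join("");
}
```

Dropping non-text parts (images, tool payloads) is a deliberate choice here; if your agent produces them, you would persist them explicitly rather than flatten them away.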
  5. Wire it into a CLI so you can test restart behavior locally.
    The important part is that each run uses the same sessionId. That simulates a real app where the session comes from a cookie, auth subject ID, or workflow record.
// index.ts
import { runTurn } from "./agent.js";

const sessionId = process.argv[2];
const input = process.argv.slice(3).join(" ");

if (!sessionId || !input) {
  console.log('Usage: npx tsx index.ts <sessionId> "<message>"');
  process.exit(1);
}

const output = await runTurn(sessionId, input);
console.log(output);
  6. Run it twice with the same session ID and confirm the agent remembers earlier context.
    If you ask a follow-up question on the second run, it should answer using the prior turn because the history was loaded back from disk.
export OPENAI_API_KEY="your-key-here"

npx tsx index.ts user-123 "My name is Priya and I work in claims."
npx tsx index.ts user-123 "What is my name and what team do I work in?"

Testing It

Run the first command and check that agent-state.json gets created with one entry under user-123. Then run the second command with the same session ID and verify the model answers using both facts from the earlier message.

If you change the session ID to something else like user-456, it should behave like a fresh conversation. That is how you confirm isolation between users.

For production testing, kill and restart your process between turns. If state persists correctly across restarts, your storage boundary is in the right place.

Next Steps

  • Replace the JSON file with Redis or Postgres using the same loadMessages / saveMessages interface.
  • Add tool calling and persist tool results alongside chat messages.
  • Move from raw message history to LangGraph if you need checkpointing for multi-step agent workflows with branching state.
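To show that the loadMessages / saveMessages seam really is backend-agnostic, here is a sketch that extracts it into an interface with an in-memory implementation. The interface name and the Map-backed class are illustrative; a Redis or Postgres version would be another class with the same two methods:

```typescript
// Illustrative store interface extracted from the tutorial's
// load/save pair. Names are hypothetical, not LangChain APIs.
type StoredMessage = { role: "human" | "ai"; content: string };

interface MessageStore {
  load(sessionId: string): Promise<StoredMessage[]>;
  save(sessionId: string, messages: StoredMessage[]): Promise<void>;
}

class InMemoryStore implements MessageStore {
  private db = new Map<string, StoredMessage[]>();

  async load(sessionId: string): Promise<StoredMessage[]> {
    // Return a copy so callers cannot mutate stored state in place.
    return [...(this.db.get(sessionId) ?? [])];
  }

  async save(sessionId: string, messages: StoredMessage[]): Promise<void> {
    this.db.set(sessionId, [...messages]);
  }
}
```

Because both methods are async, swapping the Map for a network-backed database changes nothing about the callers; that is the property you want from a storage boundary.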


By Cyprian Aarons, AI Consultant at Topiax.
