LangChain Tutorial (TypeScript): persisting agent state for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to persist LangChain agent state in TypeScript so a conversation can stop, restart, and continue without losing context. You need this when agents handle multi-turn workflows like claims intake, KYC follow-ups, or human handoff where memory must survive process restarts.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or tsx
  • langchain
  • @langchain/openai
  • @langchain/core
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with LangChain agents and chat models

Install the packages:

npm install langchain @langchain/openai @langchain/core
npm install -D typescript tsx @types/node

Set your API key:

export OPENAI_API_KEY="your-key"

Step-by-Step

  1. Start with a minimal agent setup that can read and write messages. We’ll use LangChain’s message history abstraction so state is attached to a session ID instead of living only in process memory.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Direct, stateless call: each invocation sees only the messages you pass in.
// We'll replace this with a history-aware chain in step 3.
async function invokeModel(messages: (HumanMessage | AIMessage)[]) {
  return model.invoke(messages);
}
  2. Add a persistent store for chat history. For production, this is where you swap in Redis, Postgres, DynamoDB, or another durable backend; for the tutorial, an in-memory map keeps the code runnable end-to-end. (@langchain/core also ships a ready-made InMemoryChatMessageHistory, but writing the class yourself makes the storage interface explicit.)
import { BaseListChatMessageHistory } from "@langchain/core/chat_history";
import { BaseMessage } from "@langchain/core/messages";

// BaseListChatMessageHistory derives the convenience methods
// (addUserMessage, addAIChatMessage) from addMessage, so we only implement
// the three storage primitives. lc_namespace is required by the
// Serializable base class.
class InMemoryHistory extends BaseListChatMessageHistory {
  lc_namespace = ["langchain", "stores", "message", "in_memory"];

  private messages: BaseMessage[] = [];

  async getMessages() {
    return this.messages;
  }

  async addMessage(message: BaseMessage) {
    this.messages.push(message);
  }

  async clear() {
    this.messages = [];
  }
}

const store = new Map<string, InMemoryHistory>();

function getSessionHistory(sessionId: string) {
  if (!store.has(sessionId)) {
    store.set(sessionId, new InMemoryHistory());
  }
  return store.get(sessionId)!;
}
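The session-keyed lookup above can be sketched without any LangChain types to make the pattern clear. Here getSession is a hypothetical, dependency-free stand-in for getSessionHistory, storing plain strings instead of BaseMessage objects:

```typescript
// Hypothetical, LangChain-free sketch of the session-keyed store pattern:
// one message array per session ID, created lazily on first use.
const sessions = new Map<string, string[]>();

function getSession(sessionId: string): string[] {
  let history = sessions.get(sessionId);
  if (!history) {
    history = [];
    sessions.set(sessionId, history);
  }
  return history;
}

// Two lookups with the same ID share one array; a different ID gets its own.
getSession("customer-123").push("My name is Jordan.");
```

The important property is identity: repeated lookups for the same session ID must return the same underlying history, or turns will silently land in different conversations.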
  3. Wrap the model with message history. This is the key piece: every call gets the prior turns for that session, and every response is appended back into the same session history.
const chain = new RunnableWithMessageHistory({
  runnable: model,
  getMessageHistory: async (sessionId: string) => getSessionHistory(sessionId),
});

async function ask(sessionId: string, input: string) {
  const result = await chain.invoke(
    [new HumanMessage(input)],
    { configurable: { sessionId } }
  );

  console.log("Assistant:", result.content);
}
  4. Run two turns against the same session ID and confirm the agent remembers earlier context. If persistence is wired correctly, the second response should reference the first turn without you manually passing prior messages.
async function main() {
  const sessionId = "customer-123";

  await ask(sessionId, "My name is Jordan and I work at Acme Bank.");
  await ask(sessionId, "What is my name and where do I work?");
}

main().catch(console.error);
  5. If you want to persist across process restarts, replace the in-memory history with a real datastore-backed implementation. The interface stays the same; only getMessages, addMessage, and clear need to talk to your database or cache.
// Example shape for a production-backed history class:
// - load messages by sessionId from Redis/Postgres on getMessages()
// - append each new BaseMessage on addMessage()
// - delete rows/keys on clear()
//
// Keep the RunnableWithMessageHistory wrapper unchanged.
// Only swap getSessionHistory() to return your durable implementation.
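To make the shape above concrete, here is a hypothetical file-backed history that survives restarts using nothing but Node's fs module. To keep the sketch dependency-free it stores plain { role, content } records; a real implementation would serialize and deserialize LangChain BaseMessage objects instead, and would more likely target Redis or Postgres than local disk:

```typescript
import { promises as fs } from "node:fs";

// Hypothetical sketch: one JSON file per session, read on every
// getMessages() and rewritten on every addMessage(). Plain records stand
// in for LangChain's BaseMessage to keep the example self-contained.
type StoredMessage = { role: "human" | "ai"; content: string };

class FileBackedHistory {
  constructor(private path: string) {}

  async getMessages(): Promise<StoredMessage[]> {
    try {
      return JSON.parse(await fs.readFile(this.path, "utf8"));
    } catch {
      return []; // no file yet: a brand-new session
    }
  }

  async addMessage(message: StoredMessage): Promise<void> {
    const messages = await this.getMessages();
    messages.push(message);
    await fs.writeFile(this.path, JSON.stringify(messages, null, 2));
  }

  async clear(): Promise<void> {
    await fs.rm(this.path, { force: true });
  }
}
```

With a class like this, getSessionHistory() would return a new instance keyed by a per-session file path, and the RunnableWithMessageHistory wrapper would stay exactly as written in step 3.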

Testing It

Run the script with npx tsx your-file.ts. The first prompt should establish identity details, and the second prompt should answer using those details because both turns share the same sessionId.

To verify persistence behavior properly, restart the process and run the same session again against a durable backend. If you still see prior context after restart, your storage layer is doing its job.

A common failure mode is accidentally generating a new session ID per request. If that happens, every call looks like a brand-new conversation and state will appear to “disappear.”
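One way to guard against that failure mode is to derive the session ID deterministically from identifiers that do not change between requests, instead of generating a random UUID per call. This is a hypothetical helper, not part of LangChain; sessionIdFor and its customerId/workflow parameters are illustration only:

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: a stable session ID derived from a customer ID and
// workflow name. The same inputs always produce the same ID, so every
// request for that customer's workflow lands in the same conversation.
function sessionIdFor(customerId: string, workflow: string): string {
  return createHash("sha256")
    .update(`${customerId}:${workflow}`)
    .digest("hex")
    .slice(0, 16);
}
```

Deriving rather than generating the ID also means you never need to store a session-ID mapping: any service that knows the customer and workflow can reconstruct it.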

Next Steps

  • Swap the in-memory store for Redis or Postgres so sessions survive deploys and pod restarts.
  • Add tool calling to the same persisted agent so it can remember intermediate workflow state between tool invocations.
  • Store structured metadata alongside messages for audit trails, especially if you’re building regulated banking or insurance workflows.
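For the last point, a hypothetical record shape shows what "structured metadata alongside messages" might look like. The field names here (AuditedMessage, reviewedBy, toolCalls) are assumptions for illustration, not a LangChain schema:

```typescript
// Hypothetical record shape: each turn stored with audit metadata
// alongside the raw message content.
type AuditedMessage = {
  sessionId: string;
  role: "human" | "ai";
  content: string;
  createdAt: string;          // ISO timestamp, for audit ordering
  metadata: {
    userId?: string;          // who initiated the turn
    toolCalls?: string[];     // tools invoked while producing the reply
    reviewedBy?: string;      // human reviewer, for regulated workflows
  };
};

// Build a record with an empty metadata envelope to be filled in later.
function record(
  sessionId: string,
  role: "human" | "ai",
  content: string
): AuditedMessage {
  return {
    sessionId,
    role,
    content,
    createdAt: new Date().toISOString(),
    metadata: {},
  };
}
```

Persisting records like this in the same datastore as the chat history keeps the audit trail and the conversation in one place, queryable by sessionId.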

By Cyprian Aarons, AI Consultant at Topiax.