How to Fix 'state not updating in production' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

When LangChain state works locally but stops updating in production, the issue is usually not “LangChain is broken.” It means your stateful object is being recreated, mutated in the wrong place, or lost between requests. In TypeScript apps, this usually shows up when using RunnableWithMessageHistory, AgentExecutor, or custom memory/state stores behind serverless or multi-instance deployments.

The symptom is simple: your chain runs, but the next turn does not see the previous state. You’ll often see behavior like MessagesPlaceholder staying empty, chat_history never growing, or a graph/agent reporting stale state even though your code “updates” it.

The Most Common Cause

The #1 cause is creating state inside the request handler instead of keeping it stable across turns.

This happens a lot with BufferMemory, RunnableWithMessageHistory, and custom session stores. Locally, a single process hides the bug. In production, every request may hit a different instance, or your handler may reinitialize memory on each invocation.

Broken vs fixed pattern

  • State store created inside the request handler → state store created once and keyed by session
  • New memory on every call → shared history factory/store
  • Works only in local dev → works across requests and instances
// ❌ Broken: memory is recreated on every request
import { ChatOpenAI } from "@langchain/openai";
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

export async function POST(req: Request) {
  const { input } = await req.json();

  const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
  const memory = new BufferMemory(); // resets every request

  const chain = new ConversationChain({
    llm,
    memory,
  });

  const result = await chain.invoke({ input });
  return Response.json(result);
}
// ✅ Fixed: history is keyed by sessionId and stored outside the handler
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

// The prompt must contain a placeholder matching historyMessagesKey,
// or past messages are never injected into the model call
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

// Replace this with Redis/Postgres in production
const histories = new Map<string, ChatMessageHistory>();

function getHistory(sessionId: string) {
  if (!histories.has(sessionId)) {
    histories.set(sessionId, new ChatMessageHistory());
  }
  return histories.get(sessionId)!;
}

const chain = new RunnableWithMessageHistory({
  runnable: prompt.pipe(llm),
  getMessageHistory: (sessionId) => getHistory(sessionId),
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

export async function POST(req: Request) {
  const { input, sessionId } = await req.json();

  const result = await chain.invoke(
    { input },
    { configurable: { sessionId } }
  );

  return Response.json(result);
}

If you’re using RunnableWithMessageHistory, the critical detail is this:

  • pass a stable sessionId
  • persist history outside the request lifecycle
  • do not rely on process memory in production unless you have one long-lived instance

Other Possible Causes

1) You’re mutating state after awaiting an async boundary

If you update shared state after an await, another request can interleave and overwrite it.

// ❌ Broken
let counter = 0;

async function handler() {
  const current = counter;
  await someAsyncCall();
  counter = current + 1;
}
// ✅ Fixed
import Redis from "ioredis";
const redis = new Redis(process.env.REDIS_URL!);

async function handler(sessionId: string) {
  await redis.incr(`counter:${sessionId}`);
}

2) Your deployment is stateless and you used in-memory storage

This is common on Vercel, serverless functions, containers with multiple replicas, or any autoscaled service.

// ❌ Broken for production scale
const store = new Map<string, unknown>();

Use a real backend:

// ✅ Better: durable store
// Redis / Postgres / DynamoDB / Upstash / Cloudflare KV depending on latency needs

If you need message history, use a persistent implementation (for example, a Redis- or database-backed message history class) instead of the in-memory ChatMessageHistory.

3) Session IDs are missing or unstable

If the session key changes per request, LangChain will behave like every turn is brand new.

// ❌ Broken
await chain.invoke(
  { input },
  { configurable: { sessionId: crypto.randomUUID() } }
);
// ✅ Fixed
await chain.invoke(
  { input },
  { configurable: { sessionId: user.id } }
);

If you need anonymous users, issue a cookie-backed ID and reuse it across requests.

4) You’re reading the wrong key from chain output

Sometimes the state updates correctly, but your code logs the wrong field and makes it look broken. This shows up with agents returning structured output instead of plain text.

// ❌ Broken assumption
const result = await agentExecutor.invoke({ input });
console.log(result.output); // may be undefined depending on config
// ✅ Check actual shape
const result = await agentExecutor.invoke({ input });
console.log(result);

For AgentExecutor, output shape depends on your tools and parser configuration. Don’t guess; inspect the returned object first.

How to Debug It

  1. Print the session key on every request

    • Confirm it stays constant for the same user.
    • If it changes, your state will never persist.
  2. Log before and after state reads

    • For message history:
      console.log("before", await getHistory(sessionId).getMessages());
      
    • Then log again after invoke.
    • If “after” is empty, persistence is failing.
  3. Check whether you’re running multiple instances

    • In Kubernetes, ECS, Vercel, or Lambda-like environments, process memory is not shared.
    • If one request updates instance A and the next hits instance B, state appears lost.
  4. Inspect the exact LangChain object

    • If using RunnableWithMessageHistory, verify:
      • inputMessagesKey
      • historyMessagesKey
      • configurable.sessionId
    • If using agents/chains with memory, confirm you are not recreating them per request.

Prevention

  • Keep LangChain chains/runnables as singleton-style module objects when possible.
  • Store conversation state in Redis or another durable backend; treat in-memory maps as dev-only.
  • Use a stable identity for every conversation:
    • authenticated user ID
    • signed cookie session ID
    • explicit conversation ID from your database

If you want one rule to remember: production LangChain state must be externalized. If it lives only inside one Node.js process, it will eventually disappear.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
