How to Fix 'deployment crash in production' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
deployment-crash-in-production · langchain · typescript

A “deployment crash in production” in a LangChain TypeScript app usually means the app starts fine locally, then dies during startup or the first request in a deployed environment. In practice, it’s almost always one of three things: missing runtime env vars, an incompatible model/client setup, or code that works in dev but breaks under serverless/container constraints.

If you’re seeing this after shipping a LangChain app to Vercel, AWS Lambda, Docker, or Kubernetes, treat it as an environment/runtime mismatch first. The stack trace usually points at ChatOpenAI, OpenAIEmbeddings, RunnableSequence, or a failed network call during module initialization.

The Most Common Cause

The #1 cause is initializing LangChain clients at module scope with missing or invalid environment variables. In TypeScript projects, this often looks fine locally because .env is loaded by your dev tooling, but production never gets OPENAI_API_KEY, LANGSMITH_API_KEY, or the model name you assumed.

Typical runtime errors include:

  • Error: OpenAI API key not found
  • Error: Missing required environment variable OPENAI_API_KEY
  • TypeError: Cannot read properties of undefined (reading 'invoke')
  • Error: 401 Incorrect API key provided

Here’s the broken pattern:

// broken.ts
import { ChatOpenAI } from "@langchain/openai";

// Constructed at import time: if OPENAI_API_KEY is unset in production,
// this throws during cold start, before a single request is handled.
const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

export async function handler() {
  const res = await llm.invoke("Summarize the policy");
  return res.content;
}

And here’s the fixed pattern:

// fixed.ts
import { ChatOpenAI } from "@langchain/openai";

// Construct the client lazily, inside the request path, after validating config.
function getLlm() {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is missing");
  }

  return new ChatOpenAI({
    apiKey,
    model: "gpt-4o-mini",
  });
}

export async function handler() {
  const llm = getLlm();
  const res = await llm.invoke("Summarize the policy");
  return res.content;
}

The difference is simple:

  • Don’t construct external clients at import time unless you’re sure env vars are present.
  • Validate config before creating ChatOpenAI, OpenAIEmbeddings, or any vector store client.
  • Fail fast with a clear error instead of letting the deployment crash later.

This matters more in serverless runtimes because imported modules are executed during cold start. If your top-level code throws, your whole deployment can fail before handling a single request.
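
If constructing a new client on every request bothers you, one option is to cache it lazily, so it is still built only after env vars are known to be present. A minimal sketch, adapting getLlm from above:

import { ChatOpenAI } from "@langchain/openai";

let cached: ChatOpenAI | undefined;

// Built on first use, after config validation, then reused across warm invocations.
export function getLlm(): ChatOpenAI {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is missing");
  }

  cached ??= new ChatOpenAI({ apiKey, model: "gpt-4o-mini" });
  return cached;
}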

Other Possible Causes

1) Wrong package version mix

LangChain TypeScript packages move fast. A common production crash happens when @langchain/core, @langchain/openai, and imports from the legacy langchain package are out of sync.

Broken:

import { OpenAI } from "langchain/llms/openai";

Fixed:

import { ChatOpenAI } from "@langchain/openai";

Also make sure your package versions align:

{
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/openai": "^0.3.0"
  }
}

If you see errors like:

  • Cannot find module 'langchain/llms/openai'
  • TypeError: ... is not a constructor

you’re probably mixing old and new package APIs.
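
A quick way to confirm is npm ls @langchain/core, which prints every resolved copy of that package in your dependency tree; seeing two different versions there usually explains the “not a constructor” errors.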

2) ESM/CommonJS mismatch

Production builds often use a different module system than local dev. If your app crashes with import-related errors, check whether Node expects ESM but your build emits CJS, or vice versa.

Broken config:

{
  "type": "module"
}

But code uses CommonJS:

const { ChatOpenAI } = require("@langchain/openai");

Fixed:

import { ChatOpenAI } from "@langchain/openai";

Or if you must stay on CommonJS, keep the whole toolchain consistent. Mixed module syntax causes crashes like:

  • ReferenceError: require is not defined in ES module scope
  • SyntaxError: Cannot use import statement outside a module
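
If you are targeting ESM with tsc, a consistent setup might look like this sketch (adjust for your bundler):

// tsconfig.json — paired with "type": "module" in package.json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "outDir": "dist"
  }
}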

3) Network access blocked in production

LangChain itself may be fine; the deployment just can’t reach OpenAI or your provider. This shows up as timeouts or fetch failures.

Example symptom:

Error: fetch failed

Or:

ConnectTimeoutError: Request timed out

Check outbound network rules in:

  • VPC/security groups
  • Kubernetes network policies
  • Serverless egress restrictions
  • Corporate proxy requirements

If you use a custom base URL:

new ChatOpenAI({
  apiKey,
  model: "gpt-4o-mini",
  configuration: {
    baseURL: "https://your-proxy.example.com/v1",
  },
});

verify that endpoint is reachable from the deployed runtime.
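
One way to test reachability independent of LangChain is a bare fetch from inside the deployed runtime. A minimal sketch (Node 18+ ships fetch; the base URL is whatever you actually configured):

// probe.ts — run from the deployed environment, not your laptop.
// Assumes an ESM module, since it uses top-level await.
const base = "https://api.openai.com/v1"; // or your proxy base URL

const res = await fetch(`${base}/models`, {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});

// 200 means the network path and key are fine; a hang or "fetch failed"
// points at egress rules, DNS, or a proxy rather than at LangChain.
console.log(res.status, res.statusText);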

4) Top-level async work during startup

If you do embedding generation, vector index loading, or remote config fetches at startup, your container can fail health checks before it becomes ready.

Broken:

// Top-level await: runs at import time, so a slow or failing load
// can crash the process before health checks ever pass.
const docs = await loader.load();
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);

Fixed:

// Called from a request handler or background job, not at import time.
// loader and embeddings are assumed to be constructed elsewhere.
export async function buildIndex() {
  const docs = await loader.load();
  return MemoryVectorStore.fromDocuments(docs, embeddings);
}

Keep startup cheap. Initialize heavy LangChain objects inside request handlers or background jobs.
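
If several requests need the same index, you can cache the in-flight promise so the build still stays out of startup but only runs once. A sketch reusing buildIndex from above (the import path assumes the main langchain package):

import type { MemoryVectorStore } from "langchain/vectorstores/memory";

let indexPromise: Promise<MemoryVectorStore> | undefined;

// The first caller kicks off the build; later callers await the same promise.
export function getIndex(): Promise<MemoryVectorStore> {
  indexPromise ??= buildIndex();
  return indexPromise;
}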

How to Debug It

  1. Read the first real stack frame

    • Ignore the wrapper noise.
    • Find whether the crash starts in config loading, client construction, network calls, or serialization.
    • If you see ChatOpenAI, OpenAIEmbeddings, or RunnableSequence near the top, that narrows it fast.
  2. Log validated config at boot

    • Print only safe metadata.
    • Confirm presence of keys and model names.
// Safe boot-time log: presence flags only, never the key itself.
console.log({
  hasOpenAIApiKey: !!process.env.OPENAI_API_KEY,
  nodeEnv: process.env.NODE_ENV,
});
  3. Reproduce in the same runtime

    • Run locally with production-like env vars:
      • Docker image
      • Node version used in prod
      • Same build command as CI/CD
    • A lot of “works on my machine” LangChain bugs are just runtime differences.
  4. Strip the app down to one call

    • Remove tools, retrievers, memory, and chains.
    • Test only this:
const llm = new ChatOpenAI({ apiKey: process.env.OPENAI_API_KEY!, model: "gpt-4o-mini" });
console.log(await llm.invoke("ping"));

If that fails, the issue is infrastructure/config. If that passes, re-add components until it breaks.

Prevention

  • Validate env vars at startup with a schema library like Zod instead of trusting process.env (see the sketch after this list).
  • Keep LangChain package versions pinned and upgrade them together.
  • Avoid top-level network calls and heavy initialization in serverless entrypoints.
  • Test your exact production build artifact locally before shipping it.
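
For the first point, a minimal sketch with Zod (the file name and variable set are illustrative):

// env.ts — validate once at boot; import env instead of reading process.env
import { z } from "zod";

const EnvSchema = z.object({
  OPENAI_API_KEY: z.string().min(1, "OPENAI_API_KEY is required"),
  NODE_ENV: z.enum(["development", "production", "test"]).default("production"),
});

// parse() throws a readable error listing every missing or invalid variable.
export const env = EnvSchema.parse(process.env);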

If you want fewer deployment crashes with LangChain TypeScript, treat every external dependency as untrusted until validated. Most failures are not “LangChain bugs”; they’re startup-time config mistakes that only show up once code leaves your laptop.

