How to Fix 'deployment crash' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

deployment-crash · langchain · typescript

When LangChain TypeScript code crashes during deployment, it usually means your app is fine locally but fails in the runtime environment you actually shipped to. The common pattern is a provider mismatch, missing environment variable, or an API shape that works in dev but breaks under serverless or container startup.

In practice, this shows up as a hard failure during build, cold start, or the first request. The error often looks like `Error: deployment crash`, `TypeError: Cannot read properties of undefined`, or a LangChain-specific failure from classes like `ChatOpenAI`, `OpenAIEmbeddings`, or `RunnableSequence`.

The Most Common Cause

The #1 cause is initializing a LangChain client at module scope with missing runtime config. In TypeScript apps deployed to Vercel, AWS Lambda, Cloud Run, or Docker, that means the import itself can crash before your handler runs.

Here’s the broken pattern, followed by the fix:

Broken:

```ts
// lib/llm.ts
import { ChatOpenAI } from "@langchain/openai";

export const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});
```

Fixed:

```ts
// lib/llm.ts
import { ChatOpenAI } from "@langchain/openai";

export function createLLM() {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is missing");
  }

  return new ChatOpenAI({
    apiKey,
    model: "gpt-4o-mini",
  });
}
```

Why this crashes:

- `process.env.OPENAI_API_KEY` is undefined in the deployed runtime
- `ChatOpenAI` gets constructed immediately on import
- your app dies before any request-level error handling can catch it

If you want the handler to fail cleanly, instantiate inside the request path:

```ts
import { createLLM } from "./lib/llm";

export async function POST() {
  const llm = createLLM();
  const res = await llm.invoke("Hello");
  return Response.json({ res });
}
```
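
In serverless runtimes you also may not want to reconstruct the client on every invocation. A lazy module-level cache keeps the fail-fast behavior without the per-request overhead. Here is a minimal sketch (the `getLLM` name and caching approach are my own, not a LangChain API):

```ts
// lib/llm.ts (variant): lazy singleton. The constructor still runs only
// when a request needs it, so a missing key fails inside the handler,
// but subsequent requests on a warm container reuse the same client.
import { ChatOpenAI } from "@langchain/openai";

let cached: ChatOpenAI | undefined;

export function getLLM(): ChatOpenAI {
  if (!cached) {
    const apiKey = process.env.OPENAI_API_KEY;
    if (!apiKey) {
      throw new Error("OPENAI_API_KEY is missing");
    }
    cached = new ChatOpenAI({ apiKey, model: "gpt-4o-mini" });
  }
  return cached;
}
```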

Other Possible Causes

1) Wrong package import for your LangChain version

LangChain TS split integrations into separate packages. If you’re still importing old paths, deployment can fail with module resolution errors.

```ts
// Broken
import { ChatOpenAI } from "langchain/chat_models/openai";

// Fixed
import { ChatOpenAI } from "@langchain/openai";
```

Typical runtime errors:

- `Cannot find module 'langchain/chat_models/openai'`
- `ERR_MODULE_NOT_FOUND`

2) ESM/CommonJS mismatch in TypeScript build output

LangChain packages are ESM-first. If your tsconfig.json and Node runtime disagree, deployment may crash during import.

A CommonJS-leaning config like this is a common source of the crash:

```json
{
  "compilerOptions": {
    "module": "commonjs",
    "moduleResolution": "node"
  }
}
```

Better for modern LangChain apps:

```json
{
  "compilerOptions": {
    "module": "esnext",
    "moduleResolution": "bundler",
    "target": "es2022"
  }
}
```

Also make sure your package.json opts into ESM:

```json
{
  "type": "module"
}
```
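
One way to catch module-format problems before deploying is a tiny smoke test that dynamically imports the package and is run against your built output. This is a sketch under my own assumptions (the file name and approach are illustrative, not a LangChain tool):

```ts
// smoke-test.ts (hypothetical): build, then run `node dist/smoke-test.js`.
// A dynamic import fails loudly here, in CI, instead of at cold start
// in production.
async function main(): Promise<void> {
  const { ChatOpenAI } = await import("@langchain/openai");
  console.log("resolved @langchain/openai:", typeof ChatOpenAI);
}

main().catch((err) => {
  console.error("import failed:", err);
  process.exit(1);
});
```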

3) Missing edge/runtime-compatible dependencies

If you deploy to an edge runtime, some Node APIs are unavailable. A chain using filesystem access, crypto assumptions, or native Node clients can crash.

```ts
// Broken on edge runtimes if dependency expects Node APIs
import { ChatOpenAI } from "@langchain/openai";
import fs from "node:fs";

const prompt = fs.readFileSync("./prompt.txt", "utf8");
```

Fix by moving file access out of edge code or switching to a Node runtime:

```ts
export const runtime = "nodejs";
```
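
If the file genuinely has to be read at runtime, keep the read lazy and inside the handler. A minimal sketch, assuming a Next.js-style route handler on the Node runtime (the path and handler shape are illustrative):

```ts
// Runs on the Node runtime, so node:fs is available. Reading inside the
// handler means a missing file surfaces as a request error, not an
// import-time crash.
export const runtime = "nodejs";

export async function POST() {
  const { readFile } = await import("node:fs/promises");
  const prompt = await readFile("./prompt.txt", "utf8");
  return Response.json({ promptLength: prompt.length });
}
```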

4) Invalid model/provider config

LangChain will throw if the model name or provider setup is wrong. This often looks like a provider error after deployment because local env differs from prod.

```ts
const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-5-nonexistent",
});
```

You may see errors like:

- `Error: Model not found`
- `401 Unauthorized`
- `400 Invalid model`
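
Because these errors only surface when the provider is actually called, it can help to wrap the request path so a bad model name or key returns a clean response instead of an unhandled crash. A minimal sketch, reusing the `createLLM` helper from above:

```ts
import { createLLM } from "./lib/llm";

export async function POST() {
  try {
    const llm = createLLM();
    const res = await llm.invoke("Hello");
    return Response.json({ res });
  } catch (err) {
    // Covers both missing config (thrown by createLLM) and provider
    // rejections such as 401/400 from the model API.
    console.error("LLM call failed:", err);
    return new Response("Upstream model error", { status: 502 });
  }
}
```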

How to Debug It

1. Check whether the crash happens on import or on request.
   - Add a log before and after the LangChain constructor (see the probe sketch after this list).
   - If logs stop at module load, you have a top-level initialization problem.

2. Verify environment variables in the deployed runtime.
   - Log only presence, not secrets:

     ```ts
     console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
     ```

   - If it prints false, fix your deployment secret injection first.

3. Inspect the exact stack trace.
   - Look for class names like `ChatOpenAI`, `OpenAIEmbeddings`, `RunnableSequence`, or `PromptTemplate`.
   - If the stack points to an import line, it’s usually packaging or module format.
   - If it points inside `.invoke()` or `.stream()`, it’s usually config or an API failure.

4. Run production-like locally.
   - Build and start exactly like prod:

     ```sh
     npm run build
     node dist/index.js
     ```

   - Don’t trust dev mode if your issue appears only after deployment.
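
For step 1, the probe can be as blunt as bracketing the constructor with logs. A minimal sketch (the log wording is arbitrary):

```ts
// If "after ChatOpenAI constructor" never prints in your deployment
// logs, the crash is happening at module load, before any handler runs.
import { ChatOpenAI } from "@langchain/openai";

console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
console.log("before ChatOpenAI constructor");

export const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

console.log("after ChatOpenAI constructor");
```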

Prevention

- Instantiate LangChain clients inside functions, not at module scope.
- Validate required env vars at startup with a hard fail (see the sketch after this list):

  ```ts
  if (!process.env.OPENAI_API_KEY) throw new Error("OPENAI_API_KEY missing");
  ```

- Keep your TS config aligned with ESM-first packages:
  - `"module": "esnext"`
  - `"moduleResolution": "bundler"`
- Pin compatible versions of LangChain packages together:
  - `langchain`
  - `@langchain/openai`
  - any embedding/vector store integrations
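
To centralize that hard fail, one option is a small validation module that runs before anything imports LangChain. A sketch (the `REQUIRED` list and `validateEnv` name are mine, not a standard API):

```ts
// env.ts (hypothetical): call validateEnv() at the top of your entry
// point so a missing secret fails fast with every missing name listed.
const REQUIRED = ["OPENAI_API_KEY"] as const;

export function validateEnv(): void {
  const missing = REQUIRED.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}
```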

If you’re seeing a generic “deployment crash,” don’t assume it’s a single bug. Start with env vars and import-time initialization; those account for most of the LangChain TypeScript deployment failures I’ve seen in production systems.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
