How to Fix 'deployment crash in production' in LangGraph (TypeScript)
If your LangGraph app crashes only after deployment, you’re usually dealing with a runtime mismatch, not a graph logic problem. In TypeScript projects this often shows up as Error: deployment crash in production, or a downstream failure like Cannot read properties of undefined, Node not found, or Graph compilation failed once the app is bundled and started in a different environment.
This usually happens when code that worked locally depends on dev-only behavior, missing environment variables, unsupported Node versions, or state shape mismatches that only surface under production startup.
The Most Common Cause
The #1 cause is building the graph with runtime-only values during module import, then deploying to an environment where those values are missing or different.
This is common in Next.js, serverless functions, and containers where the module is imported before env vars are ready, or where the graph is compiled once at startup with invalid config.
Broken vs fixed pattern
| Broken pattern | Fixed pattern |
|---|---|
| Graph compiles at import time using missing env/config | Graph factory builds after config validation |
| checkpointer, model, or API key is undefined | Explicit runtime validation before graph creation |
```typescript
// ❌ Broken: compiles immediately on import
import { StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY, // undefined in prod startup
  model: "gpt-4o-mini",
});

const graph = new StateGraph({
  channels: {
    messages: { value: (x: any, y: any) => x.concat(y), default: () => [] },
  },
})
  .addNode("agent", async (state) => {
    const res = await model.invoke(state.messages);
    return { messages: [res] };
  })
  .addEdge("__start__", "agent")
  .addEdge("agent", "__end__")
  .compile();

export default graph;
```
```typescript
// ✅ Fixed: validate config and build lazily
import { StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

export function createGraph() {
  const model = new ChatOpenAI({
    apiKey: requireEnv("OPENAI_API_KEY"),
    model: "gpt-4o-mini",
  });

  return new StateGraph({
    channels: {
      messages: { value: (x: any, y: any) => x.concat(y), default: () => [] },
    },
  })
    .addNode("agent", async (state) => {
      const res = await model.invoke(state.messages);
      return { messages: [res] };
    })
    .addEdge("__start__", "agent")
    .addEdge("agent", "__end__")
    .compile();
}
```
The important part is not whether you use OpenAI specifically. The issue is compiling the graph before you know the runtime configuration is valid.
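With the factory in place, you typically still want to compile the graph only once per process, not on every request. A minimal sketch of a lazy singleton around the factory (here `createGraph` is stubbed with a plain object so the example is self-contained; in the real app it is the factory above):

```typescript
// A compiled graph is expensive to build, so cache it after the first
// successful build. Config is still validated lazily (at first use),
// not at import time.
type CompiledGraph = { invoke: (input: unknown) => Promise<unknown> };

// Stub standing in for the real factory, which validates env vars
// and calls .compile().
function createGraph(): CompiledGraph {
  return { invoke: async (input) => input };
}

let cached: CompiledGraph | null = null;

export function getGraph(): CompiledGraph {
  // Build on first request; every later call reuses the same instance.
  if (!cached) cached = createGraph();
  return cached;
}
```

If the first build throws (for example, a missing env var), the error surfaces on the first request with a clear message instead of crashing the process at import time.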
Other Possible Causes
1. Invalid state schema or reducer shape
LangGraph is strict about state updates. If your node returns a field that doesn’t match the declared channel shape, you’ll get failures like InvalidUpdateError or weird undefined behavior during execution.
```typescript
// ❌ Broken
const graph = new StateGraph({
  channels: {
    messages: { value: (x: any, y: any) => x.concat(y), default: () => [] },
  },
}).addNode("agent", async () => {
  return { messagez: ["oops"] }; // typo: the channel is named "messages"
});
```

```typescript
// ✅ Fixed
.addNode("agent", async () => {
  return { messages: ["ok"] };
});
```
2. Importing Node-only modules into an Edge runtime
If your deployment target is Edge but your graph uses fs, native dependencies, or Node APIs, production will crash even though local dev works.
```typescript
// ❌ Broken in Edge/runtime-isolated environments
import fs from "node:fs";
const data = fs.readFileSync("./prompt.txt", "utf8");
```

```typescript
// ✅ Fixed (Next.js): force the Node.js runtime for this route
export const runtime = "nodejs";
// or move file access to a Node-only service layer
```
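If you cannot force the Node.js runtime, another option is to guard the Node-only API behind a runtime check and ship a bundled fallback. A hypothetical sketch (the file path and fallback text are placeholders):

```typescript
// Load a prompt from disk when Node APIs are available, otherwise
// fall back to a string bundled with the code. The dynamic import
// keeps node:fs out of the Edge bundle's static graph.
async function loadPrompt(): Promise<string> {
  const FALLBACK = "You are a helpful assistant.";
  if (typeof process !== "undefined" && process.versions?.node) {
    const fs = await import("node:fs/promises");
    try {
      return await fs.readFile("./prompt.txt", "utf8");
    } catch {
      return FALLBACK; // file missing in the deployed bundle
    }
  }
  return FALLBACK; // Edge runtime: no filesystem access
}
```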
3. Version mismatch between LangGraph packages
A common production issue is mixing incompatible versions of @langchain/langgraph, @langchain/core, and provider packages. That can surface as a TypeError during .compile() or when invoking nodes. For example, a pairing like the one below may produce peer dependency warnings at install time; treat those warnings as deploy blockers rather than noise:

```json
{
  "dependencies": {
    "@langchain/langgraph": "^0.2.0",
    "@langchain/core": "^0.1.0"
  }
}
```
Keep the ecosystem aligned and lock versions in package-lock.json / pnpm-lock.yaml.
4. Missing checkpointer or store configuration
If your graph expects persistence but the deployed environment doesn’t provide it, you can see failures around thread state or resume behavior.
```typescript
// ❌ Broken if your workflow depends on persistence
const app = graph.compile();
```

```typescript
// ✅ Fixed when using memory/state persistence
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();
const app = graph.compile({ checkpointer });
```
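Two caveats. MemorySaver keeps checkpoints in process memory, so in serverless deployments each instance starts empty; real persistence across instances needs a durable checkpointer. And once a checkpointer is attached, every invocation needs a thread_id in its config (app.invoke(input, { configurable: { thread_id: "abc" } })). The per-thread model can be sketched with a plain Map standing in for the checkpointer:

```typescript
// Illustrative stand-in for a checkpointer: state is saved and
// resumed per thread_id, so two threads never share messages.
type State = { messages: string[] };

const threads = new Map<string, State>();

async function invokeWithThread(
  input: string,
  config: { configurable: { thread_id: string } }
): Promise<State> {
  const id = config.configurable.thread_id;
  const prev = threads.get(id) ?? { messages: [] }; // resume or start fresh
  const next = { messages: [...prev.messages, input] };
  threads.set(id, next); // persist, as a checkpointer would after each step
  return next;
}
```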
How to Debug It
- Reproduce with production settings locally
  - Run the same Node version, the same env vars, and the same build command.
  - If it fails only after bundling, that points to import-time initialization or runtime incompatibility.
- Log graph creation separately from invocation
  - Put logs before and after `.compile()`.
  - If it crashes before the first request hits the app, the problem is in startup/configuration.
- Validate every node output
  - Temporarily log node returns: `addNode("agent", async (state) => { const out = await runAgent(state); console.log("agent output:", out); return out; })`
  - Look for wrong keys like `messagez`, missing arrays, or non-serializable values.
- Check deployment runtime constraints
  - Confirm whether your platform uses Node.js or Edge.
  - Verify package versions and whether any dependency pulls in Node-only APIs.
Prevention
- •Build graphs through a factory function, not at module top-level.
- •Validate env vars before creating models, checkpointers, and stores.
- •Keep state schemas explicit and test every node output against them.
- •Pin LangGraph-related package versions together and deploy with the same Node version you use locally.
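The env-var rule can be enforced with one fail-fast check at boot, before any graph, model, or checkpointer is constructed. A minimal sketch (the variable names are placeholders for whatever your app needs):

```typescript
// Check every required env var at startup and report all missing
// names together, so a misconfigured deploy fails loudly and once.
function validateEnv(
  required: string[],
  env: Record<string, string | undefined> = process.env
): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// Call once at boot, e.g.:
// validateEnv(["OPENAI_API_KEY", "DATABASE_URL"]);
```

Reporting all missing names at once beats throwing on the first one: you fix the whole deploy config in a single pass instead of redeploying per variable.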
If you’re seeing deployment crash in production in LangGraph TypeScript apps, start with startup-time config and module initialization first. In most cases the graph itself is fine; what breaks is how and when it gets constructed.
By Cyprian Aarons, AI Consultant at Topiax.