How to Fix 'deployment crash in production' in LlamaIndex (TypeScript)
What this error usually means
A “deployment crash in production” in a LlamaIndex TypeScript app usually means your process starts fine locally, then dies once it hits a runtime-only dependency, bad environment config, or an async failure that never gets handled. In practice, this shows up during server startup, container boot, or the first request that triggers index creation.
The stack trace is often not very helpful on its own. You’ll usually see a generic failure message, followed by a failed import, a missing API key, a `TypeError: Cannot read properties of undefined`, or an unhandled rejection from `OpenAIEmbedding`, `OpenAI`, or `VectorStoreIndex`.
The Most Common Cause
The #1 cause is doing LlamaIndex initialization at module load time instead of inside a request-safe startup path. In TypeScript deployments, especially on Vercel, AWS Lambda, Docker, or Node servers with health checks, top-level code can crash the whole deployment if env vars are missing or network calls happen during import.
Here’s the broken pattern:
```typescript
// broken.ts
import { Document, VectorStoreIndex } from "llamaindex";

const docs = [
  new Document({ text: "Quarterly claims summary..." }),
];

// This runs as soon as the module is imported
const index = await VectorStoreIndex.fromDocuments(docs);

export async function handler() {
  const result = await index.asQueryEngine().query({
    query: "What changed this quarter?",
  });
  return result.toString();
}
```
And here’s the fixed pattern:
```typescript
// fixed.ts
import { Document, VectorStoreIndex } from "llamaindex";

let index: VectorStoreIndex | null = null;

async function getIndex() {
  if (index) return index;
  const docs = [
    new Document({ text: "Quarterly claims summary..." }),
  ];
  index = await VectorStoreIndex.fromDocuments(docs);
  return index;
}

export async function handler() {
  const idx = await getIndex();
  const result = await idx.asQueryEngine().query({
    query: "What changed this quarter?",
  });
  return result.toString();
}
```
| Broken pattern | Fixed pattern |
|---|---|
| `await VectorStoreIndex.fromDocuments(...)` at top level | Lazy initialization inside a function |
| Crashes during import/startup | Fails only when the handler runs |
| Hard to recover in serverless | Cache and retry safely |
If you’re seeing `UnhandledPromiseRejectionWarning` or a deploy log that stops at `VectorStoreIndex.fromDocuments`, this is usually the culprit.
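One refinement on the fixed pattern: cache the in-flight promise rather than the resolved index, so concurrent requests during a cold start share a single build and a failed build isn’t cached forever. A minimal sketch (the promise-caching detail is an addition of mine, not part of the snippet above):

```typescript
import { Document, VectorStoreIndex } from "llamaindex";

const docs = [new Document({ text: "Quarterly claims summary..." })];

// Cache the promise, not the value: concurrent requests during a cold
// start await the same build instead of kicking off duplicates.
let indexPromise: Promise<VectorStoreIndex> | null = null;

export function getIndex(): Promise<VectorStoreIndex> {
  if (!indexPromise) {
    indexPromise = VectorStoreIndex.fromDocuments(docs).catch((err) => {
      indexPromise = null; // drop the failed build so the next request retries
      throw err;
    });
  }
  return indexPromise;
}
```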
Other Possible Causes
1) Missing environment variables
LlamaIndex TypeScript integrations commonly depend on `OPENAI_API_KEY`. If it’s absent in production but present locally, you’ll get errors like:
- `Error: OpenAI API key not found`
- `Invalid value for parameter apiKey`
- `Cannot read properties of undefined (reading 'apiKey')`
```typescript
// broken
const apiKey = process.env.OPENAI_API_KEY!;

// fixed
const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error("OPENAI_API_KEY is required");
}
```
If you use provider-specific settings, check all of them:
```bash
OPENAI_API_KEY=...
LLAMAINDEX_STORAGE_DIR=/tmp/index
NODE_ENV=production
```
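A small startup check makes missing config fail loudly with a clear message instead of a confusing crash later. This sketch assumes the variable names from the list above; adjust the list to match your providers:

```typescript
// Fail fast if required configuration is missing.
const REQUIRED_ENV = ["OPENAI_API_KEY", "LLAMAINDEX_STORAGE_DIR"] as const;

export function assertEnv(): void {
  const missing = REQUIRED_ENV.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}
```

Call `assertEnv()` from your server’s startup path, or at the top of the handler in serverless, so the error points at the real problem.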
2) Importing Node-only code into an edge runtime
LlamaIndex TypeScript often expects Node APIs. If you deploy to an edge runtime that blocks filesystem access or certain network modules, you can get failures like:
- `ReferenceError: process is not defined`
- `fs is not available`
- runtime crashes during bundling
```typescript
// broken in edge runtime
import { VectorStoreIndex } from "llamaindex";
import fs from "node:fs";

// fixed: run in Node runtime only
export const runtime = "nodejs";
```
If you’re on the Next.js App Router, force the Node runtime:

```typescript
export const runtime = "nodejs";
```
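Put together, a Node-pinned App Router route might look like the sketch below. The route path and the `@/lib/index-store` module are assumptions for illustration; `getIndex` stands in for the lazy initializer from the fixed pattern above:

```typescript
// app/api/query/route.ts (hypothetical path)
import { getIndex } from "@/lib/index-store"; // hypothetical module wrapping the lazy init

export const runtime = "nodejs"; // opt out of the Edge runtime

export async function GET(): Promise<Response> {
  const index = await getIndex();
  const result = await index.asQueryEngine().query({
    query: "What changed this quarter?",
  });
  return new Response(result.toString());
}
```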
3) Version mismatch between LlamaIndex packages
Mixing incompatible package versions can produce weird runtime errors around class names like `Document`, `Settings`, `OpenAIEmbedding`, or `VectorStoreIndex`.
```json
{
  "dependencies": {
    "llamaindex": "^0.4.0",
    "@llamaindex/openai": "^0.2.0"
  }
}
```
Fix by aligning versions and reinstalling cleanly:
```bash
rm -rf node_modules package-lock.json
npm install
npm ls llamaindex @llamaindex/openai
```
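If the dependency tree keeps resolving mixed versions, pinning exact versions (no caret) stops a regenerated lockfile from drifting. The numbers below are illustrative only; use whatever pair `npm ls` shows working together in your app:

```json
{
  "dependencies": {
    "llamaindex": "0.4.0",
    "@llamaindex/openai": "0.2.0"
  }
}
```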
4) Unhandled async error during ingestion
A failed embedding call or malformed document can crash the process if you don’t catch it.
```typescript
// broken
await VectorStoreIndex.fromDocuments(docs);

// fixed
try {
  await VectorStoreIndex.fromDocuments(docs);
} catch (err) {
  console.error("Failed to build index", err);
  throw err;
}
```
This matters when you see logs like:
- `Error: Request timed out`
- `429 Too Many Requests`
- `TypeError: Cannot read properties of null (reading 'text')`
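Alongside per-call try/catch, a process-level safety net makes sure the real error reaches your deploy logs before the process dies. These are standard Node.js hooks, not LlamaIndex-specific:

```typescript
// Last-resort logging: surface the underlying error in deploy logs
// instead of an anonymous crash.
process.on("unhandledRejection", (reason) => {
  console.error("Unhandled rejection:", reason);
});

process.on("uncaughtException", (err) => {
  console.error("Uncaught exception:", err);
  process.exit(1); // state may be corrupt; exit and let the platform restart
});
```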
How to Debug It
- Check whether the crash happens on import or on request.
  - Add logs before and after each LlamaIndex call (see the `step` sketch after this list).
  - If it dies before your handler runs, you likely have top-level initialization.
- Print every required env var at startup.
  - Don’t log secrets.
  - Log presence only:

    ```typescript
    console.log({
      hasOpenAiKey: !!process.env.OPENAI_API_KEY,
      nodeEnv: process.env.NODE_ENV,
    });
    ```

- Wrap ingestion and querying in try/catch.
  - Catch the real error instead of the generic deployment failure.
  - Look for root messages like:
    - `OpenAI API key not found`
    - `429 Too Many Requests`
    - `Cannot read properties of undefined`
- Verify runtime and package compatibility.
  - Confirm you are not deploying Node-targeted LlamaIndex code into an edge runtime.
  - Run `npm ls llamaindex @llamaindex/openai openai`.
  - If versions are mismatched, pin them and reinstall.
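For the before/after logging in the first step, a tiny wrapper keeps it consistent. This `step` helper is a hypothetical convenience, not a LlamaIndex API:

```typescript
// Hypothetical helper: brackets each call with logs so the deploy log
// shows exactly which step died.
async function step<T>(label: string, fn: () => Promise<T>): Promise<T> {
  console.log(`start: ${label}`);
  try {
    const result = await fn();
    console.log(`done: ${label}`);
    return result;
  } catch (err) {
    console.error(`failed: ${label}`, err);
    throw err;
  }
}

// Usage:
// const index = await step("build index", () => VectorStoreIndex.fromDocuments(docs));
```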
Prevention
- Initialize indexes lazily inside request handlers or startup hooks, not at module scope.
- Fail fast on missing config with explicit checks for API keys and storage paths.
- Pin compatible package versions and test the exact production runtime locally with the same Node version and environment variables (one way to do that is sketched below).
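For that last point, running your production build inside the same Node image you deploy is a cheap smoke test. A sketch, assuming a `node:20` image and a `dist/server.js` entry point (both placeholders for your actual setup):

```bash
# Hypothetical smoke test: same Node version and env vars as production
docker run --rm -v "$PWD":/app -w /app \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -e NODE_ENV=production \
  node:20 sh -c "npm ci && npm run build && node dist/server.js"
```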
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.