How to Fix 'async event loop error in production' in LlamaIndex (TypeScript)
If you’re seeing async event loop error in production with LlamaIndex TypeScript, you’re usually hitting a runtime mismatch between how your app starts async work and how the underlying Node event loop is being managed. In practice, this shows up when LlamaIndex calls an async API from a place that expects sync execution, or when your production server wraps requests in a way that blocks or double-manages the loop.
The symptom is usually one of these:
- `Error: This event loop is already running`
- `Error: Cannot await outside of an async function`
- `UnhandledPromiseRejectionWarning`
- `LlamaIndexError: Failed to execute query engine`
The Most Common Cause
The #1 cause is calling LlamaIndex async methods from non-async code, then trying to “force” the result with .then() chains, sync wrappers, or nested startup hooks.
This happens a lot in:
- Express route handlers
- Next.js API routes
- background jobs
- server startup scripts
Broken pattern vs fixed pattern
| Broken | Fixed |
|---|---|
| Calls async LlamaIndex methods without a proper await flow | Uses an async handler and awaits the query |
| Mixes sync return paths with promises | Returns the promise chain cleanly |
| Often triggers `UnhandledPromiseRejectionWarning` or loop-related errors | Keeps the event loop flow explicit |
```typescript
// BROKEN
import express from "express";
import { Document, VectorStoreIndex } from "llamaindex";

const app = express();

app.get("/search", (req, res) => {
  const query = req.query.q as string;
  // Common mistake: fire-and-forget promise handling in a sync handler
  VectorStoreIndex.fromDocuments([
    new Document({ text: "Hello world" }),
  ]).then((index) => {
    index.asQueryEngine().query({ query }).then((response) => {
      res.json({ answer: response.toString() });
    });
  });
  // Rejections are swallowed here, so a failed query leaves the request hanging
});
```
```typescript
// FIXED
import express from "express";
import { Document, VectorStoreIndex } from "llamaindex";

const app = express();

app.get("/search", async (req, res, next) => {
  try {
    const query = req.query.q as string;
    const index = await VectorStoreIndex.fromDocuments([
      new Document({ text: "Hello world" }),
    ]);
    const engine = index.asQueryEngine();
    const response = await engine.query({ query });
    res.json({ answer: response.toString() });
  } catch (err) {
    next(err);
  }
});
```
The key detail: LlamaIndex TypeScript APIs like VectorStoreIndex.fromDocuments() and query() are async. If your request handler isn’t async end-to-end, production will eventually surface event-loop-related failures.
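To see why this matters, here is a minimal, LlamaIndex-free sketch. An async function always returns a Promise, and treating that Promise as the value itself is the root of most of these failures (`fakeQuery` is a hypothetical stand-in for a real query call, not a LlamaIndex API):

```typescript
// `fakeQuery` is a hypothetical stand-in for an async LlamaIndex call.
async function fakeQuery(q: string): Promise<string> {
  return `answer to ${q}`;
}

// BROKEN: without `await`, you hold the Promise object, not the answer.
function brokenHandler(): string {
  const result = fakeQuery("hello"); // missing await
  return String(result); // "[object Promise]"
}

// FIXED: async end-to-end, so the resolved value flows through.
async function fixedHandler(): Promise<string> {
  return await fakeQuery("hello");
}
```

The same shape applies to every handler in this article: if any link in the chain drops the `await`, the value you pass downstream is a pending Promise.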
Other Possible Causes
1. Running LlamaIndex inside a serverless function with reused globals
If you cache state incorrectly across invocations, you can end up reusing stale async resources.
```typescript
let globalIndexPromise: Promise<VectorStoreIndex> | null = null;

export const handler = async () => {
  if (!globalIndexPromise) {
    globalIndexPromise = VectorStoreIndex.fromDocuments(docs);
  }
  const index = await globalIndexPromise;
  return await index.asQueryEngine().query({ query: "hello" });
};
```
Fix:

- Keep initialization idempotent
- Avoid holding partially initialized promises across cold starts unless you control the lifecycle carefully
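One way to keep a cached init promise safe is to drop it on rejection, so the next invocation retries instead of awaiting a permanently failed promise. This is a sketch, not a LlamaIndex API: `Index` and `build` are placeholders for your real index type and `VectorStoreIndex.fromDocuments()` call.

```typescript
// Sketch: cache the init promise, but reset the cache if initialization
// fails, so a failed cold start doesn't poison every later invocation.
type Index = { query: (q: string) => Promise<string> }; // placeholder type

let cached: Promise<Index> | null = null;

function getIndex(build: () => Promise<Index>): Promise<Index> {
  if (!cached) {
    cached = build().catch((err) => {
      cached = null; // forget the failed promise; the next call retries
      throw err;
    });
  }
  return cached;
}
```

Concurrent invocations still share one in-flight build, but a rejected build is never reused.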
2. Mixing CommonJS and ESM in production builds
A bad transpilation setup can create weird runtime behavior around dynamic imports and top-level await.
For example, `package.json` declaring ESM while `tsconfig.json` emits CommonJS:

```json
{ "type": "module" }
```

```json
{ "compilerOptions": { "module": "CommonJS" } }
```
Use one module system consistently:
- ESM everywhere if your runtime supports it
- Or CommonJS everywhere with compatible transpilation settings
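For an ESM service, a consistent `tsconfig.json` (a sketch, assuming a recent Node and TypeScript; pair it with `"type": "module"` in `package.json`) looks like:

```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022"
  }
}
```

`NodeNext` makes TypeScript follow the same ESM/CommonJS rules Node itself applies, so local and production behavior match.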
3. Using top-level await in files that are not actually executed as ESM
This often fails only after deployment.
```typescript
// BROKEN if compiled/run as CommonJS
const index = await VectorStoreIndex.fromDocuments(docs);
```
Fix by wrapping startup code:
```typescript
async function main() {
  const index = await VectorStoreIndex.fromDocuments(docs);
  console.log("ready");
}

main().catch(console.error);
```
4. Swallowing promise rejections in background jobs
A job runner may report a generic loop error while the real issue is an unhandled rejection.
```typescript
queue.process(async (job) => {
  const engine = await buildEngine();
  engine.query({ query: job.data.q }); // missing await
});
```
Fix:
```typescript
queue.process(async (job) => {
  const engine = await buildEngine();
  return await engine.query({ query: job.data.q });
});
```
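As a safety net, Node's standard `process.on("unhandledRejection")` hook makes forgotten awaits loud instead of letting the runner report a vague generic error. This is a sketch of a global handler, not a substitute for awaiting:

```typescript
// Safety net: surface unhandled rejections explicitly so a missing
// `await` shows up as a real stack trace instead of a vague runner error.
process.on("unhandledRejection", (reason) => {
  console.error("Unhandled rejection:", reason);
  process.exitCode = 1; // fail the process/job instead of limping on
});
```

Register it once at startup; it changes nothing for code that awaits correctly.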
How to Debug It
1. Find the first real stack trace. Don't stop at `LlamaIndexError`; look for the first line pointing to your route handler, job processor, or startup script.
2. Search for missing `await`. Check every call to `VectorStoreIndex.fromDocuments()`, `index.asQueryEngine().query()`, and any custom retriever or embedding call. If it returns a promise, await it.
3. Check your runtime mode. Confirm whether production runs as ESM, CommonJS, a serverless function, or an edge runtime. A file that works locally can fail if production compiles differently.
4. Add explicit logging around each async boundary. Log before and after each major step:

```typescript
console.log("building index");
const index = await VectorStoreIndex.fromDocuments(docs);

console.log("querying");
const response = await index.asQueryEngine().query({ query });

console.log("done");
```

If it hangs between two logs, that's your fault line.
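A tiny helper makes that before/after logging reusable. This is a sketch (`step` is not a LlamaIndex API), but it works for any async boundary:

```typescript
// Wrap each async boundary with start/done logs so a hang points at
// the exact step where it occurred.
async function step<T>(label: string, fn: () => Promise<T>): Promise<T> {
  console.log(`start: ${label}`);
  const result = await fn();
  console.log(`done: ${label}`);
  return result;
}

// Usage (placeholders for the real calls):
// const index = await step("build index", () =>
//   VectorStoreIndex.fromDocuments(docs));
// const response = await step("query", () =>
//   index.asQueryEngine().query({ query }));
```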
Prevention
- Make every LlamaIndex integration path explicitly async from entrypoint to response.
- Standardize on one module system and one TypeScript target per service.
- Wrap all background jobs and request handlers with try/catch and return awaited promises.
- Add a smoke test that exercises `VectorStoreIndex`, `RetrieverQueryEngine`, and your deployment runtime before shipping.
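The smoke test doesn't need to be elaborate. A minimal sketch (where `search` is a hypothetical wrapper around your index and query engine, not a LlamaIndex API) just proves the async path resolves end-to-end in the deployed runtime:

```typescript
// Hypothetical wrapper: in a real smoke test this body would call
// VectorStoreIndex.fromDocuments(...) and engine.query(...).
async function search(q: string): Promise<string> {
  return `result for ${q}`;
}

// Smoke test: the handler must be awaitable and actually resolve.
async function smokeTest(): Promise<void> {
  const out = await search("hello");
  if (typeof out !== "string" || out.length === 0) {
    throw new Error("smoke test failed: empty result");
  }
}
```

Run it against the same build artifact and module settings you deploy, since that is where the ESM/CommonJS mismatches above tend to surface.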
If you want this to stay out of production incidents, treat LlamaIndex like any other async I/O layer. The bug is usually not in LlamaIndex itself; it’s in how your app starts, awaits, and returns work around Node’s event loop.
By Cyprian Aarons, AI Consultant at Topiax.