# How to Fix 'async event loop error when scaling' in LangChain (TypeScript)
When you see an "async event loop" error while scaling a LangChain TypeScript app, it usually means you're creating or reusing async resources in a way that breaks under concurrency. In practice, this shows up when the app works locally, then starts failing once you add parallel requests, serverless handlers, or background jobs.
The root issue is almost always lifecycle management: an LLM client, retriever, vector store, or chain is being shared across requests while some part of the stack expects per-request async setup and teardown.
## The Most Common Cause
The #1 cause is reusing a singleton chain/client that holds async state across concurrent invocations.
This happens a lot with ChatOpenAI, OpenAIEmbeddings, vector stores, and custom tools inside long-lived Node processes. The code looks fine until multiple requests hit the same instance at once.
### Broken pattern vs. fixed pattern
| Broken | Fixed |
|---|---|
| Shared global chain/client | Create per-request instances or use a safe factory |
| Async init hidden in module scope | Explicit init inside request handler |
| Concurrency on mutable state | Isolated state per invocation |
```typescript
// ❌ Broken: shared singleton across requests
import { ChatOpenAI } from "@langchain/openai";
import { RunnableSequence } from "@langchain/core/runnables";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const chain = RunnableSequence.from([
  // ... prompt, model, parser
]);

export async function handler(req: Request) {
  const body = await req.json();
  // Under load this can surface as:
  // "Error: Cannot read properties of undefined"
  // or event-loop / async resource errors from nested dependencies
  const result = await chain.invoke({ input: body.input });
  return Response.json(result);
}
```
```typescript
// ✅ Fixed: create request-scoped instances
import { ChatOpenAI } from "@langchain/openai";
import { RunnableSequence } from "@langchain/core/runnables";

function buildChain() {
  const llm = new ChatOpenAI({
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
  });
  return RunnableSequence.from([
    // ... prompt, model, parser
  ]);
}

export async function handler(req: Request) {
  const body = await req.json();
  const chain = buildChain();
  const result = await chain.invoke({ input: body.input });
  return Response.json(result);
}
```
If you’re using a retriever or vector store, the same rule applies. Don’t keep one mutable async pipeline instance around if it’s touched by concurrent requests.
## Other Possible Causes
### 1) Mixing invoke() with fire-and-forget promises
If you start async work and don’t await it, Node can keep dangling handles alive and fail unpredictably under scale.
```typescript
// ❌ Broken: not awaited, the promise is dropped
void chain.invoke({ input });
res.status(200).send("ok");

// ✅ Fixed: await the call before responding
const result = await chain.invoke({ input });
res.status(200).json(result);
```
### 2) Using Promise.all() against rate-limited LangChain calls without control
LangChain itself is fine with concurrency, but your provider may not be. Too many parallel calls can look like loop instability when the real issue is overload.
```typescript
// ❌ Broken: unbounded fan-out against a rate-limited provider
await Promise.all(inputs.map((input) => chain.invoke({ input })));

// ✅ Fixed: limit concurrency
import pLimit from "p-limit";

const limit = pLimit(3);
await Promise.all(
  inputs.map((input) => limit(() => chain.invoke({ input })))
);
```
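If you'd rather not add a dependency, the limiter pattern is small enough to sketch by hand. This is a simplified version of what `p-limit` does (it handles more edge cases); `createLimiter` is an illustrative name:

```typescript
// Minimal concurrency limiter: at most `max` tasks run at once.
function createLimiter(max: number) {
  let active = 0;
  const queue: Array<() => void> = [];

  // When a task finishes, free its slot and wake the next waiter, if any.
  const next = () => {
    active--;
    queue.shift()?.();
  };

  return async function limit<T>(task: () => Promise<T>): Promise<T> {
    if (active >= max) {
      // All slots busy: park until a finishing task wakes us.
      await new Promise<void>((resolve) => queue.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      next();
    }
  };
}
```

Usage mirrors the `p-limit` example above: `const limit = createLimiter(3);` then wrap each `chain.invoke` call in `limit(() => ...)`.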
### 3) Reusing a stream after it has closed
Streaming chains and callbacks are easy to misuse. If you read from an exhausted stream twice, you’ll get runtime failures that often appear only under load.
```typescript
// ❌ Broken: an async iterator can only be consumed once
const stream = await chain.stream({ input });
for await (const chunk of stream) console.log(chunk);
for await (const chunk of stream) console.log(chunk); // invalid reuse

// ✅ Fixed: create a fresh stream for each consumer
const stream1 = await chain.stream({ input });
for await (const chunk of stream1) console.log(chunk);

const stream2 = await chain.stream({ input });
for await (const chunk of stream2) console.log(chunk);
```
### 4) Serverless runtime mismatch
Some LangChain integrations assume Node APIs that don’t behave the same in edge runtimes. If your deployment target is Edge or Bun and your local test is Node.js, async resource handling can break.
```typescript
// Config hint for a Next.js API route: pin the route to the Node.js runtime
export const runtime = "nodejs";
```
If you’re using @langchain/community integrations with filesystem access, sockets, or native modules, keep them on Node runtime unless you’ve verified compatibility.
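One cheap safeguard is to fail fast when a Node-only integration gets loaded somewhere else. A minimal sketch, assuming Node.js reports `process.release.name === "node"` (it does) while Edge runtimes typically expose no `process.release`; the function name and message are illustrative:

```typescript
// Illustrative guard: throw immediately if a Node-only integration is
// imported on a runtime that is not Node.js.
function assertNodeRuntime(feature: string): void {
  // Avoid assuming Node typings: read `process` off globalThis.
  const proc = (globalThis as { process?: { release?: { name?: string } } })
    .process;
  if (proc?.release?.name !== "node") {
    throw new Error(`${feature} requires the Node.js runtime`);
  }
}
```

Calling `assertNodeRuntime("pdf loader")` at module load turns a confusing async failure under traffic into an immediate, readable deployment error.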
## How to Debug It

1. **Find the first real stack frame.**
   - Don't stop at the top-level message.
   - Look for the first LangChain class involved: `RunnableSequence`, `ChatOpenAI`, `ConversationalRetrievalQAChain`, `VectorStoreRetriever`.
   - The first non-framework frame usually points to the bad lifecycle boundary.
2. **Remove concurrency.**
   - Replace `Promise.all()` with one call.
   - If the error disappears, you've got a shared-state or rate-limit problem.
   - Then reintroduce concurrency with a limiter like `p-limit`.
3. **Move all construction inside the request path.**
   - Build the model, embeddings, retriever, and chain inside the handler.
   - If that fixes it, your bug was singleton reuse.
4. **Check runtime and adapter compatibility.**
   - Confirm whether you're on a Node.js serverless function, an Edge runtime, or a long-lived worker process.
   - Verify every package in your LangChain stack supports that environment.
## Prevention

- Keep LangChain objects request-scoped unless they are explicitly documented as safe singletons.
- Put concurrency limits around batch processing and fan-out workflows.
- Standardize on one runtime per service path: Node for heavy LangChain integrations; Edge only when every dependency is verified compatible.
The pattern here is simple: most “async event loop” failures are really lifecycle bugs disguised as scaling bugs. Once you stop sharing mutable async objects across requests and control concurrency explicitly, these errors usually disappear.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.