# How to Fix 'async event loop error when scaling' in CrewAI (TypeScript)
When you see an "async event loop error when scaling" in a CrewAI TypeScript app, it usually means you're creating or reusing async resources in a way the runtime can't safely expand across workers, tasks, or concurrent calls. In practice, it shows up when you move from one-off execution to parallel runs, queue-based processing, or server handlers that trigger multiple crews at once.
The root problem is usually not CrewAI itself. It’s almost always an event loop lifecycle issue: nested promises, shared clients, unawaited tasks, or code that was fine in a single-threaded test but breaks under concurrency.
## The Most Common Cause
The #1 cause is running multiple Crew executions inside an already active async context without isolating each run. In TypeScript, this often happens in API handlers, cron jobs, or worker loops where you call `crew.kickoff()` repeatedly and assume the runtime will manage everything.
Here’s the broken pattern:
```typescript
// Broken: shared execution path with unbounded concurrent kickoff calls
import { Crew } from "@crewai/typescript";

const crew = new Crew({
  agents: [/* ... */],
  tasks: [/* ... */],
});

export async function handler(req: Request) {
  const items = await req.json();

  // This can blow up under load with:
  // "async event loop error when scaling"
  // or related runtime errors around nested async execution.
  const results = await Promise.all(
    items.map(async (item) => {
      return crew.kickoff({ input: item });
    })
  );

  return Response.json({ results });
}
```
And here’s the safer pattern:
```typescript
// Fixed: isolate each run and control concurrency
import { Crew } from "@crewai/typescript";

function createCrew() {
  return new Crew({
    agents: [/* ... */],
    tasks: [/* ... */],
  });
}

export async function handler(req: Request) {
  const items = await req.json();
  const results: unknown[] = [];

  // Sequential loop: each iteration gets a fresh Crew and fully
  // completes before the next one starts.
  for (const item of items) {
    const crew = createCrew();
    const result = await crew.kickoff({ input: item });
    results.push(result);
  }

  return Response.json({ results });
}
```
If you need parallelism, add a real concurrency limit instead of `Promise.all` on everything:

```typescript
import pLimit from "p-limit";

const limit = pLimit(3);

const results = await Promise.all(
  items.map((item) =>
    limit(async () => {
      const crew = createCrew();
      return crew.kickoff({ input: item });
    })
  )
);
```
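If you'd rather not add a dependency, the same bound can be hand-rolled in a few lines. This is a minimal sketch of what `p-limit` does internally; `mapLimit` is an illustrative name, not a CrewAI or p-limit API.

```typescript
// Minimal bounded-concurrency map (illustrative, dependency-free).
// At most `limit` calls to `fn` are in flight at any moment, and
// results come back in input order.
async function mapLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Each "worker" loop pulls the next unclaimed index until the queue
  // is drained. JS is single-threaded, so next++ is safe here.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, () => worker())
  );
  return results;
}

// Usage with the createCrew factory from the fixed example above:
// const results = await mapLimit(items, 3, (item) => {
//   const crew = createCrew();
//   return crew.kickoff({ input: item });
// });
```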
## Other Possible Causes
### 1. Reusing one Crew instance across requests
A Crew object may carry state that should not be shared across concurrent requests. If one request mutates task context while another is still running, you get unpredictable async failures.
```typescript
// Broken
const crew = new Crew(config);

app.post("/run", async (req, res) => {
  const output = await crew.kickoff(req.body);
  res.json(output);
});

// Fixed
app.post("/run", async (req, res) => {
  const crew = new Crew(config);
  const output = await crew.kickoff(req.body);
  res.json(output);
});
```
### 2. Missing `await` on async task execution
This causes dangling promises and can surface as event loop errors when the process scales and exits early or overlaps work.
```typescript
// Broken: kickoff returns a promise that is never awaited
const result = crew.kickoff({ input });
// later code assumes result is ready, but this logs a pending Promise
console.log(result);

// Fixed
const result = await crew.kickoff({ input });
console.log(result);
```
### 3. Mixing Node timers, callbacks, and promise chains
If you wrap CrewAI execution inside callback APIs without proper promise handling, the runtime can end up with overlapping loops.
```typescript
// Broken: work detached inside a timer callback, outside any chain
setTimeout(() => {
  crew.kickoff({ input }).then(console.log);
}, 0);

// Fixed: stay on a single awaited promise chain
await new Promise((resolve) => setTimeout(resolve, 0));
const result = await crew.kickoff({ input });
console.log(result);
```
### 4. Worker/thread model mismatch
If you’re using worker_threads, serverless functions, or a process manager like PM2 with clustering, some async resources won’t cross boundaries cleanly. This often appears as repeated failures only after scaling beyond one process.
```json
{
  "exec_mode": "cluster",
  "instances": "max"
}
```
If the error appears only in cluster mode, test with a single process first:
```json
{
  "exec_mode": "fork",
  "instances": "1"
}
```
## How to Debug It
- Reproduce with concurrency set to one.
  - Run a single request.
  - Then run two.
  - If it fails only under `Promise.all`, your problem is concurrency and shared state.
- Check whether `Crew` is singleton-scoped.
  - Search for module-level instances like `const crew = new Crew(...)`.
  - Move creation inside the request/job handler.
- Look for unawaited promises.
  - Search for `.kickoff(` without `await`.
  - Also check `.then()` chains that are not returned from the parent function.
- Log execution boundaries.
  - Add logs before and after every kickoff.
  - If you see overlapping runs on the same instance, that's your bug.
Example diagnostic logging:

```typescript
console.log("before kickoff", requestId);
const output = await crew.kickoff(input);
console.log("after kickoff", requestId);
```

If `"before kickoff"` appears multiple times before any `"after kickoff"`, you've got uncontrolled parallelism.
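To automate that check, you can wrap the kickoff call in a small counter that warns the moment two runs overlap on the same instance. `trackOverlap` is an illustrative helper, not a CrewAI API.

```typescript
// Wrap any async function so overlapping invocations are reported.
// `label` just tags the warning; `inFlight` is per-wrapper state.
function trackOverlap<A extends unknown[], R>(
  label: string,
  fn: (...args: A) => Promise<R>
): (...args: A) => Promise<R> {
  let inFlight = 0;
  return async (...args: A): Promise<R> => {
    inFlight++;
    if (inFlight > 1) {
      console.warn(`[${label}] ${inFlight} overlapping runs detected`);
    }
    try {
      return await fn(...args);
    } finally {
      inFlight--;
    }
  };
}

// Usage: wrap the shared kickoff once, then call the wrapper everywhere.
// const trackedKickoff = trackOverlap("crew", (input) => crew.kickoff(input));
```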
## Prevention
- Create a fresh `Crew` per request or per job unless the library explicitly documents safe reuse.
- Use bounded concurrency with tools like `p-limit`, not raw `Promise.all` over large batches.
- Treat every CrewAI call like I/O: always `await` it and keep its lifecycle local to the execution boundary.
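The fresh-instance and local-lifecycle rules can be folded into one small helper. `withFresh` is an illustrative name, not a CrewAI API; it assumes a factory like the `createCrew` function from the earlier example.

```typescript
// Create a fresh instance, await the work, and keep the whole run's
// lifecycle inside one scope, even when the work throws.
async function withFresh<C, R>(
  create: () => C,
  use: (instance: C) => Promise<R>
): Promise<R> {
  const instance = create();
  // Awaiting here (rather than returning a floating promise) keeps the
  // run from outliving its caller.
  return await use(instance);
}

// Usage:
// const result = await withFresh(createCrew, (crew) => crew.kickoff({ input }));
```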
If this error started after “scaling,” assume shared state first. In TypeScript systems using CrewAI, that’s usually where the bug lives.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.