How to Fix 'invalid API key when scaling' in CrewAI (TypeScript)
When CrewAI reports an invalid API key during scaling, it usually means the worker or runtime spawned on the scale-out path did not receive the same API credentials as your local process. This shows up most often after moving from a single local run to Docker, a queue worker, a serverless function, or any setup where the app is instantiated in a second process.
In TypeScript projects, the trap is usually configuration that works in one process but disappears in another. The result is a failure inside OpenAIChatModel, LLM, or an agent runner when CrewAI tries to call the provider with an empty or wrong key.
The Most Common Cause
The #1 cause is reading process.env too early, then constructing your CrewAI objects before the environment is actually loaded in the scaled process.
This happens a lot with dotenv, NestJS bootstrapping, Next.js server code, and worker entrypoints. The main process has the key; the scaled worker does not.
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Env is read at module load time | Env is loaded before instantiation |
| Key becomes undefined in workers | Key is resolved inside startup path |
| Works locally, fails when scaled | Works in both main and worker processes |
```typescript
// broken.ts
import 'dotenv/config';
import { Agent } from '@crewai/core';
import { OpenAIChatModel } from '@crewai/llm';

const model = new OpenAIChatModel({
  apiKey: process.env.OPENAI_API_KEY, // can be undefined in scaled workers
  model: 'gpt-4o-mini',
});

export const supportAgent = new Agent({
  role: 'Support Engineer',
  goal: 'Answer customer questions',
  backstory: 'Handles banking support requests',
  llm: model,
});
```
```typescript
// fixed.ts
import 'dotenv/config';
import { Agent } from '@crewai/core';
import { OpenAIChatModel } from '@crewai/llm';

function createSupportAgent() {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error('Missing OPENAI_API_KEY');
  }
  const model = new OpenAIChatModel({
    apiKey,
    model: 'gpt-4o-mini',
  });
  return new Agent({
    role: 'Support Engineer',
    goal: 'Answer customer questions',
    backstory: 'Handles banking support requests',
    llm: model,
  });
}

export { createSupportAgent };
```
The important change is that agent construction happens at runtime, after the process has its environment. If you export a singleton agent from a module, scaling can break because that module may be evaluated before env injection happens.
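If you still want singleton-like sharing, a small lazy factory gives you one shared instance without constructing anything at import time. This is a generic sketch; the `lazy` helper is our own, not a CrewAI API:

```typescript
// Hypothetical helper: defers construction to the first call,
// then reuses the same instance for every later call.
function lazy<T>(factory: () => T): () => T {
  let instance: T | undefined;
  return () => (instance ??= factory());
}

// Usage with a factory like createSupportAgent from above:
// nothing is built until the first call, by which time the
// worker process has its environment loaded.
// const getSupportAgent = lazy(createSupportAgent);
```

The first caller pays the construction cost; everyone after that gets the cached instance, but crucially the construction happens at request time, not at module-evaluation time.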
Other Possible Causes
1) Wrong environment variable name in one runtime
A very common bug is setting OPENAI_API_KEY locally but injecting CREWAI_OPENAI_API_KEY or API_KEY in your deployment config.
```typescript
// broken
const apiKey = process.env.OPEN_AI_KEY; // typo

// fixed
const apiKey = process.env.OPENAI_API_KEY;
```
If you are using Docker Compose, pass the key through explicitly:

```yaml
environment:
  OPENAI_API_KEY: ${OPENAI_API_KEY}
```
2) Worker process does not inherit secrets
If you use BullMQ, RabbitMQ consumers, PM2 cluster mode, or Node worker threads, the child process may not inherit the env you think it does.
```typescript
import { Worker } from 'node:worker_threads';

// broken
new Worker('./worker.js'); // no env passed explicitly

// fixed: pass the env through explicitly
new Worker('./worker.js', {
  env: {
    ...process.env,
    OPENAI_API_KEY: process.env.OPENAI_API_KEY!,
  },
});
```
For queue workers, also check your deployment platform’s secret scope. A web container having access to secrets does not mean a background job container does.
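One nuance worth knowing: Node's built-in worker threads do inherit `process.env` by default; it is separate OS processes (PM2 cluster workers, containers, queue consumers on another host) that most commonly miss the secret. A quick probe, using only `node:worker_threads`, shows what a freshly spawned worker actually sees:

```typescript
import { Worker } from 'node:worker_threads';

// Spawns a throwaway worker thread that reports whether it can
// see OPENAI_API_KEY, and resolves with the answer.
function probeWorkerEnv(): Promise<boolean> {
  return new Promise((resolve, reject) => {
    const probe = new Worker(
      `const { parentPort } = require('node:worker_threads');
       parentPort.postMessage(Boolean(process.env.OPENAI_API_KEY));`,
      { eval: true }, // worker threads inherit process.env by default
    );
    probe.on('message', (hasKey) => resolve(Boolean(hasKey)));
    probe.on('error', reject);
  });
}
```

If this prints `true` locally but your deployed worker logs show the key missing, the gap is in your platform's secret injection, not in Node.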
3) Multiple providers mixed up
Some teams configure Anthropic or Azure OpenAI in one place and OpenAI in another. CrewAI then sends the request through the wrong provider client and you get an auth failure that looks like an invalid key.
```typescript
// broken: Anthropic key handed to the OpenAI client
const model = new OpenAIChatModel({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'gpt-4o-mini',
});

// fixed
const model = new OpenAIChatModel({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',
});
```
If you are using Azure OpenAI, make sure you are using the Azure-compatible client and endpoint settings instead of an OpenAI-only class.
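One way to avoid cross-wiring providers is to resolve credentials in a single place and branch on what the environment actually contains. The sketch below is illustrative: the `AZURE_OPENAI_*` variable names and the config shape are assumptions you should adapt to your SDK, not a CrewAI API:

```typescript
// Hypothetical config resolver: one function decides which provider's
// credentials are present, so no call site can mix them up.
type LlmConfig =
  | { provider: 'openai'; apiKey: string; model: string }
  | { provider: 'azure'; apiKey: string; endpoint: string; deployment: string };

function resolveLlmConfig(env: Record<string, string | undefined>): LlmConfig {
  if (env.AZURE_OPENAI_API_KEY && env.AZURE_OPENAI_ENDPOINT) {
    return {
      provider: 'azure',
      apiKey: env.AZURE_OPENAI_API_KEY,
      endpoint: env.AZURE_OPENAI_ENDPOINT,
      deployment: env.AZURE_OPENAI_DEPLOYMENT ?? 'gpt-4o-mini',
    };
  }
  if (env.OPENAI_API_KEY) {
    return { provider: 'openai', apiKey: env.OPENAI_API_KEY, model: 'gpt-4o-mini' };
  }
  throw new Error('No LLM credentials found in environment');
}
```

With this shape, the branch that builds the wrong client simply cannot receive the other provider's key.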
4) Secret rotation broke long-running workers
If your app scales horizontally and keys are rotated, one worker may still hold an old key while another has the new one. That creates intermittent failures.
```typescript
// broken pattern: client cached forever at module scope keeps the old key
const llmClient = new OpenAIChatModel({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',
});
```

Prefer creating the client per job or reloading config on startup:

```typescript
function buildLlm() {
  return new OpenAIChatModel({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o-mini',
  });
}
```
How to Debug It

1) Log the key presence, not the key value

In both your API server and worker entrypoint:

```typescript
console.log('OPENAI_API_KEY present:', Boolean(process.env.OPENAI_API_KEY));
```

If one prints `true` and the other prints `false`, you found it.

2) Check where CrewAI objects are instantiated

Search for:

- `new Agent(...)`
- `new Crew(...)`
- `new OpenAIChatModel(...)`
- module-level exports of configured instances

If these run at import time, move them into a factory function.

3) Confirm which runtime fails

Add logs at each entrypoint:

```typescript
console.log('pid', process.pid);
console.log('entrypoint', import.meta.url);
```

If only scaled workers fail, it’s almost always env propagation or secret scope.

4) Inspect the real error chain

You may see messages like:

- `Error: Invalid API key provided`
- `401 Unauthorized`
- `CrewExecutionError`
- `OpenAILLMError`

The top-level CrewAI error is often generic; the underlying provider error tells you whether it’s missing env, wrong provider, or stale credentials.
Prevention

- Instantiate CrewAI agents and LLM clients inside startup functions or job handlers, not at module load time.
- Validate required secrets before creating any crew objects:

  ```typescript
  if (!process.env.OPENAI_API_KEY) throw new Error('Missing OPENAI_API_KEY');
  ```

- Make secret injection explicit for every runtime:
  - web server
  - background worker
  - queue consumer
  - serverless function
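Those per-runtime checks can be centralized in a tiny guard that every entrypoint calls before building any agents. A minimal sketch (`requireEnv` is our own helper, not a CrewAI API):

```typescript
// Validates that every named env var is present; throws with a single
// message listing all missing names, and returns the resolved values.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((n) => [n, env[n] as string]));
}

// Usage at the top of each entrypoint (web server, worker, consumer):
// const { OPENAI_API_KEY } = requireEnv(['OPENAI_API_KEY']);
```

Failing fast at startup, with every missing name in one error, is much easier to diagnose from a crashed worker's logs than an auth failure deep inside a crew run.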
If this error only appears during scaling, treat it as a deployment/config problem first, not an LLM problem. In most TypeScript CrewAI setups, the code is fine locally; the worker simply never got the key.
Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.