# How to Fix 'invalid API key in production' in CrewAI (TypeScript)
## What this error actually means

`invalid API key in production` usually means CrewAI is not reading the key you think it is, or it's reading a placeholder/empty value after deployment. In TypeScript projects, this shows up most often when local `.env` loading works, but the production runtime never receives the same environment variables.
You'll typically see it when the agent first tries to call a model provider and CrewAI throws something like:

```text
Error: invalid API key in production
```

or a provider-level auth failure such as:

```text
OpenAIError: Incorrect API key provided
```
## The Most Common Cause
The #1 cause is loading environment variables too late, or hardcoding the API key into client-side/bundled code.

In CrewAI TypeScript apps, people often instantiate `Agent`, `Task`, or `Crew` before `dotenv.config()` runs, or they pass `process.env.OPENAI_API_KEY` from code that gets bundled into a serverless/client context where the variable is missing.
### Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Environment loaded after imports/instantiation | Environment loaded before any CrewAI setup |
| Key read from code that may be bundled away | Key read only on server/runtime |
| No validation of required env vars | Fail fast if key is missing |
```ts
// broken.ts
import { Agent } from "crewai";
import dotenv from "dotenv";

const agent = new Agent({
  role: "Support Engineer",
  goal: "Answer customer questions",
  backstory: "You are precise and helpful.",
  llm: {
    provider: "openai",
    apiKey: process.env.OPENAI_API_KEY, // undefined in production if env isn't loaded yet
    model: "gpt-4o-mini",
  },
});

dotenv.config(); // too late

export { agent };
```
```ts
// fixed.ts
import "dotenv/config"; // loads .env as an import side effect, before other module bodies run
import { Agent } from "crewai";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY");
}

const agent = new Agent({
  role: "Support Engineer",
  goal: "Answer customer questions",
  backstory: "You are precise and helpful.",
  llm: {
    provider: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
  },
});

export { agent };
```

The important part is the order. Note the `import "dotenv/config"` side-effect import: in ESM, all `import` statements are hoisted above other top-level code, so calling `dotenv.config()` between two imports does not actually run before the second module loads. If your app reads the variable before the env file is loaded, CrewAI receives `undefined`, and the provider rejects it as an invalid key.
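To make that ordering guarantee explicit, one common pattern is to isolate env loading and validation in a dedicated module that every entry point imports first. A minimal sketch; the `env.ts` filename and the `requireEnv` helper are illustrative conventions, not CrewAI APIs:

```typescript
// env.ts -- import this module before anything that reads process.env.
// In a real app the first line here would be:
//   import "dotenv/config";
// which populates process.env as an import side effect.

// Read a required variable or fail fast with a clear message.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

An entry point then starts with `import { requireEnv } from "./env";` and calls `requireEnv("OPENAI_API_KEY")` before constructing any agents, so a missing secret fails loudly at boot instead of surfacing later as a confusing auth error.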
Other Possible Causes
### 1) Wrong environment variable name in production

Local works because `.env` has one name, while production uses another.

```bash
# local
OPENAI_API_KEY=sk-...

# production typo
OPEN_AI_KEY=sk-...
```

If your code expects `process.env.OPENAI_API_KEY`, this will fail every time in prod.
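A small startup check can flag a near-miss name before the first model call. A hedged sketch; the list of typo variants is illustrative, not exhaustive:

```typescript
// Warn when a common misspelling of the expected variable is set
// but the canonical name is not. The variants below are examples only.
const EXPECTED = "OPENAI_API_KEY";
const COMMON_TYPOS = ["OPEN_AI_KEY", "OPENAI_KEY", "OPENAI_APIKEY"];

export function findEnvTypo(
  env: Record<string, string | undefined>
): string | null {
  if (env[EXPECTED]) return null; // canonical name present, nothing to flag
  return COMMON_TYPOS.find((name) => env[name] !== undefined) ?? null;
}

const typo = findEnvTypo(process.env);
if (typo) {
  console.warn(`Found ${typo} but not ${EXPECTED}; was the secret renamed in production?`);
}
```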
### 2) Secret injected at build time instead of runtime

This is common in Next.js, Vite, and serverless builds: build-time variables get inlined into the bundle when it is built, so they can be missing or stale at runtime.

```ts
// bad for server runtime if this gets baked into a client bundle
const apiKey = import.meta.env.VITE_OPENAI_API_KEY;
```

Use server-only runtime access:

```ts
const apiKey = process.env.OPENAI_API_KEY;
```

For Next.js server code:

```ts
export const runtime = "nodejs";
const apiKey = process.env.OPENAI_API_KEY;
```
### 3) Using a public/client-prefixed variable name

Anything prefixed for client exposure is the wrong place for secrets.

```bash
NEXT_PUBLIC_OPENAI_API_KEY=sk-...
```

That variable is designed to be exposed to the browser. For CrewAI, keep secrets server-side only:

```bash
OPENAI_API_KEY=sk-...
```
### 4) Deploy platform secret not attached to the right service

Your platform may have the secret set globally but not on the actual worker/function running CrewAI.

Example config issue:

```yaml
# docker-compose.yml
services:
  api:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
  worker:
    image: my-worker
    # missing OPENAI_API_KEY here
```

If CrewAI runs in `worker`, it will fail even though `api` works.
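The fix is to attach the variable to every service that actually runs CrewAI, not just the API. A sketch under the same assumed layout:

```yaml
# docker-compose.yml
services:
  api:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
  worker:
    image: my-worker
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY} # the worker now sees it too
```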
### 5) Provider mismatch inside CrewAI config

Sometimes the key is valid, but you're telling CrewAI to use the wrong provider wrapper.

```ts
new Agent({
  role: "Analyst",
  goal: "Summarize reports",
  backstory: "...",
  llm: {
    provider: "anthropic", // but you're passing an OpenAI key
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
  },
});
```

Match provider and key type:

```ts
llm: {
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
}
```
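A startup sanity check can catch this pairing mistake early. It leans on key-format conventions (OpenAI secret keys typically start with `sk-`, Anthropic keys with `sk-ant-`); providers can change these formats, so treat a mismatch as a warning rather than proof:

```typescript
// Heuristic guard: does the key's prefix plausibly match the provider?
type Provider = "openai" | "anthropic";

export function keyLooksRightFor(provider: Provider, apiKey: string): boolean {
  if (provider === "anthropic") {
    return apiKey.startsWith("sk-ant-");
  }
  // Anthropic keys also start with "sk-", so exclude them for OpenAI.
  return apiKey.startsWith("sk-") && !apiKey.startsWith("sk-ant-");
}
```

Call it just before constructing the agent, e.g. `if (!keyLooksRightFor("openai", key)) console.warn("provider/key mismatch?")`.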
## How to Debug It

- Print what CrewAI actually sees:

  ```ts
  console.log("OPENAI_API_KEY present?", !!process.env.OPENAI_API_KEY);
  console.log("key prefix:", process.env.OPENAI_API_KEY?.slice(0, 7));
  ```

  If this prints `false` or `undefined`, stop looking at CrewAI first. Your runtime config is broken.

- Check where env loading happens. Make sure `dotenv.config()` runs before any imports that instantiate agents or crews. In ESM/TypeScript apps, import order matters more than people expect.

- Verify deployment secrets on the exact runtime. Check your Docker container, Lambda function, Vercel/Render/Fly service, or worker process. The secret must exist in the same execution context that creates `Agent`, `Task`, or `Crew`.

- Confirm provider/key pairing. If you see errors like `OpenAIError: Incorrect API key provided` or `AuthenticationError`, compare:
  - the provider name in your CrewAI config
  - the actual secret value format
  - the model/provider package being used
## Prevention

- Load and validate secrets at startup:

  ```ts
  const required = ["OPENAI_API_KEY"] as const;
  for (const name of required) {
    if (!process.env[name]) throw new Error(`Missing ${name}`);
  }
  ```

- Keep all CrewAI execution server-side. Never expose model keys through browser bundles or public env vars.

- Add a startup smoke test in CI/CD. Instantiate a minimal `Agent` with real env vars in staging before promoting to production.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.