How to Fix 'invalid API key' in CrewAI (TypeScript)
What the error means
`invalid API key` in CrewAI usually means the SDK reached the provider, but the credential it sent was empty, malformed, or not accepted by that provider. In TypeScript projects, this typically shows up when you instantiate `OpenAIChatLLM`, `AnthropicChatLLM`, or another model wrapper and the key comes from the wrong env var, wrong runtime, or wrong file.
The failure often appears at startup or on the first agent run, with errors like:

- `Error: 401 Unauthorized`
- `Invalid API key provided`
- `OpenAIError: Incorrect API key provided`
- `AnthropicError: invalid x-api-key`
The Most Common Cause
The #1 cause is simple: your environment variable is not loaded in the Node process that runs CrewAI.
In TypeScript, developers often create the .env file correctly but forget to load it before constructing the LLM. The result is `process.env.OPENAI_API_KEY` being `undefined`, which then gets passed into CrewAI.
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Reads env before loading dotenv | Loads env first |
| Passes possibly undefined key | Validates key exists |
| Hides failure until runtime | Fails fast |
```ts
// broken.ts
import { OpenAIChatLLM } from "@crewai/llms";

// process.env.OPENAI_API_KEY may be undefined here if dotenv never ran
const llm = new OpenAIChatLLM({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

console.log(await llm.generate("Hello"));
```

```ts
// fixed.ts
import "dotenv/config"; // load .env before reading any env vars
import { OpenAIChatLLM } from "@crewai/llms";

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error("OPENAI_API_KEY is missing");
}

const llm = new OpenAIChatLLM({
  apiKey,
  model: "gpt-4o-mini",
});

console.log(await llm.generate("Hello"));
```
If you’re running with tsx, node, Vitest, Jest, or a Docker container, this matters even more. A .env file on disk does nothing unless the runtime loads it.
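The fail-fast check above can be factored into a small helper so every provider key goes through the same validation. `requireEnv` is an illustrative name, not a CrewAI API:

```ts
// Hypothetical helper: read a required environment variable or fail fast,
// before any LLM client is constructed with an empty credential.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`${name} is missing or empty; check your .env and runtime config`);
  }
  return value;
}

// Usage sketch:
// const llm = new OpenAIChatLLM({ apiKey: requireEnv("OPENAI_API_KEY"), model: "gpt-4o-mini" });
```

Throwing at construction time turns a confusing 401 during the first agent run into an immediate, named failure at startup.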
Other Possible Causes
1. Wrong environment variable name
CrewAI wrappers do not guess your secret name. If your code expects OPENAI_API_KEY but your shell exports OPEN_AI_KEY, you’ll get a 401-style failure.
```ts
// broken: env var name does not match what the shell exports
const llm = new OpenAIChatLLM({
  apiKey: process.env.OPEN_AI_KEY,
});
```

```ts
// fixed
const llm = new OpenAIChatLLM({
  apiKey: process.env.OPENAI_API_KEY,
});
```
Common provider names:

- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
- Azure OpenAI: often requires endpoint + deployment config, not just a key
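One way to avoid wrong-name mistakes is to centralize the provider-to-variable mapping in one place. This is a sketch; the names and structure are illustrative, not part of CrewAI:

```ts
// Conventional env var name per provider, kept in one place.
const PROVIDER_ENV_VARS = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
} as const;

type Provider = keyof typeof PROVIDER_ENV_VARS;

// Look up the key for a provider from its conventional env var.
function apiKeyFor(provider: Provider): string | undefined {
  return process.env[PROVIDER_ENV_VARS[provider]];
}
```

With this, a typo like `OPEN_AI_KEY` can only exist in one file, and the type system rejects unknown provider names.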
2. Quoted or malformed secret in .env
A key copied with extra quotes or spaces can fail validation at the provider boundary.
```bash
# broken
OPENAI_API_KEY="sk-proj-abc123 "

# fixed
OPENAI_API_KEY=sk-proj-abc123
```
If you’re reading from CI secrets, watch for trailing newline characters too. That’s common when secrets are pasted from terminals or exported from scripts.
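A defensive cleanup before use catches most of these copy-paste problems. `cleanSecret` is an illustrative helper, not a CrewAI function:

```ts
// Strip surrounding quotes, spaces, and trailing newlines that sneak in
// from .env files, terminals, or CI secret stores.
function cleanSecret(raw: string): string {
  return raw.trim().replace(/^["']|["']$/g, "").trim();
}
```

If cleaning changes the value, log a warning: it usually means the secret is stored incorrectly at the source and should be fixed there too.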
3. Using the wrong provider class for the key
An OpenAI key will not work with an Anthropic client, and vice versa. In CrewAI TypeScript projects this happens when someone swaps models but leaves the old secret in place.
```ts
// broken: OpenAI key passed to an Anthropic client
import { AnthropicChatLLM } from "@crewai/llms";

const llm = new AnthropicChatLLM({
  apiKey: process.env.OPENAI_API_KEY,
});
```

```ts
// fixed
import { AnthropicChatLLM } from "@crewai/llms";

const llm = new AnthropicChatLLM({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```
If you see messages like:

- `AnthropicError: invalid x-api-key`
- `OpenAIError: Incorrect API key provided`

that usually points to a provider mismatch, not just a missing secret.
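Key prefixes make mismatches cheap to detect before a request ever goes out: OpenAI keys currently start with `sk-` and Anthropic keys with `sk-ant-`. These formats are conventions that can change, so treat this as a warning signal, not a guarantee:

```ts
// Heuristic checks based on current key prefixes; formats may change,
// so use these for warnings only, not hard validation.
function looksLikeAnthropicKey(key: string): boolean {
  return key.startsWith("sk-ant-");
}

function looksLikeOpenAIKey(key: string): boolean {
  return key.startsWith("sk-") && !key.startsWith("sk-ant-");
}
```

A one-line warning like "this looks like an Anthropic key but you constructed an OpenAI client" saves a round trip to the provider and a misleading 401.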
4. Runtime mismatch in Docker or serverless
Your local machine may have the env var, but the container or deployment target does not. In Docker Compose, for example:
```yaml
# broken: no env vars passed into the container
services:
  app:
    image: my-crewai-app
```

```yaml
# fixed
services:
  app:
    image: my-crewai-app
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
```
Same issue in serverless platforms if you forgot to add the secret in project settings. The code works locally and fails only after deploy.
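A startup assertion catches this deploy-only failure immediately instead of on the first agent run. A minimal sketch, with illustrative names:

```ts
// Return the names of required env vars that are unset or blank.
function missingEnvVars(required: string[]): string[] {
  return required.filter((name) => {
    const value = process.env[name];
    return !value || value.trim() === "";
  });
}

// At container or function startup, before any agent code runs:
// const missing = missingEnvVars(["OPENAI_API_KEY"]);
// if (missing.length > 0) {
//   console.error(`Missing env vars: ${missing.join(", ")}`);
//   process.exit(1);
// }
```

Exiting non-zero at boot surfaces the problem in deploy logs and health checks, where it is much easier to diagnose than a 401 buried in agent output.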
How to Debug It
1. Print the value shape, not the full secret

Confirm the key exists and has a sane length:

```ts
console.log("API key present:", Boolean(process.env.OPENAI_API_KEY));
console.log("API key length:", process.env.OPENAI_API_KEY?.length);
```

2. Check which provider class you are using

- If your code uses `OpenAIChatLLM`, use an OpenAI-compatible key.
- If it uses `AnthropicChatLLM`, use an Anthropic key.
- Don't reuse one provider's secret for another client.

3. Verify `.env` loading happens before imports that read env

- In some setups, module initialization happens early.
- Put `import "dotenv/config";` at the entrypoint, not deep inside helper files.

4. Run a minimal repro outside CrewAI

- Call the same provider directly with the same env var.
- If direct SDK auth fails, CrewAI is not the problem.
- If direct SDK works and CrewAI fails, inspect how you pass config into agents and tools.
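The first debug step, printing shape instead of value, can be wrapped so a secret never leaks into logs. `describeSecret` is an illustrative helper:

```ts
// Describe a secret's shape (presence, length, short prefix) without revealing it.
function describeSecret(value: string | undefined): string {
  if (!value) return "missing";
  return `present (length ${value.length}, starts with "${value.slice(0, 3)}...")`;
}

// Usage sketch:
// console.log("OPENAI_API_KEY:", describeSecret(process.env.OPENAI_API_KEY));
```

The length alone often identifies the bug: a length of 0 or a surprisingly long value (trailing newline, embedded quotes) points straight at a loading or formatting problem.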
Prevention
- Load env vars once at process startup with `import "dotenv/config";`.
- Validate required secrets before creating any CrewAI LLM instance.
- Keep provider-specific keys named consistently across local dev, Docker, and CI:
  - `OPENAI_API_KEY`
  - `ANTHROPIC_API_KEY`
  - no custom aliases unless you map them explicitly

If you’re building agent workflows for production systems, treat API keys like any other dependency: validate early, fail fast, and never assume your runtime inherited shell state correctly.
Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.