# How to Fix 'authentication failed' in CrewAI (TypeScript)
## What the error means
`authentication failed` in CrewAI TypeScript usually means the SDK tried to call a model provider or backend service without valid credentials. It typically shows up when you first run an agent, invoke a task, or switch environments, and your API key is missing, wrong, or not being loaded.
In practice, this is almost always a configuration problem, not an agent logic problem. The failure usually happens before your crew even starts doing useful work.
## The Most Common Cause
The #1 cause is a missing or incorrectly loaded environment variable for the model provider. In CrewAI TypeScript, people often define the key in .env but never load it, or they use the wrong variable name for the provider they configured.
Here’s the broken pattern and the fixed pattern:

**Broken:**

```ts
import { Agent } from "crewai";

const agent = new Agent({
  role: "Support Analyst",
  goal: "Answer user questions",
  backstory: "You help customers",
  llm: {
    provider: "openai",
    apiKey: process.env.OPENAI_KEY, // wrong env var
    model: "gpt-4o-mini",
  },
});

await agent.run("Check this ticket");
```

**Fixed:**

```ts
import "dotenv/config";
import { Agent } from "crewai";

const agent = new Agent({
  role: "Support Analyst",
  goal: "Answer user questions",
  backstory: "You help customers",
  llm: {
    provider: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
  },
});

await agent.run("Check this ticket");
```
If you’re using OpenAI, the SDK will usually fail with something like:

```txt
CrewAIError: authentication failed
```

or a lower-level provider error such as:

```txt
AuthenticationError: Incorrect API key provided
```
The key detail is that `process.env.OPENAI_KEY` is not what most setups expect. Use the exact variable name your provider integration expects (`OPENAI_API_KEY` for OpenAI), and make sure dotenv is loaded before you instantiate the agent.
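To fail loudly at the right moment, you can resolve and sanity-check the key in one place before constructing any agents. This is a sketch under the assumption that your key lives in `OPENAI_API_KEY`; `resolveOpenAIKey` is a hypothetical helper, not part of CrewAI:

```typescript
// Hypothetical helper: surface a clear error at startup instead of a
// generic "authentication failed" deep inside the SDK. Load dotenv
// (import "dotenv/config") before this runs.
function resolveOpenAIKey(): string {
  const raw = process.env.OPENAI_API_KEY;
  if (!raw) {
    throw new Error("OPENAI_API_KEY is not set. Did you load .env before this ran?");
  }
  const key = raw.trim();
  if (key !== raw) {
    console.warn("OPENAI_API_KEY had surrounding whitespace; using the trimmed value.");
  }
  return key;
}
```

Pass the result as `apiKey` in the agent's `llm` config so every agent shares one validated credential.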
## Other Possible Causes
### 1) .env is correct locally, but not loaded at runtime

This happens when your app runs through tsx, node, Docker, or a test runner that never loads environment variables.
```ts
// broken: nothing loads .env, so this is undefined
const apiKey = process.env.OPENAI_API_KEY;
```

```ts
// fixed: load .env before reading from process.env
import "dotenv/config";

const apiKey = process.env.OPENAI_API_KEY;
```
If you run in Docker, also check that the container actually receives the env var:
```yaml
# docker-compose.yml
environment:
  OPENAI_API_KEY: ${OPENAI_API_KEY}
```
### 2) Wrong provider selected for the key you passed
An OpenAI key will not authenticate against Anthropic, Azure OpenAI, or a local gateway expecting different headers.
```ts
// broken: an OpenAI key sent to the Anthropic provider
new Agent({
  llm: {
    provider: "anthropic",
    apiKey: process.env.OPENAI_API_KEY,
    model: "claude-3-5-sonnet-latest",
  },
});
```

```ts
// fixed: provider, key, and model all match
new Agent({
  llm: {
    provider: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
  },
});
```
If you’re using Azure OpenAI, don’t treat it like plain OpenAI. You usually need endpoint-specific config and deployment names, not just a raw model name.
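As an illustration only (the exact option names depend on your CrewAI version, so treat every field below as an assumption), an Azure OpenAI setup typically needs an endpoint, an API version, and a deployment name rather than a bare model:

```typescript
// Hypothetical shape: check your SDK's docs for the real option names.
const azureLLMConfig = {
  provider: "azure-openai",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT, // e.g. https://my-resource.openai.azure.com
  apiVersion: "2024-02-01",
  deployment: "my-gpt-4o-mini-deployment", // deployment name, not a raw model name
};
```

The important difference from plain OpenAI: Azure authenticates per resource endpoint, and the "model" is whatever name you gave the deployment in the Azure portal.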
### 3) Expired or revoked secret
This one looks identical at runtime. The code is fine, but the credential was rotated in your cloud console or secrets manager.
```bash
# config snippet: the key was rotated, but the old value is still deployed
OPENAI_API_KEY=sk-live-old-key-that-no-longer-works
```
Fix by replacing it in your secrets store and redeploying. If your app reads from a secret manager at startup, restart the service after rotation.
### 4) Trailing spaces or quotes in secrets
I’ve seen this more times than I should have. The value looks valid in .env, but it includes hidden whitespace or copied quotes.
```bash
# broken (note the trailing space inside the quotes)
OPENAI_API_KEY="sk-proj-abc123 "
```

```bash
# fixed
OPENAI_API_KEY=sk-proj-abc123
```
If you suspect this, log only the length of the string, not the secret itself:
```ts
console.log("API key length:", process.env.OPENAI_API_KEY?.length);
```
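If the length alone isn't conclusive, you can check for hidden whitespace or copied quotes directly without ever printing the secret. A small sketch, where `looksSanitized` is a hypothetical helper:

```typescript
// Hypothetical helper: detect the most common copy/paste damage
// (whitespace, wrapping quotes) without logging the secret itself.
function looksSanitized(value: string | undefined): boolean {
  if (!value) return false;
  if (value !== value.trim()) return false;    // leading/trailing whitespace
  if (/^["']|["']$/.test(value)) return false; // copied quotes
  return true;
}

console.log("key looks clean:", looksSanitized(process.env.OPENAI_API_KEY));
```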
### 5) Using a custom base URL without matching auth headers
If you point CrewAI at an internal gateway or proxy, authentication may fail because that service expects a different header format.
```ts
// broken: the proxy expects an x-api-key header,
// but the SDK sends an Authorization bearer token
llm: {
  provider: "openai",
  apiKey: process.env.INTERNAL_GATEWAY_KEY,
  baseUrl: "https://llm-gateway.internal/v1",
}
```
In these setups, check whether CrewAI supports custom headers for your transport layer. If not, put an adapter in front of it that translates auth correctly.
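If CrewAI can't send the header your gateway wants, one option is a thin adapter that rewrites the auth header in transit. Here is a minimal sketch of just the translation step, assuming a hypothetical gateway that wants `x-api-key` while the SDK sends `Authorization: Bearer …`:

```typescript
// Rewrite an incoming Authorization bearer header into the
// x-api-key header a hypothetical internal gateway expects.
function translateAuthHeaders(
  incoming: Record<string, string>
): Record<string, string> {
  const headers = { ...incoming };
  const auth = headers["authorization"] ?? headers["Authorization"];
  if (auth?.startsWith("Bearer ")) {
    headers["x-api-key"] = auth.slice("Bearer ".length);
    delete headers["authorization"];
    delete headers["Authorization"];
  }
  return headers;
}
```

In practice you would run this inside a small HTTP proxy (Express, a Cloudflare Worker, etc.) that sits between CrewAI's `baseUrl` and the gateway.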
## How to Debug It
- **Print whether the key exists at startup:**

  ```ts
  console.log({
    hasOpenAIApiKey: Boolean(process.env.OPENAI_API_KEY),
    keyLength: process.env.OPENAI_API_KEY?.length ?? 0,
  });
  ```

  If `false` or `0`, stop there. Your app never received credentials.

- **Confirm provider and model match:**
  - OpenAI key + OpenAI provider
  - Anthropic key + Anthropic provider
  - Azure endpoint + Azure-specific config

  A mismatch often produces `CrewAIError: authentication failed` even though the real issue is provider selection.

- **Run the same credential outside CrewAI.** Test directly against the provider SDK or API with a tiny script. If that fails too, CrewAI is not the problem.

- **Check deployment/runtime secrets:**
  - Local shell vs `.env`
  - Docker Compose env injection
  - CI/CD secret names
  - Secret manager rotation

  Most “works on my machine” cases come from one of these layers dropping the variable before Node starts.
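For OpenAI, testing the credential outside CrewAI can be as small as listing models with plain `fetch` (built into Node 18+). If this fails with a 401, the credential itself is bad, not your agent code:

```typescript
// Standalone credential check against the OpenAI REST API.
// Assumes OPENAI_API_KEY is set in the environment.
function authHeaders(key: string): Record<string, string> {
  return { Authorization: `Bearer ${key}` };
}

async function checkOpenAIKey(): Promise<void> {
  const key = process.env.OPENAI_API_KEY;
  if (!key) {
    console.error("OPENAI_API_KEY is not set");
    return;
  }
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: authHeaders(key),
  });
  console.log(res.ok ? "credential OK" : `auth check failed: HTTP ${res.status}`);
}

checkOpenAIKey().catch((err) => console.error(err));
```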
## Prevention
- **Load environment variables explicitly at process startup:**

  ```ts
  import "dotenv/config";
  ```

- **Standardize secret names across repos:**
  - `OPENAI_API_KEY`
  - `ANTHROPIC_API_KEY`
  - `AZURE_OPENAI_API_KEY`

- **Add a boot-time assertion so bad configs fail fast:**

  ```ts
  if (!process.env.OPENAI_API_KEY) {
    throw new Error("Missing OPENAI_API_KEY");
  }
  ```
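The single-variable assertion scales poorly once you support several providers. A sketch of a batch version, where `requireEnv` is a hypothetical helper, not part of CrewAI:

```typescript
// Hypothetical helper: assert every required variable at boot,
// reporting all missing names at once instead of one at a time.
function requireEnv(names: string[]): Record<string, string> {
  const missing = names.filter((n) => !process.env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return Object.fromEntries(names.map((n) => [n, process.env[n] as string]));
}
```

Call it once at the top of your entry point, right after loading dotenv, so every missing secret shows up in a single error message.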
If you’re seeing `authentication failed` in CrewAI TypeScript, start with credentials first. Nine times out of ten, that’s where the bug lives.
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist + starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.