How to Fix 'authentication failed' in CrewAI (Python)
If you’re seeing authentication failed in CrewAI, it usually means one of the underlying model providers rejected your API credentials. In practice, this shows up when CrewAI tries to call OpenAI, Anthropic, Azure OpenAI, or another LLM backend with a missing, invalid, or misrouted key.
This is rarely a CrewAI bug. It’s almost always an auth/config problem in your Python code, environment variables, or provider settings.
The Most Common Cause
The #1 cause is passing the wrong API key to the wrong provider, or not loading the key at all.
With CrewAI, this often happens when you instantiate LLM or Agent with a model name that expects one provider, but your environment contains credentials for another. You’ll usually see errors like:
- `openai.AuthenticationError: Incorrect API key provided`
- `anthropic.AuthenticationError: authentication failed`
- `litellm.AuthenticationError: Authentication Error`
- `crewai.llm.LLMError: authentication failed`
Broken vs fixed pattern

Broken:

```python
import os
from crewai import Agent
from crewai.llm import LLM

# OPENAI_API_KEY is missing or wrong
llm = LLM(model="gpt-4o")

agent = Agent(
    role="Analyst",
    goal="Summarize customer complaints",
    backstory="You analyze support tickets.",
    llm=llm,
)
```

Fixed:

```python
import os
from crewai import Agent
from crewai.llm import LLM

# Make sure this is set in your shell or .env:
#   export OPENAI_API_KEY="sk-..."
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"

llm = LLM(
    model="gpt-4o",
    api_key=os.environ["OPENAI_API_KEY"],
)

agent = Agent(
    role="Analyst",
    goal="Summarize customer complaints",
    backstory="You analyze support tickets.",
    llm=llm,
)
```
If you’re using `.env`, make sure it’s actually loaded before CrewAI initializes the model:
```python
from dotenv import load_dotenv

load_dotenv()  # must run before CrewAI reads the environment

from crewai.llm import LLM

llm = LLM(model="gpt-4o")
```

If you skip `load_dotenv()`, your local shell variables may exist in one terminal but not in the process running Python.
Other Possible Causes
1) Wrong provider for the model name
A common failure is using a model string that belongs to one provider while your credentials belong to another.
```python
from crewai.llm import LLM

# Wrong if you're only configured for OpenAI
llm = LLM(model="claude-3-opus-20240229")
```
Fix it by matching the model to the provider and setting the right key:
```python
import os

from crewai.llm import LLM

llm = LLM(
    model="claude-3-opus-20240229",
    api_key=os.environ["ANTHROPIC_API_KEY"],
)
```
2) Environment variable name mismatch
CrewAI won’t guess that `MY_OPENAI_KEY` means OpenAI auth. If your code expects `OPENAI_API_KEY`, use that exact name.
```bash
# Broken
export MY_OPENAI_KEY="sk-..."

# Fixed
export OPENAI_API_KEY="sk-..."
```
If you’re using Azure OpenAI, don’t confuse standard OpenAI variables with Azure ones:
```bash
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_VERSION="2024-02-15-preview"
```
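CrewAI routes model calls through LiteLLM, where Azure deployments are typically addressed with an `azure/` prefix rather than a bare model name. A minimal sketch of wiring those variables up, with a placeholder deployment name; verify the routing against your CrewAI version's docs:

```python
import os

# "my-gpt4o-deployment" is a placeholder for your Azure deployment name;
# the LiteLLM-style "azure/<deployment>" routing is an assumption here.
azure_llm_kwargs = {
    "model": "azure/my-gpt4o-deployment",
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY", ""),
    "base_url": os.environ.get("AZURE_OPENAI_ENDPOINT", ""),
    "api_version": os.environ.get("AZURE_OPENAI_API_VERSION", ""),
}

# Then construct the model the same way as before:
#   from crewai.llm import LLM
#   llm = LLM(**azure_llm_kwargs)
```

Keeping the Azure settings together in one dict like this also makes it obvious when one of the three variables is empty.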
3) Key is present but invalid or revoked
A key can exist and still fail authentication if it was rotated, revoked, copied incorrectly, or expired.
```python
import os

print(repr(os.getenv("OPENAI_API_KEY")))
```
Look for:
- extra spaces
- newline characters from copy/paste
- truncated values
- old keys from a previous environment
A bad key often produces provider-specific messages like:
- `Incorrect API key provided`
- `You didn't provide an API key`
- `authentication failed: invalid token`
4) Mixing local and container environments
Your laptop may have the right env vars, but Docker, CI/CD, or a notebook kernel may not.
```yaml
# docker-compose.yml
services:
  app:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```
If `${OPENAI_API_KEY}` is empty on the host machine when Compose starts, the container gets nothing. The same issue happens in GitHub Actions and with Kubernetes secrets if they’re not wired up correctly.
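One way to catch this early is a tiny preflight check that runs inside the container or CI job before the app starts. A sketch, with a hypothetical `missing_env` helper:

```python
import os

def missing_env(required):
    """Return the names of required env vars that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Run this at container/CI startup so an unwired secret fails loudly:
#   problems = missing_env(["OPENAI_API_KEY"])
#   if problems:
#       raise SystemExit(f"Missing env vars: {', '.join(problems)}")
print(missing_env(["PATH", "SOME_UNSET_VARIABLE"]))
```

An empty string counts as missing here on purpose, since `OPENAI_API_KEY=""` is exactly what an unwired Compose variable produces.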
How to Debug It
1) Print the resolved environment variables

Check what Python actually sees, and use `repr()` so you catch hidden whitespace.

```python
import os

print("OPENAI_API_KEY =", repr(os.getenv("OPENAI_API_KEY")))
print("ANTHROPIC_API_KEY =", repr(os.getenv("ANTHROPIC_API_KEY")))
```

2) Confirm which provider CrewAI is calling

Inspect your `LLM(model=...)` value and match it to the correct vendor:

- `gpt-*` → OpenAI/Azure OpenAI
- `claude-*` → Anthropic
- other hosted models may require different config

3) Test the provider directly outside CrewAI

If direct SDK auth fails, CrewAI will fail too.

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
print(client.models.list())
```

If this throws an auth error, fix credentials first.

4) Turn on verbose logs

CrewAI sits on top of model calls; you want the underlying exception.

```python
from crewai import Agent

agent = Agent(...)  # run with logging enabled in your app / notebook setup
```

Also check full stack traces for classes like:

- `openai.AuthenticationError`
- `anthropic.AuthenticationError`
- `litellm.AuthenticationError`
Prevention
- Keep provider keys in a real secret manager or `.env`, never hardcoded in source control.
- Add startup checks that fail fast if required env vars are missing.
- Pin and document which provider each agent uses so nobody swaps model names without updating credentials.
If you want fewer surprises, make auth validation part of app startup instead of waiting for the first agent run to explode.
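One way to do that is a small startup validator that maps model-name prefixes to the env var their provider needs. Everything below (the mapping, the function names) is an illustrative sketch, not a CrewAI API:

```python
import os

# Illustrative mapping from model-name prefix to the provider key it needs.
REQUIRED_KEY_BY_PREFIX = {
    "gpt-": "OPENAI_API_KEY",
    "claude-": "ANTHROPIC_API_KEY",
}

def required_key_for(model: str):
    """Return the env var a model's provider needs, or None if unknown."""
    for prefix, env_var in REQUIRED_KEY_BY_PREFIX.items():
        if model.startswith(prefix):
            return env_var
    return None

def validate_model_auth(model: str) -> None:
    """Raise at startup, not mid-run, if the provider key is missing."""
    env_var = required_key_for(model)
    if env_var and not os.environ.get(env_var):
        raise RuntimeError(f"{model!r} needs {env_var}, which is not set")

# Call this for every model your agents use before kicking off a crew:
#   validate_model_auth("gpt-4o")
```

This also guards against the model-swap problem from the prevention list: change an agent to `claude-3-opus-20240229` without adding `ANTHROPIC_API_KEY`, and the app refuses to boot instead of failing on the first task.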
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.