# How to Fix "invalid API key" in CrewAI (Python)
When CrewAI raises an `invalid API key` error, it usually means one of the underlying LLM providers rejected the credentials passed from your Python process. In practice, this shows up when OpenAI, Anthropic, or another provider-backed model is initialized with a missing, malformed, or mismatched key.

The error often appears during agent execution, not at import time. That makes it annoying to debug, because your Crew looks fine until the first LLM call fails with something like `AuthenticationError: Incorrect API key provided` (raised as `openai.AuthenticationError` in the current SDK, or `openai.error.AuthenticationError` in pre-1.0 versions).
## The Most Common Cause
The #1 cause is simple: the API key is not actually loaded into the environment that CrewAI runs in.
A lot of people set OPENAI_API_KEY in one terminal, then run Python from another shell, IDE, Docker container, or notebook where that variable does not exist. CrewAI does not magically discover keys outside the current process environment.
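To see why, remember that a process only inherits whatever its parent's environment contained at launch. A minimal stdlib sketch (no CrewAI required) demonstrates the difference between a child process started with the key and one started from a clean environment, like a fresh shell or IDE:

```python
import os
import subprocess
import sys

# One-liner the child runs to report what it sees.
check = 'import os; print(os.getenv("OPENAI_API_KEY"))'

# Child launched WITH the variable in its environment:
env_with_key = {**os.environ, "OPENAI_API_KEY": "sk-proj-demo"}
with_key = subprocess.run(
    [sys.executable, "-c", check], env=env_with_key, capture_output=True, text=True
)
print(with_key.stdout.strip())  # the key is visible

# Child launched WITHOUT it (simulates a different shell/IDE/container):
env_without = {k: v for k, v in os.environ.items() if k != "OPENAI_API_KEY"}
without_key = subprocess.run(
    [sys.executable, "-c", check], env=env_without, capture_output=True, text=True
)
print(without_key.stdout.strip())  # prints None
```

The second child is exactly what happens when your IDE, notebook, or container never received the export you typed in another terminal.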
### Wrong vs right

| Broken pattern | Fixed pattern |
|---|---|
| Hardcoding a placeholder key or forgetting to load `.env` | Load env vars before creating agents and verify them |
| Setting env vars in the wrong shell/session | Set them in the same runtime process |
```python
# broken.py
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")  # expects OPENAI_API_KEY in the environment

agent = Agent(
    role="Researcher",
    goal="Research market data",
    backstory="You are precise.",
    llm=llm,
)

task = Task(
    description="Summarize the market.",
    expected_output="A short market summary.",  # required in recent CrewAI versions
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
print(crew.kickoff())
```
```python
# fixed.py
import os

from dotenv import load_dotenv
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

load_dotenv()  # load .env before anything reads the environment

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is missing")

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=api_key,  # pass the key explicitly instead of relying on ambient env
)

agent = Agent(
    role="Researcher",
    goal="Research market data",
    backstory="You are precise.",
    llm=llm,
)

task = Task(
    description="Summarize the market.",
    expected_output="A short market summary.",  # required in recent CrewAI versions
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
print(crew.kickoff())
```
If you are using `.env`, make sure it contains a real key:

```
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxx
```
## Other Possible Causes
### 1. Wrong provider key for the model you selected
A very common mistake is passing an Anthropic key to an OpenAI model wrapper, or vice versa.
```python
# wrong
import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ANTHROPIC_API_KEY"),  # wrong provider key
)
```

```python
# right
import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("OPENAI_API_KEY"),
)
```
If you are using Claude models, use the Anthropic wrapper (for example `ChatAnthropic` from `langchain-anthropic`) together with `ANTHROPIC_API_KEY`.
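One cheap guardrail is checking that the model family and the env var you read actually match before you construct anything. `expected_env_var` and `key_for_model` below are hypothetical helpers (not part of CrewAI or LangChain), and the prefix table only covers the providers mentioned in this article:

```python
import os

# Hypothetical mapping from model-name prefix to the env var its provider expects.
PROVIDER_ENV_VARS = {
    "gpt-": "OPENAI_API_KEY",        # OpenAI chat models
    "claude-": "ANTHROPIC_API_KEY",  # Anthropic models
}

def expected_env_var(model: str) -> str:
    """Return the env var the given model's provider reads its key from."""
    for prefix, var in PROVIDER_ENV_VARS.items():
        if model.startswith(prefix):
            return var
    raise ValueError(f"Unknown provider for model: {model}")

def key_for_model(model: str) -> str:
    """Fetch the right key for a model, failing loudly if it is unset."""
    var = expected_env_var(model)
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{model} needs {var}, but it is not set")
    return key
```

With this in place, passing a `claude-` model while only `OPENAI_API_KEY` is set fails immediately with a message naming the variable, instead of a 401 mid-run.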
### 2. The key has extra whitespace or quotes
This happens a lot when copying from .env, CI secrets, or shell exports.
```
# bad
OPENAI_API_KEY=" sk-proj-abc123 "
```

Use this instead:

```
# good
OPENAI_API_KEY=sk-proj-abc123
```
If you suspect whitespace, strip it in code:

```python
api_key = os.getenv("OPENAI_API_KEY", "").strip()
```
### 3. Your IDE or notebook is running a different environment
VS Code, PyCharm, Jupyter, and Docker often use a different interpreter than your terminal. The code looks correct but the process never sees the secret.
```python
import os
print(os.getenv("OPENAI_API_KEY"))
```
If that prints None inside your notebook but works in your shell, you found the problem.
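A related check is printing which interpreter is actually running. If your notebook shows a different path than `which python` in your terminal, they are separate environments with separate variables:

```python
import os
import sys

# The interpreter path reveals which environment (venv, conda, system) is running.
print("interpreter:", sys.executable)
print("key visible:", os.getenv("OPENAI_API_KEY") is not None)
```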
For Docker, note that `ENV OPENAI_API_KEY=${OPENAI_API_KEY}` only works if the variable is declared as a build arg first, and baking the key in at build time leaves it in the image layers:

```dockerfile
ARG OPENAI_API_KEY
ENV OPENAI_API_KEY=${OPENAI_API_KEY}
```

Prefer passing it at runtime instead:

```shell
docker run -e OPENAI_API_KEY="$OPENAI_API_KEY" my-image
```
### 4. You are using an outdated package combination
CrewAI sits on top of provider SDKs and LangChain integrations. A version mismatch can surface as auth failures even when the key is valid.
Check versions:

```shell
pip show crewai langchain-openai openai anthropic python-dotenv
```

Update together:

```shell
pip install -U crewai langchain-openai openai anthropic python-dotenv
```
If you pinned old versions months ago, re-check the provider wrapper syntax too. Some constructors changed over time.
## How to Debug It
- **Print what your process actually sees.**

  ```python
  import os
  print(repr(os.getenv("OPENAI_API_KEY")))
  ```

  If it prints `None`, an empty string, or a value with spaces or newlines, fix env loading first.

- **Call the provider directly before CrewAI.**

  ```python
  import os
  from openai import OpenAI

  client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
  print(client.models.list())
  ```

  If this fails with `AuthenticationError: Incorrect API key provided`, CrewAI is not the issue.

- **Check which model wrapper you are using.**
  - `ChatOpenAI` expects `OPENAI_API_KEY`
  - Anthropic wrappers expect `ANTHROPIC_API_KEY`
  - Azure OpenAI needs Azure-specific config, not just a raw OpenAI key

- **Turn on verbose logging.** In many setups this helps expose where auth breaks:

  ```python
  crew = Crew(agents=[agent], tasks=[task], verbose=True)
  ```

  Look for messages around provider initialization and the exact exception type:
  - `openai.AuthenticationError`
  - `anthropic.AuthenticationError`
  - HTTP 401 responses from upstream APIs
## Prevention
- Load secrets explicitly at startup with `python-dotenv` or your deployment platform's secret manager.
- Validate required env vars before creating any `Agent`, `Task`, or LLM client.
- Keep provider SDK versions aligned with CrewAI and pin them in `requirements.txt`.
A good pattern is to fail fast:

```python
import os

required_vars = ["OPENAI_API_KEY"]
missing = [var for var in required_vars if not os.getenv(var)]
if missing:
    raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
```
That saves you from discovering auth issues only after a long agent run.
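The same fail-fast check can be wrapped into a small reusable function you call once at startup. `require_env` is a hypothetical name, not part of CrewAI:

```python
import os

def require_env(*names: str) -> dict[str, str]:
    """Return the requested env vars, raising one clear error if any are missing or blank."""
    values = {name: os.getenv(name, "").strip() for name in names}
    missing = [name for name, value in values.items() if not value]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return values

# Call once, before building any Agent, Task, or LLM client:
# keys = require_env("OPENAI_API_KEY")
```

Returning the values (rather than just checking them) lets you pass keys explicitly into the wrappers, which is the same pattern `fixed.py` uses above.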
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.