How to Fix 'invalid API key during development' in CrewAI (Python)
When CrewAI throws invalid API key during development, it usually means your app is loading the wrong credential, or no credential at all, before the first LLM call. In practice, this shows up when you run a local script, a notebook, or a FastAPI app and the provider rejects the key before CrewAI can create an agent response.
The error is rarely about CrewAI itself. It’s almost always environment loading, provider mismatch, or a bad .env setup.
The Most Common Cause
The #1 cause is simple: you created the LLM, Agent, or Crew before your environment variables were loaded.
CrewAI reads model credentials from the process environment. If OPENAI_API_KEY, ANTHROPIC_API_KEY, or your provider-specific key is missing at import time or object construction time, you’ll get errors like:
- `AuthenticationError: Invalid API key provided`
- `litellm.AuthenticationError`
- `openai.AuthenticationError`
- `BadRequestError: Error code: 401`
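Before touching CrewAI at all, you can confirm the credential is actually visible to the process with a plain-Python fail-fast check. This is a minimal sketch; `missing_env` is a hypothetical helper name, not a CrewAI API:

```python
import os

def missing_env(names):
    """Return the names that are unset or whitespace-only in os.environ."""
    return [n for n in names if not os.environ.get(n, "").strip()]

# Run this before importing crewai or constructing any LLM/Agent objects,
# e.g. raise SystemExit if missing_env(["OPENAI_API_KEY"]) is non-empty.
```

If this returns a non-empty list, no amount of agent configuration will help until the environment is fixed.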
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Load .env after creating agents | Load .env before importing or instantiating CrewAI objects |
| Assume env vars exist in the shell | Explicitly verify them in Python |
| Build agents at module import time | Build them after config is loaded |
```python
# broken.py
from crewai import Agent, Task, Crew
from crewai.llm import LLM
from dotenv import load_dotenv

llm = LLM(model="gpt-4o")  # API key not loaded yet

agent = Agent(
    role="Researcher",
    goal="Find facts",
    backstory="You research data.",
    llm=llm,
)

load_dotenv()  # too late

crew = Crew(agents=[agent], tasks=[])
```
```python
# fixed.py
from dotenv import load_dotenv
load_dotenv()

import os
from crewai import Agent, Task, Crew
from crewai.llm import LLM

assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing"

llm = LLM(model="gpt-4o")

agent = Agent(
    role="Researcher",
    goal="Find facts",
    backstory="You research data.",
    llm=llm,
)

crew = Crew(agents=[agent], tasks=[])
```
If you’re using .env, make sure it contains the exact variable name your provider expects:
```
OPENAI_API_KEY=sk-...
```
For Anthropic:
```
ANTHROPIC_API_KEY=sk-ant-...
```
For Azure OpenAI or other providers via LiteLLM/CrewAI routing, the naming can differ depending on your config.
Other Possible Causes
1. Wrong model provider for the key you set
You can’t pass an OpenAI key and point the model at Anthropic, or vice versa.
```python
# broken
llm = LLM(model="claude-3-opus-20240229")  # but only OPENAI_API_KEY is set

# fixed
llm = LLM(model="gpt-4o")  # with OPENAI_API_KEY set
# or:
# llm = LLM(model="claude-3-opus-20240229")  # with ANTHROPIC_API_KEY set
```
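One way to catch this mismatch early is a small lookup from the model-name prefix to the credential its provider typically expects. The prefix table and `expected_key` helper below are illustrative assumptions, not a CrewAI or LiteLLM API:

```python
# Illustrative prefix -> env var mapping; extend it for your providers.
PROVIDER_KEYS = {
    "gpt-": "OPENAI_API_KEY",
    "claude-": "ANTHROPIC_API_KEY",
}

def expected_key(model: str):
    """Guess which credential a model name needs, or None if unknown."""
    for prefix, var in PROVIDER_KEYS.items():
        if model.startswith(prefix):
            return var
    return None
```

Checking `expected_key(model)` against the env vars you actually set turns a confusing 401 into an immediate, readable error.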
2. Key exists in .env but never gets loaded in your runtime
This happens a lot in notebooks, Docker containers, and FastAPI/Uvicorn apps.
```python
# broken
from crewai import Agent
# no load_dotenv(), no exported env vars in shell

# fixed
from dotenv import load_dotenv
load_dotenv()
```
In a Dockerfile, `${OPENAI_API_KEY}` only resolves if you declare a matching build argument first, and baking secrets into an image is generally discouraged:

```dockerfile
ARG OPENAI_API_KEY
ENV OPENAI_API_KEY=${OPENAI_API_KEY}
```

Or better, pass it at runtime:

```bash
docker run -e OPENAI_API_KEY="$OPENAI_API_KEY" my-app
```
3. You set the wrong variable name
A typo like OPEN_AI_KEY won’t be picked up by CrewAI or LiteLLM.
```
# broken .env
OPEN_AI_KEY=sk-...

# fixed .env
OPENAI_API_KEY=sk-...
```
Also check for invisible issues:
- trailing spaces after the value
- quotes copied from docs incorrectly
- commented-out lines in `.env`
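These copy-paste problems are easy to detect programmatically once the variable is loaded. This is a sketch using only the standard library; `audit_env_value` is a hypothetical helper, not part of CrewAI:

```python
import os

def audit_env_value(name):
    """Flag common copy-paste problems in an environment variable's value."""
    value = os.environ.get(name)
    if value is None:
        return [f"{name} is not set"]
    issues = []
    if value != value.strip():
        issues.append("leading/trailing whitespace")
    if value.strip().startswith(("'", '"')) or value.strip().endswith(("'", '"')):
        issues.append("literal quote characters in the value")
    return issues
```

Printing `audit_env_value("OPENAI_API_KEY")` at startup surfaces an invisible trailing space long before the provider rejects the key.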
4. Your app overwrote the key at runtime
Some config code sets an empty string later and silently replaces the real value.
```python
import os

os.environ["OPENAI_API_KEY"] = ""  # wipes out the valid key from the shell

from crewai.llm import LLM
llm = LLM(model="gpt-4o")
```
Fix it by only reading config once and never mutating secrets casually:
```python
import os

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("OPENAI_API_KEY missing")

from crewai.llm import LLM
llm = LLM(model="gpt-4o")
```
How to Debug It
- Print what Python actually sees:

```python
import os
print("OPENAI_API_KEY:", repr(os.getenv("OPENAI_API_KEY")))
print("ANTHROPIC_API_KEY:", repr(os.getenv("ANTHROPIC_API_KEY")))
```

If you see `None` or `''`, the issue is environment loading.
- Confirm which model/provider CrewAI is calling. Check your `LLM(model=...)` value and make sure it matches the key you set:
  - `gpt-*` → usually an OpenAI key
  - `claude-*` → usually an Anthropic key
- Run a minimal repro outside your app. Strip everything down to one file:

```python
from dotenv import load_dotenv
load_dotenv()

from crewai.llm import LLM

llm = LLM(model="gpt-4o")
print(llm)
```

If this fails too, your problem is config, not agent logic.
- Check for startup order issues. If you create agents at module scope and then load config later, move all CrewAI initialization into a function:

```python
def build_crew():
    from dotenv import load_dotenv
    load_dotenv()
    ...
    return crew
```
Prevention
- Load `.env` before creating any `Agent`, `Task`, `Crew`, or `LLM` objects.
- Validate required secrets at startup with explicit checks like `assert os.getenv("OPENAI_API_KEY")`.
- Keep provider choice and credential source aligned: OpenAI model names with OpenAI keys, Anthropic models with Anthropic keys.
- Avoid module-level side effects in production code; build crews inside functions or app startup hooks.
If you’re still seeing invalid API key during development, look at the exact stack trace line where CrewAI fails. In most cases it points straight to either missing env vars, wrong provider selection, or initialization order.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit