# How to Fix 'invalid API key' in AutoGen (Python)
## What the error means
An `invalid API key` error in AutoGen usually means the underlying model client rejected the credential before any agent logic ran. You’ll typically see it when creating a `ConversableAgent` or `AssistantAgent`, or when the first LLM call goes through an OpenAI-compatible config.

The key point: this is almost never an AutoGen “agent” bug. It’s usually a bad environment variable, a wrong config field, or a mismatch between the model provider and the client settings.
## The Most Common Cause
The #1 cause is passing the wrong key name or not wiring the key into `llm_config` correctly. With AutoGen, people often set `api_key` in the wrong place, or they rely on `OPENAI_API_KEY` without realizing their process never loaded it.
### Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| `config_list` has no valid key | `config_list` includes a valid `api_key` or reads it from an env var |
| Key is stored under the wrong env var | Key is read from `os.environ["OPENAI_API_KEY"]` |
| Model config points to provider but not credentials | Provider and credentials match |
```python
# BROKEN
from autogen import AssistantAgent

assistant = AssistantAgent(
    name="assistant",
    llm_config={
        "config_list": [
            {
                "model": "gpt-4o-mini",
                "base_url": "https://api.openai.com/v1",
                # missing api_key
            }
        ]
    },
)

# This often ends with:
# openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: ...'}}
```
```python
# FIXED
import os

from autogen import AssistantAgent

assistant = AssistantAgent(
    name="assistant",
    llm_config={
        "config_list": [
            {
                "model": "gpt-4o-mini",
                "api_key": os.environ["OPENAI_API_KEY"],
                "base_url": "https://api.openai.com/v1",
            }
        ]
    },
)
```
If you’re using `.env`, make sure it’s actually loaded before AutoGen reads it:

```python
from dotenv import load_dotenv

load_dotenv()
```
A lot of “invalid API key” reports are just “the variable was empty.”
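A cheap way to catch the empty-variable case early is a fail-fast guard that runs before any agent is constructed. A minimal sketch (the helper name `require_env` is mine, not an AutoGen API):

```python
import os


def require_env(name):
    """Fail fast with a clear message when a required variable is unset or blank."""
    value = os.getenv(name, "").strip()
    if not value:
        raise RuntimeError(f"{name} is empty or unset - check your .env loading")
    return value
```

Call `require_env("OPENAI_API_KEY")` once at startup so a missing key fails with a readable message instead of a 401 deep inside an agent run.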
## Other Possible Causes
### 1) You’re using the wrong provider key for the model endpoint
An OpenAI key will not work against Azure OpenAI unless you configure Azure-specific fields. Same story for Anthropic, Groq, Ollama, and other providers behind an OpenAI-compatible wrapper.
```python
# WRONG: OpenAI-style config pointed at an Azure endpoint
{
    "model": "gpt-4o-mini",
    "api_key": os.environ["OPENAI_API_KEY"],
    "base_url": "https://my-azure-resource.openai.azure.com/",
}
```
For Azure OpenAI, use Azure settings expected by your AutoGen/OpenAI client version:
```python
# RIGHT: Azure-specific setup
{
    "model": "gpt-4o-mini",
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "base_url": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_type": "azure",
}
```
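The same rule applies to other OpenAI-compatible backends: each provider needs its own key and endpoint, paired in the same `config_list` entry. A hedged sketch (the model names and URLs below are illustrative; check your provider’s docs for current values):

```python
import os

# Each entry pairs a credential with the endpoint it belongs to.
# Model names and base URLs here are illustrative examples.
groq_entry = {
    "model": "llama-3.1-8b-instant",
    "api_key": os.getenv("GROQ_API_KEY", ""),
    "base_url": "https://api.groq.com/openai/v1",
}

ollama_entry = {
    "model": "llama3",
    # Local Ollama typically ignores the key, but the field should be non-empty.
    "api_key": "ollama",
    "base_url": "http://localhost:11434/v1",
}
```

Mixing an `OPENAI_API_KEY` into either of these entries produces the same 401 you started with.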
### 2) The key has whitespace or quotes in your environment file
This happens when copying keys from a portal into .env with extra spaces or wrapping quotes that get included literally.
```
# BAD
OPENAI_API_KEY=" sk-proj-abc123 "

# GOOD
OPENAI_API_KEY=sk-proj-abc123
```
If you want to be defensive:

```python
api_key = os.getenv("OPENAI_API_KEY", "").strip()
```
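To also handle quotes that were copied into the file literally, a small helper (the name `clean_api_key` is mine) can strip both whitespace and wrapping quote characters:

```python
def clean_api_key(raw):
    """Strip whitespace and accidental wrapping quotes copied into .env files."""
    if not raw:
        return ""
    return raw.strip().strip("'\"").strip()


print(clean_api_key('  "sk-proj-abc123" '))  # -> sk-proj-abc123
```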
### 3) Your installed package versions are mismatched
AutoGen has gone through API changes, and older examples online may not match your installed version. A stale `openai` package can also break authentication behavior.
Check versions:

```bash
pip show pyautogen openai autogen-agentchat
```
Typical fix:

```bash
pip install -U pyautogen openai python-dotenv
```
If you see errors like:

- `openai.AuthenticationError: Incorrect API key provided`
- `TypeError: Client.__init__() got an unexpected keyword argument 'api_key'`
- `ValueError: The api_key client option must be set either by passing api_key...`

you probably have a version mismatch or an incorrect config shape.
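To check versions from inside the same interpreter that runs AutoGen (so you’re not accidentally inspecting a different virtualenv than `pip show` does), a small sketch using only the standard library:

```python
from importlib.metadata import PackageNotFoundError, version


def installed_versions(packages):
    """Return {package: version string or None} without raising for missing packages."""
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None
    return found


print(installed_versions(["pyautogen", "autogen-agentchat", "openai"]))
```

A `None` next to the package you thought you were using is itself a strong clue that your code runs in a different environment.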
### 4) You set the key in one process, but AutoGen runs in another
This shows up in notebooks, Docker containers, CI jobs, and subprocess-based workers. Your shell has the variable, but the Python runtime does not.
Check inside Python:

```python
import os

print(os.getenv("OPENAI_API_KEY"))
```
If that prints None, your app never received the secret.
For Docker Compose:

```yaml
environment:
  OPENAI_API_KEY: ${OPENAI_API_KEY}
```
For GitHub Actions:

```yaml
env:
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```
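To see the process boundary concretely, this sketch launches a child Python process without the variable; the child prints `None` even if your shell has the key exported:

```python
import os
import subprocess
import sys

# Simulate a child process that never received the secret:
# copy the current environment but drop OPENAI_API_KEY.
child_env = {k: v for k, v in os.environ.items() if k != "OPENAI_API_KEY"}

result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getenv('OPENAI_API_KEY'))"],
    env=child_env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # the child sees nothing, so this prints: None
```

This is exactly what happens when a notebook kernel, container, or worker process starts without inheriting your shell’s environment.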
## How to Debug It
1. **Print the resolved key source.** Confirm whether AutoGen is reading from env vars or a hardcoded config. Check length only; do not log full secrets.

   ```python
   import os

   key = os.getenv("OPENAI_API_KEY")
   print("key present:", bool(key), "length:", len(key) if key else 0)
   ```

2. **Inspect the exact model config passed to AutoGen.** Verify `model`, `base_url`, and `api_key`. Make sure there isn’t a typo like `apikey`, `apiKey`, or `openai_api_key`.

3. **Call the provider directly outside AutoGen.** If raw OpenAI client auth fails, AutoGen is not the problem.

   ```python
   import os

   from openai import OpenAI

   client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
   print(client.models.list())
   ```

4. **Check whether you’re hitting a provider mismatch.**
   - OpenAI key + Azure endpoint = failure.
   - Azure key + OpenAI endpoint = failure.
   - Anthropic/Groq/local endpoints need their own compatible configuration.
## Prevention

- Keep secrets in environment variables and load them explicitly with `python-dotenv` in local dev.
- Centralize model configuration in one function so every agent uses the same validated settings.
- Add a startup check that fails fast if the API key is missing or empty.
```python
import os


def build_llm_config():
    api_key = os.getenv("OPENAI_API_KEY", "").strip()
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is missing")
    return {
        "config_list": [
            {
                "model": "gpt-4o-mini",
                "api_key": api_key,
            }
        ]
    }
```
If you’re seeing `openai.AuthenticationError` inside AutoGen, treat it as a credentials wiring issue first. In practice, that fixes most cases before you ever need to touch agent logic.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.