How to Fix 'authentication failed' in AutoGen (Python)
When AutoGen throws an "authentication failed" error, it usually means the LLM client never received a valid API credential, or received one that the provider rejected. You'll see this when creating an AssistantAgent or UserProxyAgent, or when the first model call happens and the OpenAI-compatible client tries to authenticate.
In practice, this shows up as a startup failure, a 401 from the provider, or a stack trace ending in something like:

```
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided'}}
```
The Most Common Cause
The #1 cause is simple: the API key is missing, misnamed, or loaded too late.
With AutoGen, people often set OPENAI_API_KEY after importing or instantiating agents, or they pass the wrong config key into llm_config. AutoGen then builds the underlying OpenAI client without valid credentials.
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Key set after agent creation | Key set before client/agent creation |
| Wrong config field name | Correct environment variable or config dict |
| Empty config_list entry | Valid model + api_key pair |
```python
# BROKEN
from autogen import AssistantAgent

assistant = AssistantAgent(
    name="assistant",
    llm_config={
        "config_list": [
            {
                "model": "gpt-4o-mini",
                "api_key": "",  # empty key -> authentication failed
            }
        ]
    },
)

# Also broken: setting the key after the agent was already created
import os
os.environ["OPENAI_API_KEY"] = "sk-..."
```
```python
# FIXED
import os

from autogen import AssistantAgent

os.environ["OPENAI_API_KEY"] = "sk-..."  # set before agent/client creation

assistant = AssistantAgent(
    name="assistant",
    llm_config={
        "config_list": [
            {
                "model": "gpt-4o-mini",
                # let AutoGen/OpenAI client read OPENAI_API_KEY from the env
            }
        ]
    },
)
```
If you’re using explicit credentials, make sure the key is actually present in the config object:
```python
# FIXED WITH EXPLICIT KEY
from autogen import AssistantAgent

assistant = AssistantAgent(
    name="assistant",
    llm_config={
        "config_list": [
            {
                "model": "gpt-4o-mini",
                "api_key": "sk-your-real-key",
            }
        ]
    },
)
```
Other Possible Causes
1) Wrong environment variable name
AutoGen’s OpenAI-compatible path expects OPENAI_API_KEY. If you set API_KEY, OPENAI_TOKEN, or some app-specific secret name, the client won’t pick it up.
```shell
# wrong
export API_KEY=sk-...

# right
export OPENAI_API_KEY=sk-...
```
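If your deployment platform injects the secret under its own name, you can bridge it to the expected variable before any client is created. A minimal sketch, where MY_APP_OPENAI_KEY is a hypothetical app-specific secret name, not anything AutoGen defines:

```python
import os

# Hypothetical scenario: the platform injects the secret under an
# app-specific name instead of OPENAI_API_KEY. (Demo value only.)
os.environ.setdefault("MY_APP_OPENAI_KEY", "sk-demo-not-a-real-key")

# Bridge it to the name the OpenAI-compatible client actually reads,
# before any agent or client is created.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = os.environ["MY_APP_OPENAI_KEY"]
```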
2) Invalid base URL for Azure OpenAI or a proxy
If you’re using Azure OpenAI, LiteLLM, or an internal gateway, the endpoint and auth scheme must match the provider. A bad base_url can produce auth failures even when the key is valid.
```python
import os

# Example of a bad gateway config: valid key, wrong endpoint
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
            "base_url": "https://wrong-host.example.com/v1",
        }
    ]
}
```
For Azure-style setups, use the provider’s expected fields:
```python
import os

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_type": "azure",
            "api_key": os.environ["AZURE_OPENAI_API_KEY"],
            "base_url": os.environ["AZURE_OPENAI_ENDPOINT"],
            "api_version": "2024-02-15-preview",
        }
    ]
}
```
3) Expired, revoked, or rotated key
A valid-looking key can still fail if it was rotated in your secret store or revoked in the provider dashboard. This usually produces a clean 401 from OpenAI-compatible APIs.
```
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided'}}
```
If your app runs in CI/CD or containers, check whether it’s pulling an old secret version.
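One way to catch a stale secret without ever logging the secret itself is to print a short fingerprint of the key at startup and compare it across deploys. A sketch; key_fingerprint is a helper name invented here, not part of any SDK:

```python
import hashlib
import os

def key_fingerprint(value: str) -> str:
    # Short, non-reversible digest of a secret, safe to put in logs.
    return hashlib.sha256(value.encode()).hexdigest()[:8]

key = os.getenv("OPENAI_API_KEY", "")
# Compare this fingerprint across deploys/containers to spot an old
# secret version without printing the key itself.
print(f"OPENAI_API_KEY fingerprint: {key_fingerprint(key)}")
```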
4) Mixing providers with one config list
AutoGen lets you define multiple configs in config_list. If one entry points to a different provider with incompatible auth fields, AutoGen may try that entry first and fail before falling back.
```python
import os

llm_config = {
    "config_list": [
        {"model": "gpt-4o-mini", "api_key": None},
        {"model": "claude-3-opus", "api_key": os.environ["OPENAI_API_KEY"]},  # wrong provider/key pairing
    ]
}
```
Keep each provider isolated and explicit.
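One way to keep providers isolated, sketched below, is one config template per provider, selected by a single explicit setting. PROVIDER here is a hypothetical app-level environment variable, not an AutoGen field; the config fields follow the OpenAI and Azure examples above:

```python
import os

# One template per provider; never mix fields between them.
OPENAI_CONFIG = [
    {"model": "gpt-4o-mini", "api_key": os.getenv("OPENAI_API_KEY")},
]
AZURE_CONFIG = [
    {
        "model": "gpt-4o-mini",
        "api_type": "azure",
        "api_key": os.getenv("AZURE_OPENAI_API_KEY"),
        "base_url": os.getenv("AZURE_OPENAI_ENDPOINT"),
        "api_version": "2024-02-15-preview",
    },
]

provider = os.getenv("PROVIDER", "openai")  # hypothetical app setting
llm_config = {
    "config_list": AZURE_CONFIG if provider == "azure" else OPENAI_CONFIG
}
```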
How to Debug It
- Print the resolved credentials before creating agents
  - Verify the environment variable exists and is non-empty.
  - Check for whitespace or newline issues from secret managers.

```python
import os
print(repr(os.getenv("OPENAI_API_KEY")))
```
- Reduce to one agent and one model
  - Remove extra agents, tools, and multi-config fallbacks.
  - Test with a single AssistantAgent and one known-good model.
- Inspect the exact exception
  - If you see openai.AuthenticationError, it's almost always credential-related.
  - If you see connection errors plus auth failures, suspect a bad proxy or base URL.
- Call the provider directly outside AutoGen
  - Use a minimal OpenAI SDK request with the same env vars.
  - If that fails too, AutoGen is not the problem.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
print(client.models.list())
```
If this fails with 401, fix your credentials first. If it works but AutoGen fails, inspect your llm_config shape and provider-specific fields.
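A small shape check, run before any agent is built, makes the most common llm_config mistakes obvious. check_llm_config is a hypothetical helper written for illustration, not an AutoGen API:

```python
import os

def check_llm_config(llm_config: dict) -> list[str]:
    # Hypothetical helper: flag common llm_config shape problems
    # before any agent is created.
    problems = []
    entries = llm_config.get("config_list") or []
    if not entries:
        problems.append("config_list is missing or empty")
    for i, entry in enumerate(entries):
        if not entry.get("model"):
            problems.append(f"entry {i}: no model name")
        key = entry.get("api_key")
        if key is None and not os.getenv("OPENAI_API_KEY"):
            problems.append(f"entry {i}: no api_key and OPENAI_API_KEY unset")
        elif key is not None and not str(key).strip():
            problems.append(f"entry {i}: api_key is empty")
        if entry.get("api_type") == "azure" and not entry.get("base_url"):
            problems.append(f"entry {i}: azure config without base_url")
    return problems

print(check_llm_config({"config_list": [{"model": "gpt-4o-mini", "api_key": ""}]}))
```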
Prevention
- Load secrets once at process startup and fail fast if they're missing.
- Keep one config template per provider; don't reuse OpenAI fields for Azure or proxies.
- Add a startup check that validates:
  - the env var exists,
  - the model name is correct,
  - the base URL matches the provider,
  - the API key length looks sane.
A small guard saves time:
```python
import os

key = os.getenv("OPENAI_API_KEY")
if not key:
    raise RuntimeError("OPENAI_API_KEY is not set")
if not key.startswith("sk-"):
    raise RuntimeError("OPENAI_API_KEY looks invalid")
```
If you treat authentication as configuration validation instead of runtime debugging, this error stops being mysterious very quickly.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.