How to Fix 'invalid API key when scaling' in AutoGen (Python)
When AutoGen throws invalid API key when scaling, it usually means the agent process that worked locally is now running in a different execution path, and that path does not have the same OpenAI credentials. This often shows up when you move from a single-agent script to GroupChat, AssistantAgent, or any setup where AutoGen spins up more than one model call.
In practice, this is almost always a configuration propagation problem, not an AutoGen bug. The key exists in your shell or notebook, but the scaled/parallel agent instance cannot see it.
The Most Common Cause
The #1 cause is setting the API key in one place, then creating agents without passing the config into every model-calling component.
With AutoGen, AssistantAgent and related classes do not magically inherit your local Python variable. They use the LLM config you pass through llm_config, config_list, or environment variables.
Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Key only exists in a local variable | Key is passed through config_list or env |
| One agent works, multi-agent scaling fails | Every agent gets the same model config |
| Notebook cell state hides the issue | Explicit config makes it reproducible |
```python
# BROKEN
import os

from autogen import AssistantAgent, UserProxyAgent

api_key = os.getenv("OPENAI_API_KEY")  # local variable only, never passed on

assistant = AssistantAgent(
    name="assistant",
    llm_config={
        "config_list": [
            {
                "model": "gpt-4o-mini",
                # missing "api_key" here
            }
        ]
    },
)

user = UserProxyAgent(name="user", code_execution_config=False)
user.initiate_chat(assistant, message="Write a summary.")
```
```python
# FIXED
import os

from autogen import AssistantAgent, UserProxyAgent

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    ]
}

assistant = AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user = UserProxyAgent(name="user", code_execution_config=False)
user.initiate_chat(assistant, message="Write a summary.")
```
If you are using multiple agents, copy the same llm_config into each one that calls an LLM. Do not assume one agent’s config will be reused by another.
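One way to enforce that is a small factory function that every agent goes through. The sketch below is illustrative: `make_llm_config` is a name introduced here, not an AutoGen API.

```python
import os

# Hypothetical helper (not part of AutoGen): build the shared LLM config
# once, and fail loudly if the key is missing in this process.
def make_llm_config() -> dict:
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set in this process")
    return {
        "config_list": [
            {"model": "gpt-4o-mini", "api_key": api_key},
        ]
    }

# Every model-calling agent then receives the same config, e.g.:
# planner = AssistantAgent(name="planner", llm_config=make_llm_config())
# writer = AssistantAgent(name="writer", llm_config=make_llm_config())
```

Because the key is resolved inside the function, any agent created in a process without the variable fails immediately with a clear message instead of an auth error mid-conversation.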
Other Possible Causes
1. Environment variable is set in your shell but not in the runtime
This happens a lot with Docker, VS Code debug sessions, Celery workers, and Jupyter kernels.
```bash
# Shell has it
export OPENAI_API_KEY=sk-...
python app.py
```
But inside Docker or another worker process, that variable is missing.
```yaml
# docker-compose.yml
services:
  app:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```
If ${OPENAI_API_KEY} is empty on the host, your container gets nothing.
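You can see this failure mode without starting a container: an unset host variable expands to an empty string rather than an error, which is exactly what Compose substitutes into the container.

```shell
# An unset variable expands to empty, not an error; Compose passes that
# empty value straight through to the container.
unset OPENAI_API_KEY
echo "container receives: [${OPENAI_API_KEY:-}]"   # prints: container receives: []
```

Running `docker compose config` on the host prints the fully resolved file, so an empty substitution is visible before the container ever starts.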
2. You are using the wrong client for the model endpoint
AutoGen can talk to OpenAI-compatible endpoints, Azure OpenAI, and local proxies. If you point base_url at a non-OpenAI endpoint but still send an OpenAI-format key, the mismatch can trigger auth failures.
```python
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
            "base_url": "http://localhost:8000/v1",
        }
    ]
}
```
If that endpoint expects its own token or no token at all, fix the provider settings:
```python
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ["LOCAL_LLM_TOKEN"],
            "base_url": "http://localhost:8000/v1",
        }
    ]
}
```
3. Azure OpenAI variables are mixed with OpenAI variables
Azure OpenAI uses different fields. If you pass an OpenAI-style config into an Azure deployment, AutoGen may fail during request construction or auth.
```python
# Azure-style config
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_type": "azure",
            "api_key": os.environ["AZURE_OPENAI_API_KEY"],
            "base_url": os.environ["AZURE_OPENAI_ENDPOINT"],
            "api_version": "2024-02-15-preview",
        }
    ]
}
```
Do not mix this with an OpenAI-only setup unless you know exactly which backend each agent is hitting.
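One way to keep the two apart is a selector that returns exactly one style of config entry. This is a sketch: `backend_entry` is a hypothetical helper, and it assumes the Azure environment variables shown above.

```python
import os

# Hypothetical helper: pick exactly one backend per entry instead of
# mixing OpenAI and Azure fields in the same dict.
def backend_entry() -> dict:
    if os.getenv("AZURE_OPENAI_API_KEY"):
        # Azure path: all four Azure-specific fields, nothing else
        return {
            "model": "gpt-4o-mini",
            "api_type": "azure",
            "api_key": os.environ["AZURE_OPENAI_API_KEY"],
            "base_url": os.environ["AZURE_OPENAI_ENDPOINT"],
            "api_version": "2024-02-15-preview",
        }
    # OpenAI path: plain key, no api_type or api_version
    return {
        "model": "gpt-4o-mini",
        "api_key": os.environ["OPENAI_API_KEY"],
    }
```

Every agent built from `backend_entry()` then hits one known backend, which makes "which provider rejected my key?" a question you never have to ask.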
4. A worker or subprocess loses inherited environment state
If you scale with multiprocessing or task queues, child processes may start without your loaded .env values.
```python
from multiprocessing import Process

def run_agent():
    # child process may not have OPENAI_API_KEY loaded
    ...

p = Process(target=run_agent)
p.start()
```
Load env vars before spawning workers:
```python
from dotenv import load_dotenv

load_dotenv()
# then start workers/processes
```
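To convince yourself the ordering matters, a minimal sketch (using the fork start method, so it assumes a POSIX system) shows that a child started after the variable is set does inherit it:

```python
import multiprocessing as mp
import os

def child_has_key(q):
    # Runs in the worker: it sees only what the parent had at start()
    q.put(os.getenv("OPENAI_API_KEY") is not None)

# Stand-in for load_dotenv(): the key must exist *before* the worker starts
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

ctx = mp.get_context("fork")  # fork copies the parent's environment directly
queue = ctx.Queue()
worker = ctx.Process(target=child_has_key, args=(queue,))
worker.start()
worker.join()
inherited = queue.get()
```

Note that with the spawn start method (the default on Windows and macOS) the child still inherits `os.environ`, but it re-imports your module, so a `load_dotenv()` buried inside `if __name__ == "__main__":` will not run in the worker.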
How to Debug It
- Print the exact config each agent receives
  - Check llm_config, config_list, api_key, and base_url.
  - If one agent differs from another, you found the bug.
- Verify the runtime environment inside the failing process
  - Log os.getenv("OPENAI_API_KEY") from inside the worker.
  - Don’t trust your terminal; trust the process that fails.
- Reduce to one agent and one call
  - Run a minimal AssistantAgent + UserProxyAgent example.
  - If single-agent works and multi-agent fails, your scaling path is dropping config.
- Inspect provider-specific settings
  - For Azure: check api_type, base_url, and api_version.
  - For local/OpenAI-compatible servers: confirm whether they expect an API key at all.
A useful pattern is to dump sanitized config before creating agents:
```python
def mask(key):
    return key[:4] + "..." if key else None

print({"OPENAI_API_KEY": mask(os.getenv("OPENAI_API_KEY"))})

# Mask keys inside the config too, so the dump stays safe to log
sanitized = {
    "config_list": [
        {**entry, "api_key": mask(entry.get("api_key"))}
        for entry in llm_config["config_list"]
    ]
}
print(sanitized)
```
That usually exposes missing keys faster than reading stack traces.
Prevention
- Centralize LLM configuration in one function and reuse it across all AutoGen agents.
- Load .env files at process startup, before creating threads or workers.
- Add a startup check that fails fast if required keys are missing:
```python
required = ["OPENAI_API_KEY"]
missing = [k for k in required if not os.getenv(k)]
if missing:
    raise RuntimeError(f"Missing env vars: {missing}")
```
If you treat API credentials as part of agent wiring instead of global magic, this error stops showing up when you scale.
By Cyprian Aarons, AI Consultant at Topiax.