How to Fix 'invalid API key in production' in AutoGen (Python)
What this error means
If you see invalid API key in production while using AutoGen in Python, your agent is reaching the model provider with a key that the server rejects. In practice, this usually means the wrong environment variable is loaded, the key is malformed, or your deployment is picking up a stale local config instead of the production secret.
This often shows up when AssistantAgent, UserProxyAgent, or OpenAIWrapper starts making requests and the underlying OpenAI-compatible client returns a 401-style auth failure.
The Most Common Cause
The #1 cause is simple: your code works locally because .env is loaded, but production never gets that variable, or it gets the wrong one.
With AutoGen, this usually happens when you rely on implicit environment loading instead of explicitly wiring the key into your runtime.
Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Reads from .env locally only | Loads production secret explicitly |
| Assumes OPENAI_API_KEY exists everywhere | Validates the key before creating agents |
| Fails late inside AutoGen request flow | Fails early at startup |
```python
# broken.py
import os

from autogen import AssistantAgent

# This works locally if .env is loaded by your shell,
# but fails in production if OPENAI_API_KEY is missing.
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ]
}

assistant = AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)
```
```python
# fixed.py
import os

from autogen import AssistantAgent

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is missing in production")

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": api_key,
        }
    ]
}

assistant = AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)
```
If you’re using OpenAIWrapper, the same rule applies: don’t assume the runtime has the right secret unless you have checked it.
```python
import os

from autogen import OpenAIWrapper  # exported from the top-level autogen package

wrapper = OpenAIWrapper(
    config_list=[{
        "model": "gpt-4o-mini",
        "api_key": os.getenv("OPENAI_API_KEY"),
    }]
)
```
Other Possible Causes
1) You set the wrong variable name
A common mistake is setting API_KEY or OPENAI_KEY instead of OPENAI_API_KEY.
```shell
# broken
export OPENAI_KEY="sk-..."

# fixed
export OPENAI_API_KEY="sk-..."
```
AutoGen won’t magically map custom names unless your code does it.
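If your platform forces a custom variable name on you, map it to the expected name explicitly before any agent is built. A minimal sketch; SECRETS_OPENAI and resolve_api_key are hypothetical names, substitute whatever your platform actually injects:

```python
import os


def resolve_api_key(custom_name: str = "SECRETS_OPENAI") -> str:
    """Return the OpenAI key, falling back to a custom variable name.

    SECRETS_OPENAI is a hypothetical platform-injected name; adjust it
    to match your deployment.
    """
    key = os.getenv("OPENAI_API_KEY") or os.getenv(custom_name, "")
    if not key:
        raise RuntimeError("no OpenAI API key found in the environment")
    # Normalize so all downstream AutoGen config can rely on one name.
    os.environ["OPENAI_API_KEY"] = key
    return key
```

Call this once at startup, before constructing any agent, so the rest of your code only ever reads OPENAI_API_KEY.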
2) Your production secret contains whitespace or quotes
Copy-pasting secrets from dashboards can add trailing spaces or quote characters. That produces an auth failure even though the value “looks” correct.
```python
# broken
api_key = os.getenv("OPENAI_API_KEY")  # may include whitespace from bad injection

# fixed
api_key = os.getenv("OPENAI_API_KEY", "").strip()
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is empty after stripping")
```
If you inject secrets through CI/CD, verify they are stored as raw values, not JSON strings with extra quotes.
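A small sanitizer can guard against both trailing whitespace and one layer of accidental quoting at once. A sketch; the helper name sanitize_key is my own, not an AutoGen API:

```python
def sanitize_key(raw: str) -> str:
    """Strip whitespace and one layer of surrounding quotes from a secret.

    Guards against CI/CD systems that inject values like '"sk-..."\n'.
    """
    value = raw.strip()
    # Remove matching single or double quotes wrapping the whole value.
    if len(value) >= 2 and value[0] == value[-1] and value[0] in ("'", '"'):
        value = value[1:-1].strip()
    return value
```

Run every injected secret through it before handing the value to config_list.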
3) You’re using a stale local .env file in production
This happens when Docker images or deployment bundles accidentally include an old .env. The app starts, reads a revoked key, and AutoGen fails on first request.
```dockerfile
# broken
COPY . .
ENV PYTHONUNBUFFERED=1
```
That can accidentally package .env into the image.
```dockerfile
# fixed
COPY app/ /app/
# do not bake secrets into images
ENV PYTHONUNBUFFERED=1
```
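Even with a narrower COPY, a .dockerignore entry is a cheap safeguard so a later accidental COPY . . still can’t ship secrets. A minimal sketch:

```
# .dockerignore
.env
.env.*
```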
Use runtime secrets instead:
```yaml
# example Kubernetes env injection
env:
  - name: OPENAI_API_KEY
    valueFrom:
      secretKeyRef:
        name: openai-secrets
        key: api_key
```
4) Your config points to a different provider than you think
In AutoGen, config_list can route to OpenAI-compatible endpoints. If you point at Azure OpenAI, a proxy, or another gateway, an OpenAI key may be invalid for that endpoint.
```python
# broken: key belongs to OpenAI, endpoint expects Azure credentials/configuration
config_list = [{
    "model": "gpt-4o-mini",
    "base_url": "https://my-azure-endpoint.openai.azure.com/",
    "api_key": os.getenv("OPENAI_API_KEY"),
}]
```
For Azure-style setups, use the provider’s expected fields and auth model:
```python
config_list = [{
    "model": "gpt-4o-mini",  # for Azure, this should be your deployment name
    "base_url": os.getenv("AZURE_OPENAI_ENDPOINT"),
    "api_key": os.getenv("AZURE_OPENAI_API_KEY"),
    "api_type": "azure",
    "api_version": "2024-02-01",  # Azure also requires an API version
}]
```
If you’re behind an internal proxy, confirm whether it forwards Authorization: Bearer ... unchanged.
How to Debug It
1. Print what AutoGen actually sees
   - Log the resolved env var at startup.
   - Don’t print the full secret; print presence, length, and a short prefix only.

```python
api_key = os.getenv("OPENAI_API_KEY", "")
print(f"OPENAI_API_KEY present={bool(api_key)}, len={len(api_key)}, prefix={api_key[:3]!r}")
```

2. Check whether the failure happens before or inside AutoGen
   - If AssistantAgent initializes fine but requests fail later, it’s usually auth against the model endpoint.
   - If your code crashes earlier with missing config, it’s an env loading issue.

3. Test the same key outside AutoGen
   - Use a minimal direct client call with the same environment.
   - If direct calls fail too, the problem is not AutoGen; it’s credential or endpoint setup.

4. Inspect the deployment/runtime source of truth
   - In Docker: check container env vars.
   - In Kubernetes: check Secret mounts and rollout status.
   - In GitHub Actions: verify masked secrets are injected into runtime jobs, not just build steps.
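A minimal direct call can be sketched with the standard library alone, so it runs in exactly the environment your agents use. The helper names here are my own, not an AutoGen API:

```python
import urllib.error
import urllib.request


def auth_headers(api_key: str) -> dict:
    # Exactly the header an OpenAI-compatible endpoint expects.
    return {"Authorization": f"Bearer {api_key}"}


def probe_openai_auth(api_key: str, base_url: str = "https://api.openai.com/v1") -> bool:
    """Return True if the key authenticates against the /models endpoint."""
    req = urllib.request.Request(f"{base_url}/models", headers=auth_headers(api_key))
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as exc:
        if exc.code in (401, 403):
            return False  # the same auth failure AutoGen would surface
        raise
```

If this returns False with the production key, no amount of AutoGen configuration will help; fix the credential first.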
Prevention
- Validate required secrets at process startup. Fail fast with a clear message before any agent runs.
- Keep provider config explicit. Set api_key, base_url, and provider-specific fields in code or runtime config.
- Add a health check for model auth. Run one minimal request during deploy so bad keys fail before traffic hits your agents.
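A startup guard covering the first point might look like this sketch. The required variable names are an assumption; extend the tuple for Azure, proxies, or other providers:

```python
import os
import sys

# Assumption: only OPENAI_API_KEY is required; extend for your providers.
REQUIRED_VARS = ("OPENAI_API_KEY",)


def validate_secrets(required=REQUIRED_VARS) -> None:
    """Fail fast with a clear message, before any agent is constructed."""
    missing = [name for name in required if not os.getenv(name, "").strip()]
    if missing:
        sys.exit(f"startup check failed, missing secrets: {', '.join(missing)}")


# Call validate_secrets() at the very top of your production entrypoint.
```

Exiting at startup turns a confusing mid-request 401 into an obvious deploy-time failure.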
If you’re building with AssistantAgent and seeing invalid API key in production, treat it as a deployment/config problem first. In most cases, AutoGen is just surfacing a misconfiguration that already exists in your runtime.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.