How to Fix 'deployment crash' in AutoGen (Python)
If you’re seeing a “deployment crash” in AutoGen, it usually means the agent runtime failed before it could complete a model call. In practice, this shows up when the LLM config is wrong, the provider name doesn’t match the installed package, or the runtime can’t reach the endpoint.
The failure often happens right after AssistantAgent initialization or on the first .run() / .generate_reply() call. The stack trace is usually noisy, but the root cause is almost always in your model client configuration.
The Most Common Cause
The #1 cause is a bad model configuration: wrong model, wrong api_type, missing API key, or using an Azure deployment name as if it were an OpenAI model name.
Here’s the broken pattern I see most often:
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Broken: an Azure endpoint passed as base_url to the plain OpenAI client,
# plus an empty API key
client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    api_key="",
    base_url="https://my-resource.openai.azure.com/",
)

agent = AssistantAgent(
    name="assistant",
    model_client=client,
)
```

And the fixed version:

```python
import os

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient

# Fixed: use the Azure client with the deployment name and Azure endpoint
client = AzureOpenAIChatCompletionClient(
    azure_deployment="gpt-4o-mini-prod",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

agent = AssistantAgent(
    name="assistant",
    model_client=client,
)
```
The key mistake is mixing OpenAI and Azure OpenAI semantics. In Azure, `azure_deployment` is the deployment name, not the base model name, and `base_url` is not a drop-in replacement for `azure_endpoint`.
A related error message looks like this:
```text
ValueError: deployment crash: Failed to create chat completion client
```

Or sometimes:

```text
openai.NotFoundError: Error code: 404 - {'error': {'message': 'Deployment not found'}}
```
If you’re using AutoGen’s older config style, make sure your config_list matches the provider exactly:
```python
import os

config_list = [
    {
        "model": "gpt-4o-mini",
        "api_key": os.environ["OPENAI_API_KEY"],
        "api_type": "openai",
    }
]
```
For Azure:
```python
import os

config_list = [
    {
        "model": "gpt-4o-mini-prod",  # the Azure deployment name, not the model family
        "api_key": os.environ["AZURE_OPENAI_API_KEY"],
        "base_url": os.environ["AZURE_OPENAI_ENDPOINT"],
        "api_type": "azure",
        "api_version": "2024-02-15-preview",
    }
]
```
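A cheap way to catch these mismatches early is a small validation pass over each entry before handing it to AutoGen. The helper below is a sketch, not an official AutoGen API; the required-field sets are assumptions based on the two configs above:

```python
# Hypothetical helper: flag provider/field mismatches in a config_list entry
# before the runtime crashes on the first model call.
REQUIRED_FIELDS = {
    "openai": {"model", "api_key"},
    "azure": {"model", "api_key", "base_url", "api_version"},
}

def validate_config_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry looks sane."""
    problems = []
    api_type = entry.get("api_type", "openai")
    required = REQUIRED_FIELDS.get(api_type)
    if required is None:
        return [f"unknown api_type: {api_type!r}"]
    for field in sorted(required):
        value = entry.get(field)
        if not value or not str(value).strip():
            problems.append(f"missing or empty field: {field!r}")
    return problems

# Flags the missing 'api_version' and 'base_url' in an Azure entry
print(validate_config_entry({"model": "gpt-4o-mini-prod", "api_key": "sk-...", "api_type": "azure"}))
```

Run it over every entry at startup and raise if any list comes back non-empty; that turns a cryptic first-request crash into a readable startup error.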
Other Possible Causes
1) Missing or empty environment variables
A blank API key usually fails at runtime, not at import time.
```python
# Broken: silently passes None when the variable is unset
api_key = os.getenv("OPENAI_API_KEY")  # returns None
client = OpenAIChatCompletionClient(model="gpt-4o-mini", api_key=api_key)
```
Fix it by failing fast:
```python
api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError at startup if missing
client = OpenAIChatCompletionClient(model="gpt-4o-mini", api_key=api_key)
```
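If you want a friendlier failure than a bare `KeyError`, a tiny helper (hypothetical, not part of AutoGen) can also catch keys that are set but blank or padded with whitespace:

```python
import os

def require_env(name: str) -> str:
    """Return a required environment variable, failing loudly if unset or blank."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"Required environment variable {name} is unset or empty")
    return value

# Usage: fails at startup, not on the first model call
# api_key = require_env("OPENAI_API_KEY")
```

The `.strip()` matters more than it looks: a key copied with a trailing newline passes an `os.environ` lookup but still gets a 401 from the provider.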
2) Package version mismatch
AutoGen has had breaking changes across package splits. If your imports don’t match your installed versions, you can get runtime failures that look like deployment issues.
```shell
pip show pyautogen autogen-agentchat autogen-ext openai
```
Bad mix:

```text
pyautogen 0.2.x + autogen-agentchat 0.4.x + old openai package
```
Use a consistent set of versions and reinstall cleanly:
```shell
pip uninstall -y pyautogen autogen-agentchat autogen-ext openai
pip install -U autogen-agentchat autogen-ext openai
```
3) Invalid deployment name in Azure
Azure OpenAI does not accept the raw model name unless that’s also your deployment name.
```python
# Broken: using the model family name instead of the Azure deployment name
AzureOpenAIChatCompletionClient(
    azure_deployment="gpt-4o-mini",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)
```
If your portal says the deployment is named `prod-chat`, use that exact string:

```python
AzureOpenAIChatCompletionClient(
    azure_deployment="prod-chat",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)
```
4) Network or proxy blocking the request
If AutoGen can initialize but crashes on first request, check outbound connectivity.
```text
httpx.ConnectError: [Errno 111] Connection refused
```
Or behind corporate proxy:
```python
import httpx

# Route requests through the corporate proxy
transport = httpx.HTTPTransport(proxy="http://proxy.internal:8080")
```
If you’re using a custom client setup, make sure your proxy settings are passed through correctly.
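A quick way to separate “bad config” from “blocked network” is a raw TCP check against the endpoint host before involving any SDK. This sketch uses only the standard library; swap in the host your client is actually configured with:

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: test the host from your base_url / azure_endpoint
print(can_reach("api.openai.com"))
```

If this returns False from the machine where AutoGen runs, no amount of config tweaking will help; fix the firewall or proxy first.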
How to Debug It
1) Print the exact client config before creating the agent

- Confirm `model`, `azure_deployment`, `base_url`, and the API key source.
- Look for empty strings or accidental whitespace.

2) Call the model client directly

- Bypass the agent and test one completion request.
- If this fails, the problem is not in `AssistantAgent`.

```python
from autogen_core.models import UserMessage

# Recent autogen-ext versions take typed message objects, not raw dicts
response = await client.create([UserMessage(content="ping", source="user")])
print(response)
```

3) Check whether you’re using OpenAI vs Azure OpenAI

- OpenAI uses `model` + `api_key`.
- Azure uses `azure_deployment` + `azure_endpoint` + `api_version`.

4) Turn on verbose logging

- Capture full stack traces from both AutoGen and the HTTP client.
- The real error is often one layer below “deployment crash”.

```python
import logging

logging.basicConfig(level=logging.DEBUG)
```
Prevention

- Keep provider-specific configs in separate files or classes. Don’t reuse one dictionary for both OpenAI and Azure unless you validate fields explicitly.
- Fail fast on startup. Read required secrets with `os.environ[...]`, not `os.getenv(...)`.
- Pin compatible versions. Lock `autogen-agentchat`, `autogen-ext`, and `openai` together in `requirements.txt` or Poetry.
If you want a quick sanity check, start with one known-good direct model call before wiring up multi-agent orchestration. That saves time when “deployment crash” is really just a bad endpoint string or missing key.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.