How to Fix 'connection timeout' in AutoGen (Python)
What the error means
A `connection timeout` in AutoGen usually means the Python process tried to reach an LLM endpoint, tool endpoint, or internal service, but nothing responded before the timeout window expired. In practice, it shows up when you create an `AssistantAgent`, `OpenAIWrapper`, or `OpenAIChatCompletionClient` and the underlying network call hangs or is blocked.
You’ll typically see it during agent initialization, during the first `initiate_chat()` call, or when a tool call triggers an external API request.
The Most Common Cause
The #1 cause is a bad model endpoint configuration: wrong base URL, wrong API key, or pointing AutoGen at a local server that is not actually running.
This is common with autogen and autogen-agentchat setups because the code looks fine, but the client is waiting on a dead endpoint.
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Points to an invalid host or port | Points to a live OpenAI-compatible endpoint |
| Uses placeholder credentials | Uses real credentials from environment variables |
| No timeout handling | Explicit timeout + connectivity check |
```python
# BROKEN: invalid base_url / dead endpoint
import os

from autogen import AssistantAgent

config_list = [
    {
        "model": "gpt-4o-mini",
        "api_key": os.getenv("OPENAI_API_KEY", "sk-xxxxx"),
        "base_url": "http://localhost:8000/v1",  # nothing is listening here
    }
]

assistant = AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
```
```python
# FIXED: valid OpenAI-compatible endpoint and explicit env vars
import os

from autogen import AssistantAgent

config_list = [
    {
        "model": "gpt-4o-mini",
        "api_key": os.environ["OPENAI_API_KEY"],
        # Remove base_url unless you are using a local proxy/server
        # "base_url": "http://localhost:8000/v1",
    }
]

assistant = AssistantAgent(
    name="assistant",
    llm_config={
        "config_list": config_list,
        "timeout": 60,
    },
)
```
If you are using a local model server like vLLM, Ollama, LM Studio, or Azure OpenAI behind a proxy, verify that the server is up and the route matches what AutoGen expects.
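One way to catch this class of problem early is a small pre-flight check that hits the endpoint before any agent is constructed. Below is a minimal sketch using only the standard library; the `/models` route assumes an OpenAI-compatible server, and the URL in the usage comment is a placeholder:

```python
import urllib.error
import urllib.request


def endpoint_is_alive(base_url: str, api_key: str, timeout: float = 5.0) -> bool:
    """Return True if an OpenAI-compatible /models route answers in time."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        # Covers refused connections, DNS failures, timeouts, and auth errors
        return False


# Example: bail out early instead of hanging inside initiate_chat()
# if not endpoint_is_alive("http://localhost:8000/v1", "sk-..."):
#     raise SystemExit("Model endpoint is down -- fix the config first")
```

A check like this turns a silent multi-minute hang into an immediate, readable failure.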
Other Possible Causes
1) The model server is running but not reachable from your process
This happens in Docker, Kubernetes, WSL, or remote dev environments. localhost inside a container is not your laptop.
```python
# Wrong: container can't reach host localhost
"base_url": "http://localhost:8000/v1"

# Right: use host.docker.internal or the service name on the network
"base_url": "http://host.docker.internal:8000/v1"
```
2) Proxy or firewall blocks outbound traffic
Corporate networks often block direct calls to OpenAI/Azure/OpenRouter endpoints. The result looks like a timeout even though your code is correct.
```bash
# Check proxy env vars
echo $HTTP_PROXY
echo $HTTPS_PROXY
echo $NO_PROXY
```
If your environment requires a proxy:
```python
import os

os.environ["HTTPS_PROXY"] = "http://proxy.company.local:8080"
os.environ["HTTP_PROXY"] = "http://proxy.company.local:8080"
```
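To find out whether a given proxy actually forwards traffic, you can route a single request through it explicitly and look at how it fails. A standard-library sketch; the proxy address you pass in is a placeholder for your real one:

```python
import urllib.error
import urllib.request


def check_proxy(proxy_url: str,
                target: str = "https://api.openai.com/v1/models",
                timeout: float = 10.0) -> str:
    """Send one request through an explicit proxy and report what happened."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    )
    try:
        with opener.open(target, timeout=timeout) as resp:
            return f"reached target, HTTP {resp.status}"
    except urllib.error.HTTPError as exc:
        # The proxy forwarded the request; the target answered with an error
        return f"reached target, HTTP {exc.code}"
    except OSError as exc:
        # Refused/unreachable proxy, DNS failure, or timeout
        return f"could not get through: {type(exc).__name__}"


# Example: print(check_proxy("http://proxy.company.local:8080"))
```

"Reached target" with any HTTP status means the network path works and the problem is credentials or config; "could not get through" points at the proxy or firewall.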
3) Timeout is too low for the prompt size or model latency
Large prompts, slow models, or tool-heavy workflows can exceed default timeouts.
```python
# Too aggressive for long prompts / slow endpoints
llm_config = {
    "config_list": config_list,
    "timeout": 10,
}

# Better for production debugging
llm_config = {
    "config_list": config_list,
    "timeout": 120,
}
```
4) Wrong API version or deployment name in Azure OpenAI
With Azure OpenAI, deployment_name, api_version, and endpoint must all match. A mismatch often surfaces as retrying requests that eventually time out.
```python
# Example of a common mismatch pattern
{
    "model": "gpt-4o-mini",
    "api_type": "azure",
    "api_base": "https://my-resource.openai.azure.com/",
    "api_version": "2024-02-15-preview",
    # deployment_name must exist in Azure portal exactly as configured
    "deployment_name": "gpt-4o-mini-prod"
}
```
5) Tool function hangs before returning control
If your agent calls a Python tool that waits on another service, AutoGen will wait too. The timeout may be blamed on AutoGen even though the real problem is inside your tool.
```python
import requests


def lookup_customer(customer_id: str):
    # Bad: no timeout on the downstream request -- this can hang indefinitely
    return requests.get(f"https://internal-api/customers/{customer_id}").json()
```
Fix it by setting timeouts on downstream calls:
```python
def lookup_customer(customer_id: str):
    return requests.get(
        f"https://internal-api/customers/{customer_id}",
        timeout=10,
    ).json()
```
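Per-request timeouts help, but note that the `requests` timeout bounds individual socket operations, not total call time, so a slow trickle of bytes can still stall a tool. A defensive, standard-library sketch that caps total wall-clock time of any tool call from the caller's side; the deadline values are arbitrary examples:

```python
import concurrent.futures
import time


def run_with_deadline(fn, *args, deadline_s: float = 5.0, **kwargs):
    """Run fn in a worker thread; raise TimeoutError if it exceeds deadline_s.

    Note: the worker thread keeps running in the background after the
    timeout -- this bounds *waiting*, not the work itself.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn, *args, **kwargs).result(timeout=deadline_s)
    finally:
        pool.shutdown(wait=False)


def slow_tool():
    time.sleep(2)  # stands in for a hung downstream call
    return "done"


try:
    run_with_deadline(slow_tool, deadline_s=0.5)
    print("tool finished in time")
except concurrent.futures.TimeoutError:
    print("tool exceeded its deadline")
```

This keeps the agent loop responsive even when a downstream dependency misbehaves, at the cost of an orphaned worker thread.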
How to Debug It
1) Verify the endpoint outside AutoGen

- Hit the same URL with `curl` or Postman. If this fails, AutoGen is not the problem.

```bash
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

2) Reduce the setup to one agent and one prompt

- Remove tools, memory, group chat logic, and nested agents.
- If the minimal case works, add components back one by one.

3) Print the resolved config

- Check `base_url`, the `api_key` source, deployment name, and timeout.
- Most “mystery” timeouts are bad config values hidden in env vars.

```python
print(config_list)
print(llm_config)
```

4) Turn on HTTP/client logging

- You want to see whether the client never connects, retries forever, or times out after connecting.
- For `requests`-based tool code:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
```
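If your AutoGen version talks to the model through the `openai` Python client (recent versions do), its HTTP layer is `httpx`/`httpcore`, and raising those loggers to debug level exposes connect, retry, and timeout detail without flooding everything else. A sketch, assuming that client stack:

```python
import logging

# Keep the rest of the app at INFO so the output stays readable
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

# Connection attempts, retries, and timeouts surface on these loggers
# when the client stack is httpx/httpcore (used by the openai package)
logging.getLogger("httpx").setLevel(logging.DEBUG)
logging.getLogger("httpcore").setLevel(logging.DEBUG)
```

A request that never logs a connection event points at DNS, proxy, or firewall; one that connects and then stalls points at the server or the timeout setting.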
Prevention
- Use environment validation at startup: fail fast if `OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, or `base_url` are missing.
- Set explicit timeouts everywhere: the LLM client timeout plus downstream HTTP timeouts in tools.
- Keep a known-good health check: one script that calls your exact model endpoint before running agents.
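The first prevention point can be a few lines that run before any agent is constructed. A minimal sketch; the variable list is an example and should match whatever your deployment actually requires:

```python
import os

# Variables your deployment actually needs -- adjust to your setup
REQUIRED_ENV = ["OPENAI_API_KEY"]


def validate_env(required=None):
    """Fail fast at startup instead of timing out mid-conversation."""
    missing = [name for name in (required or REQUIRED_ENV) if not os.getenv(name)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )


# Call validate_env() once at process startup, before building any agents.
```

A missing key caught here produces a one-line error at launch rather than a timeout buried in an agent conversation.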
If you’re deploying AutoGen in production workflows for banking or insurance use cases, treat timeouts as configuration bugs first and model bugs second. In most cases, the fix is boring: correct endpoint, correct credentials, correct network path.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.