AutoGen Tutorial (Python): deploying with Docker for intermediate developers
This tutorial shows you how to package a working AutoGen Python agent setup into Docker, run it locally, and keep the environment reproducible across machines. You’d use this when your agent works on your laptop but needs a clean deployment target for teammates, CI, or a server.
What You'll Need

- Python 3.10 or 3.11
- Docker Desktop or Docker Engine installed
- An OpenAI API key
- The `autogen-agentchat` and `autogen-ext` packages
- A project folder with these files:
  - `main.py`
  - `requirements.txt`
  - `Dockerfile`
  - `.env`
Step-by-Step

- Start with a minimal AutoGen agent script that reads its API key from the environment. This keeps secrets out of code and makes the same script work locally and inside Docker.
```python
# main.py
import asyncio
import os

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
        api_key=os.environ["OPENAI_API_KEY"],
    )
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a concise assistant.",
    )
    result = await agent.run(task="Write one sentence about Dockerizing AutoGen.")
    print(result.messages[-1].content)
    # Release the underlying HTTP client so the process exits cleanly.
    await model_client.close()


if __name__ == "__main__":
    asyncio.run(main())
```
- Pin your dependencies so Docker builds are deterministic. For AutoGen, install the agent chat package plus the OpenAI extension that provides the model client.
```
# requirements.txt
autogen-agentchat==0.4.8
autogen-ext[openai]==0.4.8
python-dotenv==1.0.1
```
- Add a `.env` file for local development, and make sure it is not committed to source control (add it to `.gitignore`). Docker will read the same variable at runtime if you pass it through.

```
OPENAI_API_KEY=your_openai_api_key_here
```
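The pinned `python-dotenv` package handles this kind of local loading for you. To see roughly what it does, here is a minimal stdlib sketch of a `.env` loader; this is a simplified illustration, not the python-dotenv implementation:

```python
import os
from pathlib import Path


def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: reads KEY=VALUE lines and sets them in
    os.environ without overwriting variables that are already set."""
    env_path = Path(path)
    if not env_path.exists():
        return  # silently skip when no .env is present (e.g. inside Docker)
    for raw_line in env_path.read_text().splitlines():
        line = raw_line.strip()
        # Ignore blanks, comments, and lines without an assignment.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

In practice you would just call `load_dotenv()` from python-dotenv; either way, the container itself never needs the file, because `docker run -e` or `--env-file` injects the variable directly.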
- Build a container image that installs dependencies and copies in your app code. Keep the image small by using an official slim Python base, and avoid baking secrets into the image.
```dockerfile
# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py .
CMD ["python", "main.py"]
```
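This Dockerfile only copies `requirements.txt` and `main.py`, so `.env` stays out of the image. Adding a `.dockerignore` next to the Dockerfile guards against a future `COPY . .` accidentally pulling secrets into the build context; the entries below are a suggested starting point, not an exhaustive list:

```
# .dockerignore — keeps these paths out of the build context
.env
.git
__pycache__/
*.pyc
```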
- Run the container with your API key injected at runtime. If you want to test locally before deploying anywhere else, this is the fastest path.
```bash
docker build -t autogen-docker-demo .
docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  autogen-docker-demo
```
- If you prefer loading from a `.env` file during local testing, use Docker's env-file support. This keeps your shell clean and matches how many teams wire configuration in production.
```bash
docker run --rm \
  --env-file .env \
  autogen-docker-demo
```
Testing It
When the container runs correctly, you should see a single assistant response printed to stdout. If you get an authentication error, check that `OPENAI_API_KEY` is actually available inside the container, not just in your host shell (for example, `docker run --rm --env-file .env autogen-docker-demo python -c "import os; print('OPENAI_API_KEY' in os.environ)"` should print `True`).
If the build fails, verify that your package versions match and that Docker has network access to download Python wheels. For debugging, run `docker run --rm -it autogen-docker-demo bash` and inspect the environment manually.
A good sanity check is to change the prompt in main.py, rebuild, and confirm the output changes predictably. That tells you your image is executing the exact code you expect, not some stale local state.
Next Steps

- Add a `docker-compose.yml` file if your agent needs Redis, Postgres, or another service alongside AutoGen.
- Move from a single-agent script to a multi-agent workflow using AutoGen team patterns.
- Add structured logging so container logs are usable in Kubernetes or ECS later on.
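The Compose idea above can be sketched as follows; the `redis` service and the service names are illustrative assumptions for an agent that needs a cache, not something this tutorial's code requires:

```yaml
# docker-compose.yml — minimal sketch, not a production configuration
services:
  agent:
    build: .
    env_file: .env     # same variable injection as --env-file
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```

Then `docker compose up --build` replaces the manual `docker build` and `docker run` steps from earlier.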
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit