AutoGen Tutorial (Python): deploying with Docker for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to package a real AutoGen Python agent workflow into Docker so you can run it consistently across laptops, CI, and server environments. You need this when your agent stops being a notebook experiment and starts needing repeatable deployment, isolated dependencies, and predictable runtime behavior.

What You'll Need

  • Python 3.11+
  • Docker Desktop or Docker Engine
  • An OpenAI API key exported as OPENAI_API_KEY
  • autogen-agentchat installed in your project
  • python-dotenv for local environment loading
  • A working AutoGen setup with at least one assistant agent
  • Basic familiarity with docker build and docker run

Step-by-Step

  1. Start by creating a minimal project structure that keeps your agent code separate from container concerns. For deployment work, this matters because you want the same code path locally and inside Docker.
autogen-docker-demo/
├── app.py
├── requirements.txt
├── Dockerfile
└── .dockerignore
  2. Add the dependencies your container will install. I’m using the current AutoGen agent chat package, the OpenAI extension package it needs for the model client, and python-dotenv so the app can load local secrets during development.
# requirements.txt
autogen-agentchat==0.4.8
autogen-ext[openai]==0.4.8
python-dotenv==1.0.1
  3. Write a small AutoGen program that creates an assistant agent and runs a single task. This version is intentionally simple, but it uses real AutoGen imports and the same pattern you’ll use in a larger service. Note that the AgentChat API is async, so the entry point runs under asyncio.
# app.py
import asyncio
import os

from dotenv import load_dotenv

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

load_dotenv()

async def main() -> None:
    client = OpenAIChatCompletionClient(
        model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        api_key=os.environ["OPENAI_API_KEY"],
    )

    agent = AssistantAgent(
        name="assistant",
        model_client=client,
        system_message="You are a concise assistant.",
    )

    task = TextMessage(
        content="Write one sentence about why Docker helps deploy agents.",
        source="user",
    )
    result = await agent.run(task=task)

    for message in result.messages:
        print(f"{message.source}: {message.content}")

    await client.close()

if __name__ == "__main__":
    asyncio.run(main())
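Because the app reads the key with os.environ["OPENAI_API_KEY"], a missing variable dies with a bare KeyError and an unhelpful traceback in the container logs. A small fail-fast helper makes misconfiguration obvious at startup; this is a sketch, and require_env is a hypothetical helper name, not part of AutoGen:

```python
# Fail fast with a clear message when a required variable is missing.
# require_env is a hypothetical helper, not part of the AutoGen API.
import os
import sys

def require_env(name: str) -> str:
    value = os.getenv(name)
    if not value:
        print(f"Missing required environment variable: {name}", file=sys.stderr)
        sys.exit(1)
    return value

# Usage inside main(): api_key = require_env("OPENAI_API_KEY")
```

Exiting with a nonzero status also lets Docker orchestrators detect the failed start instead of treating a crashed agent as a clean exit.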
  4. Create a Dockerfile that installs dependencies, copies the app, and runs it with unbuffered Python output so logs appear immediately. The important part here is keeping the image small and deterministic and not baking secrets into it.
FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]
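By default this image runs as root. If you want a slightly hardened variant, the same Dockerfile can drop to an unprivileged user; this is a sketch, and the appuser name and UID are arbitrary choices:

```dockerfile
FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Create an unprivileged user so the agent process does not run as root.
RUN useradd --create-home --uid 10001 appuser
USER appuser

COPY app.py .

CMD ["python", "app.py"]
```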
  5. Add a .dockerignore file so your build context stays clean. This avoids sending virtual environments, caches, and local secrets into the image build.
__pycache__/
*.pyc
*.pyo
*.pyd
.git/
.env
.venv/
venv/
dist/
build/
  6. Build and run the container with your API key passed at runtime. In production, keep secrets in your orchestrator or secret manager; for local testing, environment variables are enough.
docker build -t autogen-docker-demo .
docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -e OPENAI_MODEL="gpt-4o-mini" \
  autogen-docker-demo
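Once you have more than a couple of variables, repeating -e flags gets error-prone. Docker's --env-file flag reads KEY=value lines from a file instead; a minimal sketch, assuming a local .env.docker file that holds non-secret settings only:

```shell
# Keep non-secret runtime settings in an env file instead of repeated -e flags.
cat > .env.docker <<'EOF'
OPENAI_MODEL=gpt-4o-mini
EOF

# Secrets still come from the shell, not from a file committed near the repo:
# docker run --rm --env-file .env.docker \
#   -e OPENAI_API_KEY="$OPENAI_API_KEY" autogen-docker-demo
echo "env file ready"
```

Remember to add .env.docker to .dockerignore (and .gitignore) if you ever put anything sensitive in it.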

Testing It

If everything is wired correctly, the container should print at least one assistant response to stdout and exit cleanly with code 0. If you see an authentication error, check that OPENAI_API_KEY is actually present in the shell running docker run. If you see an import error, confirm that requirements.txt matches the package versions installed inside the image.

For a more realistic deployment check, rebuild after changing only app.py and make sure the container still works without modifying anything else. That tells you your image is reproducible and not depending on local state from your machine.

Next Steps

  • Add structured logging so agent runs are observable in containers.
  • Replace the single-turn example with a multi-agent workflow using AutoGen group chat.
  • Mount a config file, or use Docker Compose to connect external services like Redis or PostgreSQL for stateful agents.

By Cyprian Aarons, AI Consultant at Topiax.