CrewAI Tutorial (Python): deploying with Docker for advanced developers
This tutorial shows you how to package a CrewAI Python project into a Docker image and run it reliably in a containerized environment. You’d use this when you need reproducible deployments, cleaner dependency isolation, or a way to ship agents into CI/CD, servers, or internal platforms without environment drift.
What You'll Need
- Python 3.10 or 3.11
- Docker installed and running locally
- A CrewAI project with basic agents/tasks already understood
- `crewai` installed in your project
- `python-dotenv` for local secret loading
- An LLM API key, such as:
  - `OPENAI_API_KEY`
  - or another provider supported by your CrewAI setup
- A minimal project structure like:
  - `main.py`
  - `requirements.txt`
  - `Dockerfile`
  - `.env`
Step-by-Step
- Start with a small CrewAI app that is easy to containerize. Keep the entrypoint deterministic and avoid interactive prompts, because Dockerized agents should run the same way in CI and production.

```python
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process
from crewai.llm import LLM

load_dotenv()

llm = LLM(model="gpt-4o-mini")

researcher = Agent(
    role="Research Analyst",
    goal="Summarize the key points from the user's request",
    backstory="You are precise and concise.",
    llm=llm,
    verbose=True,
)

task = Task(
    description="Write a short summary of why Docker is useful for deploying CrewAI apps.",
    expected_output="A concise summary with deployment benefits.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    process=Process.sequential,
    verbose=True,
)

if __name__ == "__main__":
    result = crew.kickoff()
    print(result)
```
- Pin your dependencies in `requirements.txt` before building the image. This keeps Docker builds stable and avoids surprises when upstream packages release breaking changes.

```
crewai==0.86.0
python-dotenv==1.0.1
```
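Pins only help if the running environment actually matches them. As a sketch of a fail-fast startup check using only the standard library (`check_pins` is an illustrative helper, not part of CrewAI), you could compare the pinned file against what is installed:

```python
from importlib.metadata import PackageNotFoundError, version


def check_pins(requirements_text):
    """Return a list of mismatches between pinned requirements and the installed environment."""
    problems = []
    for line in requirements_text.splitlines():
        line = line.strip()
        # skip blanks, comments, and anything not pinned with ==
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, pinned = line.split("==", 1)
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: not installed")
            continue
        if installed != pinned:
            problems.append(f"{name}: pinned {pinned}, installed {installed}")
    return problems
```

Calling this with the contents of `requirements.txt` at startup turns silent version drift into an explicit error before any agent runs.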
- Add a Dockerfile that installs dependencies first, then copies your application code. This ordering improves build caching and makes repeated builds much faster.

```dockerfile
FROM python:3.11-slim

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

RUN pip install --no-cache-dir --upgrade pip

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "main.py"]
```
- Keep secrets out of the image and pass them at runtime through environment variables. For local development, use `.env`; for Docker runs, inject the same values explicitly.

```bash
export OPENAI_API_KEY="your-api-key-here"

docker build -t crewai-docker-demo .

docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  crewai-docker-demo
```
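A missing key otherwise only surfaces when the first LLM call fails mid-run. One way to fail fast is to validate required variables at container startup; this is a minimal sketch (the `REQUIRED` list and `check_env` name are illustrative, not a CrewAI API):

```python
import os
import sys

# variables the container cannot run without; extend as needed
REQUIRED = ["OPENAI_API_KEY"]


def check_env(environ=os.environ):
    """Exit immediately with a clear message if any required variable is unset or empty."""
    missing = [name for name in REQUIRED if not environ.get(name)]
    if missing:
        sys.exit(f"Missing required environment variables: {', '.join(missing)}")
```

Calling `check_env()` at the top of `main.py`, before constructing the Crew, turns a confusing mid-run authentication error into an immediate, readable failure in `docker logs`.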
- If you want a cleaner developer workflow, add a `.dockerignore`. This prevents unnecessary files from bloating the build context and keeps images smaller.

```
__pycache__
*.pyc
*.pyo
*.pyd
.env
.git
.gitignore
venv
.venv
dist
build
```
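To sanity-check which paths an ignore list would exclude, you can approximate the matching in a few lines of Python. Note this is a deliberate simplification for illustration: real `.dockerignore` matching follows Go's `filepath.Match` semantics plus extra rules such as `!` negation and `**` patterns.

```python
from fnmatch import fnmatch

# mirrors the .dockerignore entries above
IGNORE = [
    "__pycache__", "*.pyc", "*.pyo", "*.pyd",
    ".env", ".git", ".gitignore",
    "venv", ".venv", "dist", "build",
]


def is_ignored(path):
    """Rough check: is any component of the path matched by an ignore pattern?"""
    return any(
        fnmatch(part, pattern)
        for part in path.split("/")
        for pattern in IGNORE
    )
```

Running it over a directory listing before a build is a quick way to confirm that `.env` and caches stay out of the context while `main.py` and `requirements.txt` get in.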
- For advanced deployments, split configuration from code so the same image can run across environments. Read model names and other runtime settings from env vars instead of hardcoding them.

```python
import os

from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process
from crewai.llm import LLM

load_dotenv()

model_name = os.getenv("CREWAI_MODEL", "gpt-4o-mini")
llm = LLM(model=model_name)

agent = Agent(
    role="Deployment Assistant",
    goal="Explain deployment details clearly",
    backstory="You write production-ready infrastructure notes.",
    llm=llm,
)

task = Task(
    description="List two reasons Docker helps deploy CrewAI apps.",
    expected_output="Two short bullet-style reasons.",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task], process=Process.sequential)

print(crew.kickoff())
```
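The env-with-default pattern above generalizes to any number of settings. One way to keep them in a single place, sketched with illustrative names (`DEFAULTS` and `resolve_settings` are not CrewAI APIs):

```python
import os

# single source of truth for runtime-tunable settings and their fallbacks
DEFAULTS = {
    "CREWAI_MODEL": "gpt-4o-mini",
    "CREWAI_VERBOSE": "0",
}


def resolve_settings(environ=os.environ):
    """Merge environment overrides on top of the defaults."""
    return {key: environ.get(key) or default for key, default in DEFAULTS.items()}
```

In `main.py`, `settings = resolve_settings()` followed by `LLM(model=settings["CREWAI_MODEL"])` keeps every tunable discoverable in one dict, and every knob stays overridable with a `-e` flag on `docker run`.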
Testing It
Run the container once with your API key set and confirm you get a non-empty response printed to stdout. If the container exits immediately with an authentication error, your environment variable wiring is wrong.
Check that the image builds without reinstalling dependencies every time; if it rebuilds too slowly, verify that requirements.txt is copied before source code in the Dockerfile. Also confirm that .env is not baked into the image by inspecting the container environment at runtime instead of inside the build stage.
For a more realistic test, change CREWAI_MODEL between runs and confirm the same image behaves differently without rebuilding it. That’s the main benefit of runtime configuration in containerized agent systems.
Next Steps
- Add a `docker-compose.yml` so you can run CrewAI alongside Redis, Postgres, or internal services.
- Move from `CMD ["python", "main.py"]` to a proper entrypoint script with structured logging and health checks.
- Learn how to externalize task inputs through mounted files or HTTP endpoints for production orchestration.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.