CrewAI Tutorial (TypeScript): deploying with Docker for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to package a CrewAI TypeScript agent into a Docker image, run it locally, and prepare it for deployment in a real environment. You need this when your agent is no longer a laptop script and has to run the same way in CI, staging, and production.

What You'll Need

  • Node.js 20+
  • Docker Desktop or Docker Engine
  • A CrewAI TypeScript project already initialized
  • An OpenAI API key or another model provider key supported by your stack
  • npm or pnpm
  • Basic familiarity with CrewAI agents, tasks, and crews
  • A .env file for secrets

Step-by-Step

  1. Start with a minimal TypeScript project layout that separates source code from runtime config. For Docker deployments, keep the app entrypoint small and make environment variables the only runtime dependency.
mkdir crewai-docker-demo
cd crewai-docker-demo
npm init -y
npm install @crewai/crewai dotenv
npm install -D typescript tsx @types/node
npx tsc --init
mkdir -p src
  2. Create your crew in TypeScript. This example defines one researcher agent, one task, and one crew, all executed from a single entrypoint.
// src/index.ts
import "dotenv/config";
import { Agent, Crew, Task } from "@crewai/crewai";

const researcher = new Agent({
  role: "Research Analyst",
  goal: "Summarize the latest risk signals for an insurance underwriting team",
  backstory: "You analyze operational data and produce concise executive summaries.",
});

const task = new Task({
  description: "Write a short summary of key underwriting risks for Q4.",
  expectedOutput: "A 3-bullet summary with clear business language.",
  agent: researcher,
});

const crew = new Crew({
  agents: [researcher],
  tasks: [task],
});

const result = await crew.kickoff();
console.log(result);
  3. Add npm scripts to package.json so you can run the app locally before containerizing it. This catches import issues early and confirms your TypeScript entrypoint is valid.
{
  "name": "crewai-docker-demo",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "dev": "tsx src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
  4. Configure TypeScript for Node ESM output. Docker builds are much easier when tsc emits clean JavaScript into a dedicated dist folder.
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "resolveJsonModule": true
  },
  "include": ["src"]
}
  5. Add a production-grade Dockerfile with a multi-stage build. The first stage installs dependencies and compiles TypeScript; the second stage ships only the compiled output plus production dependencies.
FROM node:20-slim AS builder

WORKDIR /app

COPY package*.json tsconfig.json ./
RUN npm ci

COPY src ./src
RUN npm run build

FROM node:20-slim AS runner

WORKDIR /app
ENV NODE_ENV=production

COPY package*.json ./
RUN npm ci --omit=dev

COPY --from=builder /app/dist ./dist

CMD ["node", "dist/index.js"]
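One companion file the Dockerfile benefits from, not shown in the original steps: a .dockerignore so local node_modules, build output, and secrets never enter the build context. A minimal sketch:

```
node_modules
dist
.env
npm-debug.log
```

Without this, COPY src ./src is safe, but a later broader COPY or a large build context would slow builds and risk leaking .env into image layers.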
  6. Pass secrets at runtime instead of baking them into the image. For local testing, use an env file; for deployment, inject the same variables through your platform’s secret manager.
# .env — keep this file out of the image and out of version control
OPENAI_API_KEY=your_key_here

# build the image, then run it with the env file injected at runtime
docker build -t crewai-ts-demo .
docker run --rm --env-file .env crewai-ts-demo

Testing It

Run npm run dev first to confirm the crew executes outside Docker. If that works but the container fails, your problem is usually module resolution, missing runtime files, or an env variable not being passed through.

Then run docker build and confirm the final image builds without pulling dev-only tools into the runtime stage. To verify parity with production, run docker run --rm --env-file .env crewai-ts-demo and compare the output to your local execution.
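If you suspect the env file is not reaching the container, one quick check (reusing the image name from earlier) is to override the default CMD and print whether the key is present without echoing its value:

```shell
# Probe the container environment; prints true/false, never the secret itself.
docker run --rm --env-file .env crewai-ts-demo \
  node -e "console.log('OPENAI_API_KEY set:', Boolean(process.env.OPENAI_API_KEY))"
```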

If you’re deploying to Kubernetes or ECS later, keep the same image and only change how secrets are injected. That gives you one artifact across local dev, CI, staging, and prod.
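For Kubernetes, the "same image, different secret injection" idea can be sketched like this; the secret name crewai-secrets is an assumption, not something from the original steps:

```shell
# Create a Secret from the same .env file you used locally.
kubectl create secret generic crewai-secrets --from-env-file=.env

# Then reference it from the container spec so the pod sees identical variables:
#   envFrom:
#     - secretRef:
#         name: crewai-secrets
```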

Next Steps

  • Add structured logging so each task emits traceable JSON logs.
  • Split your crew into multiple agents and tasks with explicit handoffs.
  • Add health checks and a lightweight HTTP wrapper if you want this container behind an API gateway.
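The last bullet can be sketched with nothing but Node's built-in http module. Here, runCrew is a hypothetical stand-in for the crew.kickoff() call in src/index.ts; swap in the real crew once the wrapper shape works for you:

```typescript
// src/server.ts — minimal HTTP wrapper with a health check (sketch).
import { createServer } from "node:http";

// Hypothetical stand-in for crew.kickoff() from src/index.ts.
async function runCrew(): Promise<string> {
  return "crew result";
}

const port = Number(process.env.PORT ?? 3000);

const server = createServer(async (req, res) => {
  // Liveness probe for Kubernetes/ECS health checks.
  if (req.method === "GET" && req.url === "/healthz") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }
  // Kick off the crew on demand instead of at container start.
  if (req.method === "POST" && req.url === "/run") {
    try {
      const result = await runCrew();
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ result }));
    } catch (err) {
      res.writeHead(500, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ error: String(err) }));
    }
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(port, () => console.log(`listening on ${port}`));
```

With this wrapper the Dockerfile's CMD points at the compiled server instead of the one-shot entrypoint, and your platform's health checks target /healthz.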


By Cyprian Aarons, AI Consultant at Topiax.
