AutoGen Tutorial (TypeScript): deploying with Docker for beginners
This tutorial shows you how to package a TypeScript AutoGen agent into a Docker image and run it locally with environment variables, so you can move from laptop-only development to something reproducible. You need this when your agent works in Node.js but you want the same runtime, dependencies, and config every time you start it or hand it to someone else.
What You'll Need
- Node.js 20+
- Docker Desktop or Docker Engine
- An OpenAI API key
- A TypeScript project with AutoGen installed: `@autogen/core` and `@autogen/openai`
- Basic familiarity with `npm`, `tsc`, and environment variables
- A `.env` file for local development
- A working internet connection for pulling base images and calling the model API
Step-by-Step
- Start with a minimal TypeScript project and install the packages you need. I’m using the official AutoGen core package plus the OpenAI model client, because that keeps the example close to how you’d actually ship it.

```shell
mkdir autogen-docker-demo
cd autogen-docker-demo
npm init -y
npm install @autogen/core @autogen/openai dotenv
npm install -D typescript @types/node tsx
npx tsc --init
```
- Create a small agent entrypoint that reads your API key from the environment and runs one prompt. This is intentionally simple: one file, one agent, one response path, so you can verify Docker before adding tools or multi-agent workflows.

```typescript
// src/index.ts
import "dotenv/config";
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

async function main() {
  const modelClient = new OpenAIChatCompletionClient({
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
  });

  const agent = new AssistantAgent({
    name: "docker-demo-agent",
    modelClient,
    systemMessage: "You are a concise assistant.",
  });

  const result = await agent.run({
    task: "Write one sentence explaining why Docker helps deploy TypeScript agents.",
  });

  // Print the final message produced by the run.
  console.log(result.messages.at(-1)?.content);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
- Add a build script and make sure TypeScript outputs to a `dist` folder. In Docker, you want to run compiled JavaScript, not source TypeScript, because that keeps the container smaller and avoids runtime transpilation issues.

```json
{
  "name": "autogen-docker-demo",
  "private": true,
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts"
  },
  "dependencies": {
    "@autogen/core": "^0.4.0",
    "@autogen/openai": "^0.4.0",
    "dotenv": "^16.4.5"
  },
  "devDependencies": {
    "@types/node": "^22.13.10",
    "tsx": "^4.19.2",
    "typescript": "^5.8.2"
  }
}
```
- Configure TypeScript for Node ESM output and create a `.env` file for local runs. If your project uses ESM imports, your compiler settings need to match that, or Docker will faithfully reproduce the same module errors you see locally.

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*.ts"]
}
```

```
# .env
OPENAI_API_KEY=your_api_key_here
```
- Add a Dockerfile that installs dependencies, builds the app, and runs the compiled output. This is the standard production pattern: copy manifests first for better caching, install once, then copy source and build. Note that the `.env` file is deliberately left out of the image; the key is passed at runtime in the next step.

```dockerfile
# Build stage: install all deps and compile TypeScript to dist/.
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm install
COPY tsconfig.json ./
COPY src ./src
RUN npm run build

# Runtime stage: production deps only, plus the compiled output.
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json* ./
RUN npm install --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
```
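To keep local artifacts out of the build context in the first place, a `.dockerignore` next to the Dockerfile is worth adding. This is a minimal sketch; the entries are reasonable defaults for this project rather than requirements:

```
# .dockerignore
node_modules
dist
.env
```

This also speeds up `docker build`, since the context sent to the daemon stays small, and it guards against a stray `COPY . .` ever pulling your `.env` into an image layer.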
- Build and run the container with your API key passed at runtime. Don’t bake secrets into the image; keep them in environment variables so the same image can run in dev, staging, or production.

```shell
docker build -t autogen-docker-demo .

docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  autogen-docker-demo

# Or pass your whole local .env file at runtime instead:
# docker run --rm --env-file .env autogen-docker-demo
```
Testing It
If everything is wired correctly, the container should print a short answer from your agent instead of crashing on startup. The most common failure points are a missing `OPENAI_API_KEY`, an incorrect model name, or an ESM/TypeScript mismatch between your local config and what Docker builds.
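One way to make the missing-key failure obvious is to validate the environment at startup, before constructing the model client. This is a sketch; `requireEnv` is a local helper I’m introducing here, not part of AutoGen:

```typescript
// Fail fast with a clear error message instead of a confusing
// stack trace from deep inside the model client.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Call this at the top of main(), before building the client:
// const apiKey = requireEnv("OPENAI_API_KEY");
```

Because the error names the exact variable, a bad `docker run` invocation is diagnosable from the container logs alone.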
If you want a quick sanity check before building the image, run it locally first:

```shell
npm run dev
```

Then compare that output with the container output from `docker run`. If both work, your deployment path is stable enough to extend with tools, memory, or multi-agent orchestration.
Next Steps
- Add a healthcheck endpoint or CLI exit codes so your container can be monitored by an orchestrator.
- Split secrets from configuration by using Docker Compose or Kubernetes secrets.
- Move from a single assistant to an AutoGen team pattern once your deployment pipeline is stable.
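The Compose route for splitting secrets from configuration can be sketched like this. A minimal `docker-compose.yml` (the service name is illustrative; the image tag matches the demo) keeps the key in the host environment rather than in the image:

```yaml
services:
  agent:
    image: autogen-docker-demo
    environment:
      # Resolved from the host shell (or a local .env file that
      # Compose reads automatically) at `docker compose up` time,
      # never stored in the image layers.
      OPENAI_API_KEY: ${OPENAI_API_KEY}
```

Run it with `docker compose up` after building the image, or add a `build: .` key to the service so Compose builds it for you.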
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.