AutoGen Tutorial (TypeScript): deploying with Docker for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to package a TypeScript AutoGen agent into a Docker image and run it locally with environment variables, so you can move from laptop-only development to something reproducible. You need this when your agent works in Node.js but you want the same runtime, dependencies, and config every time you start it or hand it to someone else.

What You'll Need

  • Node.js 20+
  • Docker Desktop or Docker Engine
  • An OpenAI API key
  • A TypeScript project with AutoGen installed:
    • @autogen/core
    • @autogen/openai
  • Basic familiarity with:
    • npm
    • tsc
    • environment variables
  • A .env file for local development
  • A working internet connection for pulling base images and calling the model API

Step-by-Step

  1. Start with a minimal TypeScript project and install the packages you need. I’m using the official AutoGen core package plus the OpenAI model client, because that keeps the example close to how you’d actually ship it.
mkdir autogen-docker-demo
cd autogen-docker-demo
npm init -y
npm install @autogen/core @autogen/openai dotenv
npm install -D typescript @types/node tsx
npx tsc --init
  2. Create a small agent entrypoint that reads your API key from the environment and runs one prompt. This is intentionally simple: one file, one agent, one response path, so you can verify Docker before adding tools or multi-agent workflows.
// src/index.ts
import "dotenv/config";
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

async function main() {
  const modelClient = new OpenAIChatCompletionClient({
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
  });

  const agent = new AssistantAgent({
    name: "docker-demo-agent",
    modelClient,
    systemMessage: "You are a concise assistant.",
  });

  const result = await agent.run({
    task: "Write one sentence explaining why Docker helps deploy TypeScript agents.",
  });

  console.log(result.messages.at(-1)?.content);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
  3. Add a build script and make sure TypeScript outputs to a dist folder. In Docker, you want to run compiled JavaScript, not source TypeScript, because that keeps the container smaller and avoids runtime transpilation issues.
{
  "name": "autogen-docker-demo",
  "private": true,
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts"
  },
  "dependencies": {
    "@autogen/core": "^0.4.0",
    "@autogen/openai": "^0.4.0",
    "dotenv": "^16.4.5"
  },
  "devDependencies": {
    "@types/node": "^22.13.10",
    "tsx": "^4.19.2",
    "typescript": "^5.8.2"
  }
}
  4. Configure TypeScript for Node ESM output and create a .env file for local runs. If your project uses ESM imports, your compiler settings need to match, or Docker will faithfully reproduce the same module errors you see locally.
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*.ts"]
}
# .env
OPENAI_API_KEY=your_api_key_here
  5. Add a Dockerfile that installs dependencies, builds the app, and runs the compiled output. This is the standard production pattern: copy manifests first for better caching, install once, then copy source and build.
FROM node:20-alpine AS builder

WORKDIR /app

COPY package.json package-lock.json* ./
RUN npm install

COPY tsconfig.json ./
COPY src ./src
# Note: do not COPY .env into the image; secrets are passed in at runtime instead.

RUN npm run build

FROM node:20-alpine

WORKDIR /app
ENV NODE_ENV=production

COPY package.json package-lock.json* ./
RUN npm install --omit=dev

COPY --from=builder /app/dist ./dist

CMD ["node", "dist/index.js"]
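Because the Dockerfile uses broad COPY commands, it is worth adding a .dockerignore so local artifacts and secrets never enter the build context in the first place. A minimal example (these entries are a suggestion; adjust them to your project):

```
# .dockerignore
node_modules
dist
.env
.git
```

With .env excluded here, an accidental `COPY . .` later in the Dockerfile's life still cannot leak your API key into an image layer.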
  6. Build and run the container with your API key passed at runtime. Don’t bake secrets into the image; keep them in environment variables so the same image can run in dev, staging, or production.
docker build -t autogen-docker-demo .
docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  autogen-docker-demo

# Or load every variable from your local .env file instead:
docker run --rm --env-file .env autogen-docker-demo

Testing It

If everything is wired correctly, the container should print a short answer from your agent instead of crashing on startup. The most common failure points are missing OPENAI_API_KEY, an incorrect model name, or an ESM/TypeScript mismatch between your local config and what Docker builds.
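To catch the first of those failure points early, you can fail fast on a missing key before the model client ever runs. This is a sketch, not an AutoGen API; the `requireEnv` helper name is my own:

```typescript
// Hypothetical fail-fast helper (requireEnv is my own name, not part of
// AutoGen): exit with a clear message instead of letting the model client
// fail later with a confusing authentication error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    console.error(`Missing required environment variable: ${name}`);
    process.exit(1);
  }
  return value;
}

// In src/index.ts you would then construct the client with:
//   apiKey: requireEnv("OPENAI_API_KEY"),
```

A guard like this turns a cryptic 401 from the API into a one-line error that tells you exactly which `-e` flag you forgot.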

If you want a quick sanity check before building the image, run it locally first:

npm run dev

Then compare that output with the container output from docker run. If both work, your deployment path is stable enough to extend with tools, memory, or multi-agent orchestration.

Next Steps

  • Add a healthcheck endpoint or CLI exit codes so your container can be monitored by an orchestrator.
  • Split secrets from configuration by using Docker Compose or Kubernetes secrets.
  • Move from a single assistant to an AutoGen team pattern once your deployment pipeline is stable.
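For the first item, a minimal health endpoint can be sketched with Node's built-in http module. The /healthz path and port 8080 are assumptions, and none of this is AutoGen-specific:

```typescript
import { createServer } from "node:http";

// Route logic kept as a plain function so it is easy to test directly.
function handleHealth(url: string | undefined): { status: number; body: string } {
  if (url === "/healthz") {
    return { status: 200, body: JSON.stringify({ status: "ok" }) };
  }
  return { status: 404, body: "" };
}

const server = createServer((req, res) => {
  const { status, body } = handleHealth(req.url);
  res.writeHead(status, { "Content-Type": "application/json" });
  res.end(body);
});

// Remember to publish the port when running: docker run -p 8080:8080 ...
server.listen(8080, () => console.log("health endpoint listening on :8080"));
```

An orchestrator can then probe GET /healthz and restart the container on anything other than a 200.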


By Cyprian Aarons, AI Consultant at Topiax.
