AutoGen Tutorial (TypeScript): deploying with Docker for advanced developers

By Cyprian Aarons
Updated 2026-04-21

This tutorial shows how to package a TypeScript AutoGen agent into Docker, run it locally, and prepare it for deployment in a real environment. You need this when your agent works on your laptop but you want repeatable builds, clean runtime isolation, and a container image you can ship to CI, Kubernetes, or any container platform.

What You'll Need

  • Node.js 20+
  • Docker Desktop or Docker Engine
  • An OpenAI API key set as OPENAI_API_KEY
  • A TypeScript project with:
    • typescript
    • tsx
    • @types/node
    • @openai/autogen
  • Basic familiarity with AutoGen agents and async/await
  • A terminal that can run docker build and docker run

Step-by-Step

  1. Start with a minimal TypeScript project structure that keeps source code separate from build output. For production containers, you want deterministic installs and a small runtime surface area.
mkdir autogen-docker-ts
cd autogen-docker-ts
npm init -y
npm install @openai/autogen
npm install -D typescript tsx @types/node
npx tsc --init
mkdir src
  2. Create the agent entrypoint in TypeScript. This example uses the AutoGen imports from the TypeScript SDK and reads the API key from the environment, which is what you want in Docker and CI.
// src/index.ts
import { AssistantAgent } from "@openai/autogen";

async function main() {
  const agent = new AssistantAgent({
    name: "docker-agent",
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
    systemMessage: "You are a concise assistant for backend engineers.",
  });

  const result = await agent.run("Give me one Docker best practice for Node.js apps.");
  console.log(result);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
  3. Wire up TypeScript build scripts so the container can compile once and run the emitted JavaScript. Use a clean dist output directory; do not run TypeScript directly in production images unless you have a reason to keep the compiler around.
{
  "name": "autogen-docker-ts",
  "version": "1.0.0",
  "private": true,
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts"
  },
  "dependencies": {
    "@openai/autogen": "^0.2.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.19.0",
    "typescript": "^5.6.0"
  }
}
  4. Make sure your TypeScript config emits ESM-compatible JavaScript and includes Node types. The main thing here is avoiding module mismatch errors inside Docker, which are common when local dev uses tsx but production uses plain node.
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "types": ["node"]
  },
  "include": ["src/**/*.ts"]
}
  5. Build a multi-stage Dockerfile so dependencies are installed once, the app is compiled in a builder stage, and only the runtime artifacts ship in the final image. This is the pattern you want for smaller images and a smaller attack surface.
# syntax=docker/dockerfile:1

FROM node:20-alpine AS builder
WORKDIR /app

COPY package*.json tsconfig.json ./
RUN npm ci

COPY src ./src
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production

COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

COPY --from=builder /app/dist ./dist

CMD ["node", "dist/index.js"]
  6. Run the container with your API key injected at runtime, not baked into the image. That keeps credentials out of layers and makes promotion across environments straightforward.
docker build -t autogen-docker-ts .
docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  autogen-docker-ts
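
If you prefer not to put the key on the command line, docker run also accepts an env file. One variant, assuming a local .env that stays out of version control and out of the image (see the .dockerignore suggestion above):

# .env (local only)
OPENAI_API_KEY=your-key-here

docker run --rm --env-file .env autogen-docker-ts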

Testing It

Run npm run dev locally first to confirm the agent responds before you involve Docker. Then build the image and check that docker run prints an actual model response instead of a module error or missing-key failure.

If you see authentication issues, verify that OPENAI_API_KEY is present in the shell where you launch Docker. If you see ESM/CJS errors, re-check that package.json has "type": "module" and that tsconfig.json uses "module": "NodeNext".
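
A small guard at the top of main() also makes the missing-key case fail fast with a readable message instead of surfacing as an SDK error on the first model call; a minimal sketch (the message wording is just a suggestion):

// src/index.ts, at the start of main()
if (!process.env.OPENAI_API_KEY) {
  console.error("OPENAI_API_KEY is not set; pass it with `docker run -e OPENAI_API_KEY=...`");
  process.exit(1);
}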

For deployment readiness, inspect image size with docker images and confirm only runtime files exist in the final layer set.
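
Both checks are quick from the CLI, for example:

docker images autogen-docker-ts   # final image size
docker history autogen-docker-ts  # size contributed by each layer
docker run --rm autogen-docker-ts ls /app   # should show dist, node_modules, and package files only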

Next Steps

  • Add structured logging around agent runs so you can trace prompts and outputs in production.
  • Move configuration into environment variables for model name, temperature, and timeout values.
  • Add health checks and wrap this container behind an HTTP API if other services need to call it programmatically; a rough sketch follows below.
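
As a starting point for the last two items, the sketch below reads the model name and port from environment variables and exposes a /healthz endpoint using Node's built-in http module. The MODEL_NAME and PORT names and the /run route are assumptions, and the AssistantAgent usage simply mirrors the entrypoint above:

// src/server.ts - hypothetical HTTP wrapper around the agent
import http from "node:http";
import { AssistantAgent } from "@openai/autogen";

const agent = new AssistantAgent({
  name: "docker-agent",
  model: process.env.MODEL_NAME ?? "gpt-4o-mini", // configurable per environment
  apiKey: process.env.OPENAI_API_KEY,
  systemMessage: "You are a concise assistant for backend engineers.",
});

const server = http.createServer(async (req, res) => {
  // Liveness probe for container orchestrators
  if (req.url === "/healthz") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }

  // Forward the request body to the agent as a prompt
  if (req.method === "POST" && req.url === "/run") {
    let body = "";
    for await (const chunk of req) body += chunk;
    try {
      const result = await agent.run(body || "Say hello.");
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ result }));
    } catch (err) {
      console.error(err);
      res.writeHead(500, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ error: "agent run failed" }));
    }
    return;
  }

  res.writeHead(404);
  res.end();
});

server.listen(Number(process.env.PORT ?? 3000), () => {
  console.log("agent API listening");
});

If you go this route, point the Dockerfile CMD at node dist/server.js, add an EXPOSE line for the chosen port, and wire a Docker HEALTHCHECK (or your orchestrator's probe) against /healthz.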

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

