AutoGen Tutorial (TypeScript): deploying with Docker for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to package a TypeScript AutoGen agent app into a Docker image and run it consistently across local machines and servers. You need this when your agent works on your laptop but you want repeatable builds, environment isolation, and an easy path to deployment.

What You'll Need

  • Node.js 20+
  • Docker Desktop or Docker Engine
  • An OpenAI API key
  • A TypeScript project with AutoGen installed
  • npm or pnpm
  • Basic familiarity with:
    • tsconfig.json
    • environment variables
    • container builds

Install the dependencies first:

npm init -y
npm install @autogenai/autogen openai dotenv
npm install -D typescript tsx @types/node
npx tsc --init

Step-by-Step

  1. Create a small AutoGen agent entrypoint in TypeScript. This example uses the OpenAI client under the hood and keeps the app simple enough to containerize cleanly.
import "dotenv/config";
import { AssistantAgent } from "@autogenai/autogen";
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function main() {
  const agent = new AssistantAgent({
    name: "docker-agent",
    modelClient: client,
    systemMessage: "You are a concise assistant.",
  });

  const result = await agent.run({
    task: "Write one sentence explaining why Docker helps deploy TypeScript agents.",
  });

  console.log(result.messages.at(-1)?.content);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
  2. Add a production-friendly TypeScript configuration and scripts. The key thing here is compiling to plain JavaScript for the container, rather than running TypeScript directly in production. First, tsconfig.json:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "resolveJsonModule": true
  },
  "include": ["src"]
}

Then the scripts in package.json:

{
  "name": "autogen-docker-ts",
  "private": true,
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts"
  }
}
  3. Put your source file in src/index.ts, then create a .env file locally. In Docker, you should inject secrets at runtime instead of baking them into the image.
mkdir -p src
cat > .env << 'EOF'
OPENAI_API_KEY=your_openai_api_key_here
EOF
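Since the key only arrives at runtime, it's worth failing fast when it's missing. Here is a minimal sketch you could place in src/index.ts before constructing the client; requireEnv is a hypothetical helper, not part of AutoGen or the OpenAI SDK:

// Hypothetical helper: exit with a clear message instead of letting the
// OpenAI client surface a less obvious auth error mid-run.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    console.error(`Missing required environment variable: ${name}`);
    process.exit(1);
  }
  return value;
}

const apiKey = requireEnv("OPENAI_API_KEY");

You would then pass the validated value with new OpenAI({ apiKey }).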
  4. Add a multi-stage Dockerfile. This keeps build tooling out of the runtime image, which is what you want for smaller images and a smaller attack surface.
# Build stage: install all dependencies and compile TypeScript to dist/
FROM node:20-alpine AS builder

WORKDIR /app

COPY package*.json tsconfig.json ./
RUN npm ci

COPY src ./src
RUN npm run build

# Runtime stage: production dependencies plus the compiled output only
FROM node:20-alpine AS runner

WORKDIR /app

ENV NODE_ENV=production

COPY package*.json ./
RUN npm ci --omit=dev

COPY --from=builder /app/dist ./dist

CMD ["node", "dist/index.js"]
  5. Add a .dockerignore so your image stays lean and the build context doesn't pick up local junk. Ignoring dist is deliberate: the builder stage compiles a fresh copy inside the image. Excluding .env also keeps your API key out of the image layers. This matters more than people think once agents start accumulating logs, caches, and test artifacts.
node_modules
dist
.env
.git
.gitignore
Dockerfile
README.md
  6. Build and run the container with your API key passed as an environment variable (docker run --env-file .env also works locally). If the container starts correctly, you should see the agent's response printed to stdout.
docker build -t autogen-docker-ts .

docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  autogen-docker-ts

Testing It

The first check is whether docker build completes without errors. If it fails, it's usually a TypeScript compile issue, a missing dependency, or a mismatch between your package.json module type and tsconfig.json; with "type": "module" set in package.json, the "module": "NodeNext" setting above is the matching configuration.

Next, confirm the container exits cleanly after printing one response from the agent. If it hangs, your app probably has an open handle or an unhandled async path that never resolves.
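One way to surface hangs instead of waiting on them is a hard deadline around main(). Here is a minimal sketch that would replace the plain main().catch(...) call, assuming a 60-second budget suits your task; withTimeout is a hypothetical helper, not an AutoGen API:

// Hypothetical guard: race the agent run against a deadline so a stuck
// handle becomes a visible non-zero exit instead of a silent hang.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) => {
      const timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
      // Don't let the timer itself keep the process alive once main() resolves.
      timer.unref();
    }),
  ]);
}

withTimeout(main(), 60_000).catch((error) => {
  console.error(error);
  process.exit(1);
});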

If you want a stricter verification pass, run it twice and compare output shape rather than exact text. For LLM-backed apps, deterministic output is not realistic unless you lock temperature and model behavior very tightly.
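As a concrete version of that check, a small script can run the container twice and assert on shape only. This is a sketch; the file name smoke-test.ts and the non-empty-output expectation are illustrative assumptions:

// smoke-test.ts (hypothetical): run the container twice and assert each run
// prints a non-empty response, rather than comparing exact wording.
import { execFileSync } from "node:child_process";

for (let i = 1; i <= 2; i++) {
  const output = execFileSync(
    "docker",
    ["run", "--rm", "-e", `OPENAI_API_KEY=${process.env.OPENAI_API_KEY}`, "autogen-docker-ts"],
    { encoding: "utf8" },
  ).trim();

  if (output.length === 0) {
    console.error(`Run ${i} produced empty output`);
    process.exit(1);
  }
  console.log(`Run ${i} OK (${output.length} chars)`);
}

You can run it with npx tsx smoke-test.ts, since tsx is already in your dev dependencies.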

Next Steps

  • Add structured logging with JSON output so container logs are usable in CloudWatch or Stackdriver.
  • Move from a single-agent script to an AutoGen group chat flow with explicit role separation.
  • Add health checks and an HTTP wrapper so Kubernetes or ECS can manage the service properly (a sketch follows this list).
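To make the first and last bullets concrete, here is a minimal sketch of an HTTP wrapper with a health endpoint and JSON logs, using only Node's built-in http module. The /healthz route and the log field names are assumptions, not a prescribed AutoGen pattern:

import { createServer } from "node:http";

// Hypothetical JSON logger so container logs parse cleanly in CloudWatch or
// Stackdriver.
function log(level: string, message: string, extra: Record<string, unknown> = {}) {
  console.log(JSON.stringify({ level, message, time: new Date().toISOString(), ...extra }));
}

const port = Number(process.env.PORT ?? 3000);

const server = createServer((req, res) => {
  if (req.url === "/healthz") {
    // Liveness probe target for Kubernetes or an ECS health check.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }
  // A real wrapper would dispatch agent tasks here.
  res.writeHead(404);
  res.end();
});

server.listen(port, () => {
  log("info", "server listening", { port });
});

If you go this route, add EXPOSE 3000 to the Dockerfile, point CMD at the server instead of the one-shot script, and target /healthz from the orchestrator's liveness probe.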

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

