AutoGen Tutorial (TypeScript): testing agents locally for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to run and test AutoGen agents locally in TypeScript without wiring them into your full app. You’ll build a small, reproducible harness so you can validate agent behavior, inspect messages, and catch bad prompts before they hit production.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project with ts-node or a build step via tsc
  • @autogenai/autogen installed
  • dotenv installed for local environment variables
  • An OpenAI API key in .env
  • Basic familiarity with AutoGen agents, messages, and model configuration

Install the packages first:

npm install @autogenai/autogen dotenv
npm install -D typescript ts-node @types/node

Create a .env file:

OPENAI_API_KEY=your_key_here

Step-by-Step

  1. Create a minimal TypeScript entrypoint for local testing. Keep it small and deterministic so you can rerun the same conversation while you tune prompts and tool behavior.
import "dotenv/config";
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const assistant = new AssistantAgent({
  name: "assistant",
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
  systemMessage: "You are a concise assistant that answers in one paragraph.",
});

const user = new UserProxyAgent({
  name: "user",
});

async function main() {
  const result = await user.initiateChat(assistant, {
    message: "Explain what local agent testing is in one sentence.",
    maxTurns: 2,
  });

  console.log(JSON.stringify(result, null, 2));
}

main().catch(console.error);
  2. Add a second agent when you need to test multi-agent handoff locally. This is useful for validating role boundaries, especially when one agent drafts and another reviews.
import "dotenv/config";
import { AssistantAgent } from "@autogenai/autogen";

const drafter = new AssistantAgent({
  name: "drafter",
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
  systemMessage: "Draft short policy summaries.",
});

const reviewer = new AssistantAgent({
  name: "reviewer",
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
  systemMessage: "Review drafts for clarity and correctness.",
});

async function main() {
  const draft = await drafter.run("Summarize the purpose of KYC checks.");
  const review = await reviewer.run(`Review this draft:\n${draft.response}`);

  console.log("DRAFT:", draft.response);
  console.log("REVIEW:", review.response);
}

main().catch(console.error);
  3. Capture structured output so your tests can assert on fields instead of brittle free text. For local testing, this is the difference between “it seems fine” and “I can prove it’s fine.”
import "dotenv/config";
import { AssistantAgent } from "@autogenai/autogen";

type TicketSummary = {
  severity: "low" | "medium" | "high";
  summary: string;
};

const agent = new AssistantAgent({
  name: "classifier",
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
  systemMessage:
    'Return JSON only with keys "severity" and "summary".',
});

async function main() {
  const result = await agent.run(
    "Customer reports duplicate card charges over three days."
  );

  const parsed = JSON.parse(result.response) as TicketSummary;
  console.log(parsed.severity);
  console.log(parsed.summary);
}

main().catch(console.error);
  4. Run the same conversation through a local assertion script. This gives you a repeatable smoke test you can wire into CI later.
import "dotenv/config";
import assert from "node:assert/strict";
import { AssistantAgent } from "@autogenai/autogen";

async function main() {
  const agent = new AssistantAgent({
    name: "tester",
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
    systemMessage: 'Return JSON only with keys "answer" and "confidence".',
  });

  const result = await agent.run("What is the capital of France?");
  const output = JSON.parse(result.response) as {
    answer?: string;
    confidence?: number;
  };

  assert.equal(output.answer?.toLowerCase(), "paris");
  assert.ok((output.confidence ?? 0) >= 0.7);

  console.log("Local test passed");
}

main().catch(console.error);
  5. Wrap it in an npm script so testing becomes one command instead of manual copy-paste. That makes it practical to rerun after every prompt or tool change.
{
  "scripts": {
    "test-agent": "ts-node src/test-agent.ts"
  }
}

Run it with:

npm run test-agent

Testing It

Start by running the script against a simple prompt with an obvious expected answer, like a geography question or a fixed-format JSON response. If the agent returns malformed JSON or drifts from the requested format, tighten the system message before adding tools or more complex workflows.
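One common source of malformed output is a model wrapping its JSON in markdown code fences. Before tightening the system message, it can help to normalize the raw response. Here is a minimal sketch of such a helper (`parseModelJson` is an illustrative name, not part of the AutoGen API):

```typescript
// Sketch: strip optional markdown code fences from a model response
// before parsing it as JSON. Throws if the result still isn't valid JSON,
// which is what you want in a local test: fail loudly, not silently.
function parseModelJson<T>(raw: string): T {
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, "") // leading ```json fence, if any
    .replace(/\s*```$/, "")           // trailing ``` fence, if any
    .trim();
  return JSON.parse(cleaned) as T;
}
```

You can then swap this in wherever the earlier steps call `JSON.parse(result.response)` directly.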

Then test failure paths by giving ambiguous input and checking whether your assertions fail loudly. For multi-agent setups, verify each agent stays inside its role and that handoff text is predictable enough for downstream parsing.
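“Fail loudly” is easiest to guarantee with an explicit validator rather than a type cast. A minimal sketch, reusing the `TicketSummary` shape from step 3 (the validator itself is illustrative, not an AutoGen feature):

```typescript
type TicketSummary = {
  severity: "low" | "medium" | "high";
  summary: string;
};

// Validate an untyped parsed value and throw a descriptive error on any
// missing or out-of-range field, instead of letting `as TicketSummary`
// silently pass bad data downstream.
function validateTicketSummary(value: unknown): TicketSummary {
  const v = value as Partial<TicketSummary>;
  if (v?.severity !== "low" && v?.severity !== "medium" && v?.severity !== "high") {
    throw new Error(`invalid severity: ${String(v?.severity)}`);
  }
  if (typeof v.summary !== "string" || v.summary.length === 0) {
    throw new Error("missing or empty summary");
  }
  return { severity: v.severity, summary: v.summary };
}
```

With this in place, ambiguous input that makes the model skip a field turns into a clear test failure rather than an `undefined` propagating through your assertions.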

If you’re using tools later, keep this same harness and stub external calls at the boundary. That way your local tests cover orchestration logic without depending on live systems every time.
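Stubbing at the boundary can be as simple as depending on an interface and injecting a canned implementation in tests. A sketch under assumed names (`ExchangeRateTool` and friends are illustrative, not AutoGen APIs):

```typescript
// The orchestration code depends on this interface, never on a concrete
// HTTP client, so local tests can inject a deterministic stub.
interface ExchangeRateTool {
  getRate(from: string, to: string): Promise<number>;
}

// Canned implementation for local tests: no network, fully deterministic.
const stubTool: ExchangeRateTool = {
  async getRate() {
    return 1.1;
  },
};

// Example orchestration logic that a real agent tool handler might wrap.
async function summarizeRate(tool: ExchangeRateTool): Promise<string> {
  const rate = await tool.getRate("EUR", "USD");
  return `1 EUR = ${rate} USD`;
}
```

In production you would pass a live implementation that makes the real call; the test harness only ever sees the stub.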

Next Steps

  • Add tool calling and mock the tool responses locally before hitting real APIs
  • Move these assertions into Jest or Vitest so they run in CI
  • Log message history for each run so you can debug prompt regressions quickly
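For the logging bullet above, one lightweight option is a JSONL run log you can diff between runs. A hedged sketch (file name and fields are illustrative):

```typescript
import { appendFileSync, readFileSync } from "node:fs";

type RunEntry = { prompt: string; response: string };

// Append one JSON line per agent run so prompt regressions can be
// diffed later with ordinary text tools.
function logRun(file: string, entry: RunEntry): void {
  const line = JSON.stringify({ ...entry, at: new Date().toISOString() });
  appendFileSync(file, line + "\n");
}

// Read the log back for inspection or assertions.
function readRuns(file: string): RunEntry[] {
  return readFileSync(file, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((l) => JSON.parse(l) as RunEntry);
}
```

Calling `logRun` after each `agent.run(...)` in the harness gives you a per-run history without touching the agent code itself.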

By Cyprian Aarons, AI Consultant at Topiax.
