AutoGen Tutorial (TypeScript): testing agents locally for advanced developers
This tutorial shows you how to run and test AutoGen agents locally in TypeScript without wiring up a full app first. You need this when you want fast iteration, deterministic debugging, and a clean way to validate agent behavior before pushing anything into a service or CI pipeline.
What You'll Need
- Node.js 18+ and npm
- A TypeScript project with `ts-node` or `tsx`
- `@autogenai/autogen` installed
- An OpenAI API key exported as `OPENAI_API_KEY`
- A local terminal where you can run the script repeatedly
- Basic familiarity with AutoGen agents, messages, and model clients
Step-by-Step
- Start with a minimal TypeScript project and install the packages you need. If you already have a repo, just add AutoGen and a TypeScript runner so you can execute agent code directly.

  ```bash
  mkdir autogen-local-testing
  cd autogen-local-testing
  npm init -y
  npm install @autogenai/autogen dotenv
  npm install -D typescript tsx @types/node
  npx tsc --init --rootDir src --outDir dist --module nodenext --target es2022 --moduleResolution nodenext
  mkdir src
  ```
- Add your API key to a local environment file. Keep this out of source control so your tests stay portable across machines and environments.

  ```bash
  cat > .env << 'EOF'
  OPENAI_API_KEY=your_openai_api_key_here
  EOF
  ```
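To make sure the key actually stays out of source control, add the env file to `.gitignore` (this assumes the project is, or will become, a git repository):

```shell
# Ignore the local env file so the API key never lands in git history.
printf '.env\n' >> .gitignore
```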
- Create a single-agent smoke test first. This is the fastest way to confirm your model client, API key, and local runtime are wired correctly before adding multi-agent orchestration.

  ```ts
  // src/smoke-test.ts
  import "dotenv/config";
  import { AssistantAgent } from "@autogenai/autogen";

  async function main() {
    const agent = new AssistantAgent({
      name: "tester",
      model: "gpt-4o-mini",
      systemMessage: "You are a concise test assistant.",
    });

    const result = await agent.run("Reply with exactly: local test passed");
    console.log(result.messages.at(-1)?.content);
  }

  main().catch(console.error);
  ```
- Run the smoke test locally. If this fails, fix the runtime issue here before moving on; do not debug multi-agent flows with a broken base setup.

  ```bash
  npx tsx src/smoke-test.ts
  ```
- Add an advanced local test that uses two agents and an in-memory conversation loop. This is the pattern you want for validating handoffs, critique flows, or reviewer/executor setups before shipping them into production.

  ```ts
  // src/two-agent-test.ts
  import "dotenv/config";
  import { AssistantAgent } from "@autogenai/autogen";

  async function main() {
    const writer = new AssistantAgent({
      name: "writer",
      model: "gpt-4o-mini",
      systemMessage: "Write short implementation notes.",
    });
    const reviewer = new AssistantAgent({
      name: "reviewer",
      model: "gpt-4o-mini",
      systemMessage: "Review for correctness and missing edge cases.",
    });

    const draft = await writer.run("Draft a note on testing AutoGen agents locally.");
    const review = await reviewer.run(
      `Review this draft:\n\n${draft.messages.at(-1)?.content ?? ""}`
    );

    console.log("DRAFT:\n", draft.messages.at(-1)?.content);
    console.log("\nREVIEW:\n", review.messages.at(-1)?.content);
  }

  main().catch(console.error);
  ```
- Add assertions so your local test becomes repeatable instead of just readable. In practice, this is what turns an agent demo into something you can run during development or in CI.

  ```ts
  // src/assertion-test.ts
  import "dotenv/config";
  import assert from "node:assert/strict";
  import { AssistantAgent } from "@autogenai/autogen";

  async function main() {
    const agent = new AssistantAgent({
      name: "tester",
      model: "gpt-4o-mini",
      systemMessage: "Reply with exact phrases when asked.",
    });

    const result = await agent.run("Reply with exactly: local test passed");
    const text = String(result.messages.at(-1)?.content ?? "");

    assert.match(text.toLowerCase(), /local test passed/);
    console.log("Assertion passed:", text);
  }

  main().catch((err) => {
    console.error(err);
    process.exit(1);
  });
  ```
Testing It
Run each script separately so you can isolate failures quickly. The smoke test should print the exact phrase you requested, while the two-agent version should show both a draft and a review response. If the assertion test passes, you now have a basic local harness that proves your agent can respond consistently enough for development workflows.
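Once each script passes individually, you can chain them in `package.json` so a single command runs the whole sequence and stops at the first failure (the script names here are illustrative, not a convention AutoGen requires):

```json
{
  "scripts": {
    "test:smoke": "tsx src/smoke-test.ts",
    "test:agents": "tsx src/two-agent-test.ts",
    "test:assert": "tsx src/assertion-test.ts",
    "test:all": "npm run test:smoke && npm run test:agents && npm run test:assert"
  }
}
```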
If something breaks, check these first:
- `OPENAI_API_KEY` is set in the shell running `tsx`
- Your installed package version matches the import path used above
- The model name is valid for your account
- You are not mixing CommonJS and ESM settings in `tsconfig.json`
Next Steps
- Add structured output validation with Zod so responses become machine-checkable.
- Wrap these scripts in `npm test` or a small Vitest suite for repeatable regression testing.
- Extend the two-agent pattern into tool use, memory, and handoff workflows once the local baseline is stable.
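The structured-output idea can be sketched without any new dependency: parse the reply as JSON and run it through a type guard. Zod replaces this hand-rolled guard with declarative schemas and better error messages, but the shape of the check is the same. The `ReviewNote` type below is invented for illustration, not part of AutoGen.

```typescript
// Structured-output check: parse the model's reply as JSON and verify its shape.
// A hand-rolled guard standing in for a Zod schema; ReviewNote is a made-up shape.
interface ReviewNote {
  verdict: "pass" | "fail";
  issues: string[];
}

function parseReviewNote(raw: string): ReviewNote | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // model did not return valid JSON
  }
  if (typeof data !== "object" || data === null) return null;
  const obj = data as Record<string, unknown>;
  if (obj.verdict !== "pass" && obj.verdict !== "fail") return null;
  if (!Array.isArray(obj.issues) || !obj.issues.every((i) => typeof i === "string")) {
    return null;
  }
  return { verdict: obj.verdict, issues: obj.issues as string[] };
}

// A well-formed reply parses; malformed input is rejected instead of throwing.
console.log(parseReviewNote('{"verdict":"pass","issues":[]}'));
console.log(parseReviewNote("not json"));
```

Feeding `result.messages.at(-1)?.content` through a guard like this turns "the agent said something" into "the agent said something my pipeline can consume".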
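For a repeatable suite, Node's built-in `node:test` runner works with zero extra dependencies; moving to Vitest later is mostly a matter of changing imports. The sketch below extracts the check from the assertion test into a pure helper so it can be unit-tested without an API key; `containsExactPhrase` is a name invented here, and wiring it to a live `agent.run` call follows the assertion-test pattern above.

```typescript
// src/agent-output.test.ts
// A minimal suite using Node's built-in test runner (Node 18+).
import { test } from "node:test";
import assert from "node:assert/strict";

// Pure helper: the check from assertion-test.ts, extracted so it can be
// tested without calling the model.
export function containsExactPhrase(reply: unknown, phrase: string): boolean {
  return String(reply ?? "").toLowerCase().includes(phrase.toLowerCase());
}

test("accepts a reply containing the expected phrase", () => {
  assert.ok(containsExactPhrase("Local test passed!", "local test passed"));
});

test("rejects an unrelated reply", () => {
  assert.ok(!containsExactPhrase("something else entirely", "local test passed"));
});

test("handles missing content safely", () => {
  assert.ok(!containsExactPhrase(undefined, "local test passed"));
});
```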
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.