AutoGen Tutorial (TypeScript): mocking LLM calls in tests for intermediate developers
This tutorial shows how to replace real LLM calls with deterministic mocks in AutoGen TypeScript tests. You need this when you want fast unit tests, stable CI, and zero dependency on API availability or token usage.
What You'll Need
- Node.js 18+
- TypeScript 5+
- `@autogenai/autogen`
- A test runner like `vitest`
- Optional: `dotenv` if you still load real keys in non-test environments
- No OpenAI API key required for the mocked test path
Install the packages:
```shell
npm install @autogenai/autogen
npm install -D vitest typescript @types/node
```
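Vitest picks up `*.test.ts` files with zero configuration, but if you prefer to be explicit, a minimal config might look like this (a sketch; adjust the `include` glob to your own layout):

```typescript
// vitest.config.ts -- optional, minimal config (sketch).
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Only files under test/ ending in .test.ts are treated as tests.
    include: ["test/**/*.test.ts"],
  },
});
```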
Step-by-Step
- Create a small agent wrapper that accepts an LLM client as a dependency.
This is the key move: don't construct the model client inside your business logic; inject it instead, so tests can swap in a mock.
```typescript
// src/assistant.ts
import { AssistantAgent } from "@autogenai/autogen";

export type LlmClient = {
  create: (params: {
    messages: Array<{ role: string; content: string }>;
    model?: string;
  }) => Promise<{ choices: Array<{ message: { content: string } }> }>;
};

export function buildAssistant(llmClient: LlmClient) {
  return new AssistantAgent({
    name: "support_agent",
    modelClient: llmClient,
    systemMessage: "You are a concise support assistant.",
  });
}
```
- Add a mock client that returns fixed responses based on the prompt.
This keeps your tests deterministic and lets you verify downstream behavior without hitting a real model endpoint.
```typescript
// test/mockLlmClient.ts
import type { LlmClient } from "../src/assistant";

export function createMockLlmClient(): LlmClient {
  return {
    async create({ messages }) {
      const last = messages[messages.length - 1]?.content ?? "";
      if (last.includes("refund")) {
        return {
          choices: [{ message: { content: "Refunds are processed within 5 business days." } }],
        };
      }
      return {
        choices: [{ message: { content: "I can help with that." } }],
      };
    },
  };
}
```
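If the canned-reply branching grows beyond a couple of cases, it helps to pull it into a pure function so the routing itself is testable without any async plumbing. This is a sketch of that refactor; `routeReply` and `createRoutedMockClient` are hypothetical helpers, not AutoGen APIs:

```typescript
// Pure routing: prompt text in, canned reply out. Hypothetical helper,
// not part of @autogenai/autogen -- just a refactor of the mock above.
export function routeReply(prompt: string): string {
  if (prompt.includes("refund")) {
    return "Refunds are processed within 5 business days.";
  }
  return "I can help with that.";
}

// The mock client then becomes a thin async wrapper around the pure function.
type ChatMessage = { role: string; content: string };

export function createRoutedMockClient() {
  return {
    async create({ messages }: { messages: ChatMessage[] }) {
      const last = messages[messages.length - 1]?.content ?? "";
      return { choices: [{ message: { content: routeReply(last) } }] };
    },
  };
}
```

The pure function can be asserted on directly, while the async wrapper stays trivial enough not to need tests of its own.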
- Write a test that exercises your agent through the mocked client.
The point here is to test your orchestration code, not the model itself, so assert on the returned text and any branching logic you care about.
```typescript
// test/assistant.test.ts
import { describe, expect, it } from "vitest";
import { buildAssistant } from "../src/assistant";
import { createMockLlmClient } from "./mockLlmClient";

describe("support agent", () => {
  it("returns the refund response from the mock", async () => {
    const assistant = buildAssistant(createMockLlmClient());
    const result = await assistant.run({
      messages: [{ role: "user", content: "I need a refund" }],
    });
    expect(result.messages.at(-1)?.content).toContain(
      "Refunds are processed within 5 business days."
    );
  });
});
```
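Beyond returning canned replies, a mock can also record what it was asked, which lets you assert on how your orchestration constructs prompts. A minimal sketch (`createSpyClient` is a hypothetical helper, not part of AutoGen):

```typescript
// A spy client: answers with a fixed reply and records every request.
// Hypothetical helper, not part of @autogenai/autogen.
type ChatMessage = { role: string; content: string };

export function createSpyClient(reply = "ok") {
  const calls: ChatMessage[][] = [];
  return {
    calls,
    async create({ messages }: { messages: ChatMessage[] }) {
      // Recording happens synchronously, before the returned promise settles.
      calls.push(messages);
      return { choices: [{ message: { content: reply } }] };
    },
  };
}
```

In a test you would pass this to `buildAssistant` and then assert on `spy.calls`, for example to check that the system message made it into the request.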
- For stronger isolation, mock at the module boundary instead of passing a fake client manually.
This is useful when your production code constructs AutoGen agents internally and you want to keep test setup minimal.
```typescript
// test/module-mock.test.ts
import { describe, expect, it, vi } from "vitest";

// vitest hoists vi.mock calls, so the import below receives the mocked module.
vi.mock("@autogenai/autogen", () => {
  class AssistantAgent {
    constructor(_: unknown) {}
    // Accept the (ignored) input so TypeScript allows the call below.
    async run(_input?: unknown) {
      return {
        messages: [{ role: "assistant", content: "Mocked module response" }],
      };
    }
  }
  return { AssistantAgent };
});

import { AssistantAgent } from "@autogenai/autogen";

describe("module mock", () => {
  it("uses the mocked AutoGen agent", async () => {
    const agent = new AssistantAgent({});
    const result = await agent.run({ messages: [{ role: "user", content: "hello" }] });
    expect(result.messages[0].content).toBe("Mocked module response");
  });
});
```
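If you are not on vitest, a hand-rolled factory seam achieves the same effect: production code asks the factory for an agent, and tests override the factory before running. This is a sketch under the assumption that you own the construction site; `agentFactory` and `installFakeAgent` are hypothetical names, not AutoGen APIs:

```typescript
// A swappable factory seam. Hypothetical pattern, not an AutoGen API.
type ChatMessage = { role: string; content: string };
type Agent = {
  run(input: { messages: ChatMessage[] }): Promise<{ messages: ChatMessage[] }>;
};

export const agentFactory = {
  // Production wiring would construct the real AssistantAgent here.
  create(): Agent {
    throw new Error("agentFactory.create not wired up");
  },
};

// Tests install a fake before exercising the code under test.
export function installFakeAgent(reply: string) {
  agentFactory.create = () => ({
    async run() {
      return { messages: [{ role: "assistant", content: reply }] };
    },
  });
}
```

The trade-off versus `vi.mock` is explicitness: the seam is visible in production code, but it works with any test runner.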
- Keep one integration test for the real model and mark everything else as unit tests.
Mocking should cover your branching, parsing, retries, and tool selection logic; one live test is enough to confirm your credentials and provider wiring still work.
```typescript
// src/real-agent.ts
import { AssistantAgent, OpenAIChatCompletionModel } from "@autogenai/autogen";

export function buildRealAssistant(apiKey: string) {
  return new AssistantAgent({
    name: "real_support_agent",
    modelClient: new OpenAIChatCompletionModel({
      apiKey,
      model: "gpt-4o-mini",
    }),
    systemMessage: "You are a concise support assistant.",
  });
}
```
Testing It
Run your tests with `npx vitest run`. The mocked tests should pass instantly and consistently because they never make network calls.
If one of them fails, check whether your assertions depend on exact phrasing from the mock response, or whether your code constructs the agent in a way that bypasses injection. For CI, keep mocked tests in the default suite and gate any live-model checks behind an environment flag like `RUN_LIVE_LLM_TESTS=1`.
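The flag check itself is one line, and vitest's `describe.runIf` can then skip the live suite cleanly. A sketch, assuming you adopt `RUN_LIVE_LLM_TESTS` as the flag name (the helper `liveTestsEnabled` is hypothetical):

```typescript
// Gate live-model tests behind an env flag. With vitest you would write:
//   describe.runIf(liveTestsEnabled())("live model", () => { /* ... */ });
export function liveTestsEnabled(
  env: Record<string, string | undefined> = process.env
): boolean {
  return env.RUN_LIVE_LLM_TESTS === "1";
}
```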
A good sanity check is to temporarily break the mock response text and confirm that only the expected test fails. That tells you your test is actually exercising the mocked path instead of accidentally calling a real provider.
Next Steps
- Mock tool calls next, not just text responses.
- Add snapshot tests for structured outputs like JSON extraction.
- Build a thin adapter around AutoGen so every agent in your codebase uses the same injectable model interface.
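The adapter idea in the last bullet can be as small as one interface plus one fake. A sketch; `ChatModel` and `FakeChatModel` are hypothetical names, not AutoGen exports:

```typescript
// One shared, injectable model interface for every agent in the codebase.
// Hypothetical adapter, not an AutoGen export.
export interface ChatModel {
  complete(prompt: string): Promise<string>;
}

// Test double: fixed reply, plus a synchronous record of the last prompt.
export class FakeChatModel implements ChatModel {
  lastPrompt = "";
  private readonly reply: string;

  constructor(reply: string) {
    this.reply = reply;
  }

  async complete(prompt: string): Promise<string> {
    // Runs before the returned promise settles, so tests can read it directly.
    this.lastPrompt = prompt;
    return this.reply;
  }
}
```

Production code depends only on `ChatModel`; real providers and fakes both slot in behind it, so every agent gets the same seam for free.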
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.