Haystack Tutorial (TypeScript): mocking LLM calls in tests for beginners
This tutorial shows you how to replace real LLM calls with deterministic mocks in your Haystack TypeScript tests. Mocking matters when your unit tests must be fast, offline, and stable, rather than burning API credits and failing whenever a model changes its behavior.
What You'll Need
- Node.js 18+ and npm
- A TypeScript project already using Haystack (`@haystack/core`)
- A test runner such as `vitest`
- No API key required for the mocked tests
- Optional: an LLM provider key if you want to compare mocked tests with live integration tests later
Step-by-Step
- Start with a small pipeline that calls an LLM component. The point is to test the orchestration code, not the model itself.
```typescript
import { defineComponent, Pipeline } from "@haystack/core";

const PromptBuilder = defineComponent({
  name: "PromptBuilder",
  inputs: ["topic"],
  outputs: ["prompt"],
  run({ topic }: { topic: string }) {
    return { prompt: `Write one sentence about ${topic}.` };
  },
});

const LLM = defineComponent({
  name: "LLM",
  inputs: ["prompt"],
  outputs: ["reply"],
  async run({ prompt }: { prompt: string }) {
    // In real code this would call a provider; stubbed here for brevity.
    return { reply: `REAL_MODEL_RESPONSE: ${prompt}` };
  },
});

export const pipeline = new Pipeline();
pipeline.addComponent("builder", PromptBuilder);
pipeline.addComponent("llm", LLM);
pipeline.connect("builder.prompt", "llm.prompt");
```
- In production code, keep the pipeline construction separate from the execution path. That makes it easy to swap the real component for a fake one in tests without touching business logic.
```typescript
import { pipeline } from "./pipeline";

export async function generateTopicSummary(topic: string) {
  const result = await pipeline.run({
    builder: { topic },
  });
  return result.llm.reply as string;
}
```
- Build a mock component for tests that returns fixed output based on the input. This is the cleanest way to avoid network calls while still exercising the Haystack wiring.
```typescript
import { defineComponent } from "@haystack/core";

export const MockLLM = defineComponent({
  name: "MockLLM",
  inputs: ["prompt"],
  outputs: ["reply"],
  async run({ prompt }: { prompt: string }) {
    if (prompt.includes("payments")) {
      return { reply: "Mocked answer about payments." };
    }
    return { reply: "Mocked generic answer." };
  },
});
```
- Create a test-specific pipeline that uses the mock instead of the real LLM. Your test should assert on exact strings so failures are obvious and debugging stays simple.
```typescript
import { defineComponent, Pipeline } from "@haystack/core";
import { describe, expect, it } from "vitest";
import { MockLLM } from "./mock-llm";

// Defined with defineComponent, like the production builder, so the
// test pipeline wires up the same way.
const TestPromptBuilder = defineComponent({
  name: "TestPromptBuilder",
  inputs: ["topic"],
  outputs: ["prompt"],
  run({ topic }: { topic: string }) {
    return { prompt: `Write one sentence about ${topic}.` };
  },
});

describe("generateTopicSummary", () => {
  it("returns mocked output without calling an API", async () => {
    const testPipeline = new Pipeline();
    testPipeline.addComponent("builder", TestPromptBuilder);
    testPipeline.addComponent("llm", MockLLM);
    testPipeline.connect("builder.prompt", "llm.prompt");

    const result = await testPipeline.run({
      builder: { topic: "payments" },
    });

    expect(result.llm.reply).toBe("Mocked answer about payments.");
  });
});
```
- If your code already depends on an abstraction layer, mock at that boundary instead of inside every test. This keeps your tests shorter and avoids duplicating pipeline setup everywhere.
```typescript
export interface TextGenerator {
  generate(prompt: string): Promise<string>;
}

export class MockTextGenerator implements TextGenerator {
  async generate(prompt: string): Promise<string> {
    return prompt.includes("payments")
      ? "Mocked answer about payments."
      : "Mocked generic answer.";
  }
}
```
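To show how the boundary pays off, here is a minimal sketch of constructor injection against that interface. The types are redefined so the snippet stands alone, and `SummaryService` is a hypothetical name, not part of the tutorial's codebase:

```typescript
interface TextGenerator {
  generate(prompt: string): Promise<string>;
}

class MockTextGenerator implements TextGenerator {
  async generate(prompt: string): Promise<string> {
    return prompt.includes("payments")
      ? "Mocked answer about payments."
      : "Mocked generic answer.";
  }
}

// Business logic depends only on the interface, so tests inject the
// mock while production injects a Haystack-backed implementation.
class SummaryService {
  constructor(private readonly generator: TextGenerator) {}

  async summarize(topic: string): Promise<string> {
    return this.generator.generate(`Write one sentence about ${topic}.`);
  }
}

const service = new SummaryService(new MockTextGenerator());
```

With this shape, swapping the real generator for the mock is a one-line change in the test's setup, and no pipeline wiring leaks into the tests at all.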
Testing It
Run your test suite with vitest or your preferred runner and confirm that no network traffic is generated during execution. If you want to be strict, disable outbound internet access in CI for unit-test jobs so any accidental real LLM call fails immediately.
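A lightweight local version of that CI guard is to replace the global `fetch` with a throwing stub before the suite runs. This is a sketch assuming Node 18+, where `fetch` is a global; it does not catch lower-level socket traffic:

```typescript
// Install at the start of a unit-test run (e.g. in a test setup file)
// so any accidental real HTTP call fails immediately.
export function disableNetwork(): void {
  globalThis.fetch = (() => {
    throw new Error("Network access is disabled in unit tests.");
  }) as typeof fetch;
}
```

In vitest, a setup file listed under `setupFiles` is a natural place to call this once for the whole suite.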
You should also add one assertion per important branch in your mock logic. For example, verify both the “payments” path and the default path so you know your orchestration handles different prompts correctly.
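The branch checks can be sketched without any Haystack wiring at all; `mockReply` below is a hypothetical standalone mirror of MockLLM's routing logic, so the example runs anywhere:

```typescript
// Mirrors the MockLLM branching from the tutorial.
function mockReply(prompt: string): string {
  return prompt.includes("payments")
    ? "Mocked answer about payments."
    : "Mocked generic answer.";
}

// One exact-string assertion per branch.
console.assert(mockReply("Write about payments.") === "Mocked answer about payments.");
console.assert(mockReply("Write about weather.") === "Mocked generic answer.");
```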
If a test starts failing after a prompt change, inspect the expected input passed into the mock first. In practice, most broken Haystack tests come from wiring mistakes, not from the mock itself.
Next Steps
- Split your suite into unit tests with mocks and integration tests with real providers.
- Add snapshot testing for generated prompts so you can catch prompt regressions early.
- Wrap Haystack components behind interfaces so swapping mocks stays trivial across services.
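The snapshot idea can be approximated with a plain golden-string check; vitest's `toMatchSnapshot` automates the same pattern. A minimal sketch, assuming the prompt template from the tutorial:

```typescript
// Prompt template under test, copied from the tutorial's builder.
function buildPrompt(topic: string): string {
  return `Write one sentence about ${topic}.`;
}

// The "golden" value is the last reviewed prompt. If it drifts, the
// check fails and forces a deliberate update rather than a silent change.
const GOLDEN_PROMPT = "Write one sentence about payments.";

if (buildPrompt("payments") !== GOLDEN_PROMPT) {
  throw new Error("Prompt changed; update the golden string intentionally.");
}
```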
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.