Haystack Tutorial (TypeScript): testing agents locally for beginners
This tutorial shows how to run and test a Haystack agent locally in TypeScript without wiring it into a full backend. You need this when you want fast iteration on prompts, tools, and agent behavior before shipping anything to production.
What You'll Need
- Node.js 18+ installed
- A TypeScript project with `ts-node` or `tsx`
- `@haystack-ai/core` installed
- `zod` installed for tool input validation
- An OpenAI API key set as `OPENAI_API_KEY`
- A code editor and terminal
- Basic familiarity with Haystack agents and tools
Step-by-Step
- Start with a clean TypeScript project and install the dependencies you need for local agent testing. Keep this minimal so the test surface stays small.

  ```bash
  npm init -y
  npm install @haystack-ai/core zod
  npm install -D typescript tsx @types/node
  npx tsc --init
  ```
- Create a tiny tool the agent can call during tests. For local testing, use deterministic tools first so you can tell whether the agent is choosing the right action.

  ```ts
  // tools.ts
  import { tool } from "@haystack-ai/core";
  import { z } from "zod";

  export const getPolicyStatus = tool({
    name: "get_policy_status",
    description: "Look up a policy status by policy number",
    parameters: z.object({
      policyNumber: z.string().min(5),
    }),
    execute: async ({ policyNumber }) => {
      return {
        policyNumber,
        status: "active",
        renewalDate: "2026-01-15",
      };
    },
  });
  ```
- Build an agent that can use the tool and answer in plain English. The key here is to keep the system prompt narrow so your local tests are easy to reason about.

  ```ts
  // agent.ts
  import { Agent, OpenAIChatGenerator } from "@haystack-ai/core";
  import { getPolicyStatus } from "./tools";

  const generator = new OpenAIChatGenerator({
    model: "gpt-4o-mini",
  });

  export const agent = new Agent({
    llm: generator,
    tools: [getPolicyStatus],
    systemPrompt:
      "You are a support agent for insurance policy lookup. Use tools when needed.",
  });
  ```
- Write a local test runner that sends a few prompts and prints the responses. This gives you a repeatable way to check whether the agent is calling tools correctly.

  ```ts
  // test-agent.ts
  import { agent } from "./agent";

  async function main() {
    const prompts = [
      "What is the status of policy POL12345?",
      "Check policy ABCDE and tell me if it is active.",
    ];
    for (const prompt of prompts) {
      const result = await agent.run(prompt);
      console.log("\nPROMPT:", prompt);
      console.log("ANSWER:", result.output);
    }
  }

  main().catch((error) => {
    console.error(error);
    process.exit(1);
  });
  ```
- Run the test locally and inspect both the answer and whether the tool was used. If you want better debugging, print intermediate steps or log tool calls inside the tool function.

  ```bash
  OPENAI_API_KEY=your_key_here npx tsx test-agent.ts
  ```
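If you want to log tool calls without touching the agent itself, one option is to wrap a tool's `execute` function in a small logging helper before handing it to `tool({...})`. The `withLogging` helper below is an illustrative sketch, not part of the Haystack API; it works with any plain async function:

```ts
// withLogging wraps an async "execute" function so every call and its
// result are logged to stderr, keeping stdout clean for agent answers.
type Execute<A, R> = (args: A) => Promise<R>;

function withLogging<A, R>(name: string, fn: Execute<A, R>): Execute<A, R> {
  return async (args: A) => {
    console.error(`[tool:${name}] args:`, JSON.stringify(args));
    const result = await fn(args);
    console.error(`[tool:${name}] result:`, JSON.stringify(result));
    return result;
  };
}

// Example: wrap a fake lookup so each call shows up in the terminal.
const lookup = withLogging(
  "get_policy_status",
  async ({ policyNumber }: { policyNumber: string }) => ({
    policyNumber,
    status: "active",
  })
);
```

Because the wrapper returns a function with the same signature, you can pass the wrapped version as the `execute` property with no other changes.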
Testing It
You should see one response per prompt, and each response should mention an active policy with the renewal date returned by the tool. If the model answers without using the tool, tighten your system prompt or make the user query more explicit.
A good local test also checks failure behavior. Try passing an invalid policy number like "ABC" and confirm Zod rejects it before the tool executes.
For repeatability, keep your prompts in a small fixture array and run them every time you change prompts or tools. That gives you fast feedback before you move into integration tests.
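One way to make the fixture idea concrete is to pair each prompt with expected substrings and check every answer against them. The `Fixture` shape and `checkAnswer` helper here are illustrative, not part of Haystack:

```ts
// A minimal fixture: a prompt plus substrings the answer should contain.
interface Fixture {
  prompt: string;
  mustContain: string[];
}

const fixtures: Fixture[] = [
  {
    prompt: "What is the status of policy POL12345?",
    mustContain: ["active", "2026-01-15"],
  },
  {
    prompt: "Check policy ABCDE and tell me if it is active.",
    mustContain: ["active"],
  },
];

// Returns the expected substrings missing from the answer, so a failing
// check tells you exactly what the agent's reply lacked.
function checkAnswer(answer: string, fixture: Fixture): string[] {
  const lower = answer.toLowerCase();
  return fixture.mustContain.filter((s) => !lower.includes(s.toLowerCase()));
}
```

In test-agent.ts you would then loop over `fixtures`, call `agent.run(fixture.prompt)`, and log any missing substrings instead of eyeballing every answer.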
Next Steps
- Add more tools, then test which ones the agent chooses under different prompts
- Capture structured outputs instead of plain text for easier assertions in automated tests
- Move from manual script runs to Jest or Vitest so you can run agent tests in CI
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.