Haystack Tutorial (TypeScript): adding tool use for beginners
This tutorial shows you how to add tool use to a Haystack TypeScript agent so it can call external functions instead of guessing. You need this when your assistant must fetch live data, query internal systems, or run deterministic business logic before answering.
What You'll Need
- Node.js 18+ installed
- A TypeScript project initialized with `npm` or `pnpm`
- Haystack JS/TS packages installed:
  - `@haystack/core`
  - `@haystack/openai`
- An OpenAI API key set in your environment
- A model that supports tool calling, such as `gpt-4o-mini`
- Basic familiarity with Haystack pipelines and chat messages
Step-by-Step
- Start by installing the packages and setting your API key. Keep the model name in an environment variable too, since you will likely swap it later when testing different models.

  ```sh
  npm install @haystack/core @haystack/openai
  export OPENAI_API_KEY="your-key-here"
  export OPENAI_MODEL="gpt-4o-mini"
  ```
- Define a real tool as a plain TypeScript function. The important part is the schema: the model needs a clear input shape, and your code needs deterministic output.

  ```ts
  // tools.ts -- a deterministic lookup backed by mock data.
  export function getPolicyStatus(policyNumber: string) {
    const mockDb: Record<string, string> = {
      "POL-1001": "Active",
      "POL-1002": "Lapsed",
      "POL-1003": "Pending Review",
    };
    return {
      policyNumber,
      status: mockDb[policyNumber] ?? "Not Found",
    };
  }
  ```
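Because the tool is an ordinary function, you can sanity-check it in isolation before a model ever calls it. A quick standalone check might look like this (the function is repeated here, with the same mock data as above, so the snippet runs on its own):

```typescript
// Standalone sanity check for the tool -- no model, no network.
// Repeats the mock data above so this file runs on its own.
function getPolicyStatus(policyNumber: string): { policyNumber: string; status: string } {
  const mockDb: Record<string, string> = {
    "POL-1001": "Active",
    "POL-1002": "Lapsed",
    "POL-1003": "Pending Review",
  };
  return { policyNumber, status: mockDb[policyNumber] ?? "Not Found" };
}

// Known IDs resolve to their stored status; unknown IDs fall back safely.
console.log(getPolicyStatus("POL-1002").status); // "Lapsed"
console.log(getPolicyStatus("POL-9999").status); // "Not Found"
```

If this deterministic layer misbehaves, no amount of prompt engineering will save the agent, so it is worth checking first.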
- Build a chat pipeline with a model that can call tools. In Haystack, you pass the tool to the generator and let the model decide when to use it.

  ```ts
  import { ChatMessage, Pipeline } from "@haystack/core";
  import { OpenAIChatGenerator } from "@haystack/openai";
  import { getPolicyStatus } from "./tools.js";

  const generator = new OpenAIChatGenerator({
    apiKey: process.env.OPENAI_API_KEY!,
    model: process.env.OPENAI_MODEL ?? "gpt-4o-mini",
  });

  const pipeline = new Pipeline();
  pipeline.addComponent("llm", generator);
  ```
- Add a simple tool-calling loop. The first response may request a tool; if it does, run the function and send the result back as a tool message.

  ```ts
  async function main() {
    const userMessage = ChatMessage.fromUser("What is the status of policy POL-1002?");
    const first = await pipeline.run({
      llm: {
        messages: [userMessage],
        tools: [
          {
            name: "getPolicyStatus",
            description: "Look up an insurance policy status by policy number.",
            parameters: {
              type: "object",
              properties: {
                policyNumber: { type: "string" },
              },
              required: ["policyNumber"],
              additionalProperties: false,
            },
          },
        ],
      },
    });
    console.log(first);
  }

  main();
  ```
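The schema above tells the model what shape to produce, but nothing forces it to comply. A defensive pattern is to validate arguments before executing the tool. Here is a minimal hand-rolled sketch; in a real project you would likely reach for a JSON Schema validator such as Ajv (an assumption, not part of this tutorial's stack):

```typescript
// Hand-rolled validation matching the schema declared above: an object with
// a required string property "policyNumber" and no additional properties.
type PolicyArgs = { policyNumber: string };

function parsePolicyArgs(raw: unknown): PolicyArgs {
  // Some APIs deliver tool arguments as a JSON string rather than an object.
  const value = typeof raw === "string" ? JSON.parse(raw) : raw;
  if (typeof value !== "object" || value === null) {
    throw new Error("tool arguments must be an object");
  }
  const record = value as Record<string, unknown>;
  if (typeof record.policyNumber !== "string") {
    throw new Error("policyNumber must be a string");
  }
  // Mirror additionalProperties: false from the schema.
  const extras = Object.keys(record).filter((k) => k !== "policyNumber");
  if (extras.length > 0) {
    throw new Error(`unexpected properties: ${extras.join(", ")}`);
  }
  return { policyNumber: record.policyNumber };
}

console.log(parsePolicyArgs('{"policyNumber":"POL-1001"}').policyNumber); // "POL-1001"
```

Rejecting malformed arguments here is much cheaper than debugging a tool that silently ran with garbage input.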
- Execute the tool yourself when the model asks for it, then send the result back into the conversation. This is the part beginners usually miss: tool use is not magic; it is a request-response loop. This snippet reuses `generator` and `getPolicyStatus` from the previous steps.

  ```ts
  async function runToolLoop() {
    const messages = [ChatMessage.fromUser("Check policy POL-1001")];
    const response = await generator.run({
      messages,
      tools: [
        {
          name: "getPolicyStatus",
          description: "Look up an insurance policy status by policy number.",
          parameters: {
            type: "object",
            properties: {
              policyNumber: { type: "string" },
            },
            required: ["policyNumber"],
            additionalProperties: false,
          },
        },
      ],
    });

    // If the model did not request a tool, it answered directly.
    const toolCall = response.replies?.[0]?.toolCalls?.[0];
    if (!toolCall) return console.log(response.replies?.[0]?.content);

    // Run the tool ourselves, then hand the result back as a tool message.
    const result = getPolicyStatus(toolCall.arguments.policyNumber);
    const finalResponse = await generator.run({
      messages: [
        ...messages,
        response.replies[0],
        ChatMessage.fromTool(JSON.stringify(result), toolCall.id),
      ],
    });
    console.log(finalResponse.replies?.[0]?.content);
  }

  runToolLoop();
  ```
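Under the hood, the whole exchange is just two model turns with a tool execution in between. The standalone sketch below stubs the model (`fakeModel` and its reply shape are illustrations, not Haystack's API) so the control flow of the loop is visible without any network calls:

```typescript
// The tool-use loop with a stubbed "model": turn 1 requests a tool,
// we execute it, turn 2 answers using the tool result.
type ToolCall = { id: string; name: string; arguments: { policyNumber: string } };
type Reply = { content: string | null; toolCalls?: ToolCall[] };

function getPolicyStatus(policyNumber: string) {
  const mockDb: Record<string, string> = {
    "POL-1001": "Active", "POL-1002": "Lapsed", "POL-1003": "Pending Review",
  };
  return { policyNumber, status: mockDb[policyNumber] ?? "Not Found" };
}

// Stub model: asks for the tool until a tool result appears in the history.
function fakeModel(history: string[]): Reply {
  const toolMessage = history.find((m) => m.startsWith("tool:"));
  if (!toolMessage) {
    return {
      content: null,
      toolCalls: [{ id: "call_1", name: "getPolicyStatus", arguments: { policyNumber: "POL-1001" } }],
    };
  }
  return { content: `Policy status: ${JSON.parse(toolMessage.slice(5)).status}` };
}

function runToolLoopSketch(): string {
  const history = ["user:Check policy POL-1001"];
  let reply = fakeModel(history);
  if (reply.toolCalls?.length) {
    const call = reply.toolCalls[0];
    const result = getPolicyStatus(call.arguments.policyNumber); // execute the tool ourselves
    history.push(`tool:${JSON.stringify(result)}`);              // feed the result back
    reply = fakeModel(history);                                  // second model turn
  }
  return reply.content ?? "";
}

console.log(runToolLoopSketch()); // "Policy status: Active"
```

Once this shape is clear, swapping the stub for a real generator is mostly a matter of message formats.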
Testing It
Run the script with a few different policy numbers and check that valid IDs return "Active", "Lapsed", or "Pending Review". Then try an unknown ID like `POL-9999` and confirm the tool returns "Not Found" instead of the model inventing an answer.
If you want to verify actual tool calling, watch for a first assistant message that contains a `toolCalls` entry. If your model just answers directly, your prompt may be too loose or your chosen model may not be configured for tools.
For production-style testing, log three things:
- user input
- tool arguments received from the model
- final assistant output
That gives you enough signal to catch bad schemas, malformed arguments, and accidental hallucinations.
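One lightweight way to capture those three signals is to wrap each tool in a logging decorator. The sketch below is illustrative; the record shape and `withLogging` name are assumptions, not a Haystack convention:

```typescript
// Wrap a tool so every invocation logs its arguments and result as one
// JSON line, easy to correlate with the user input and final output.
function withLogging<A, R>(name: string, fn: (args: A) => R): (args: A) => R {
  return (args: A): R => {
    const result = fn(args);
    console.log(JSON.stringify({ tool: name, args, result }));
    return result;
  };
}

// Hypothetical usage with a trivial tool:
const echo = withLogging("echo", (args: { text: string }) => args.text.toUpperCase());
console.log(echo({ text: "pol-1001" })); // "POL-1001"
```

Because the wrapper is transparent, you can apply it to every tool without touching the pipeline code.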
Next Steps
- Add multiple tools and route between them with stricter JSON schemas.
- Wrap real APIs behind tools, like claims lookup, customer profile search, or payment status checks.
- Add retries and validation around tool execution before sending results back to the model.
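The retry-and-validate idea from the last bullet can be sketched as a small wrapper. The names here are illustrative, not a Haystack API, and real code would add backoff and error typing:

```typescript
// Retry a tool call up to maxAttempts times, accepting a result only if it
// passes validation; otherwise rethrow the last failure.
async function runToolWithRetries<T>(
  execute: () => Promise<T>,
  validate: (result: T) => boolean,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown = new Error("no attempts made");
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const result = await execute();
      if (validate(result)) return result;
      lastError = new Error(`attempt ${attempt}: result failed validation`);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Hypothetical usage: a flaky tool that succeeds on the third try.
let attempts = 0;
runToolWithRetries(
  async () => {
    attempts += 1;
    if (attempts < 3) throw new Error("transient failure");
    return "Active";
  },
  (status) => status === "Active",
).then((status) => console.log(status, attempts)); // logs: Active 3
```

Only a validated result ever reaches the model, which keeps transient API failures from turning into confident wrong answers.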
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.