Haystack Tutorial (TypeScript): adding tool use for advanced developers
This tutorial shows you how to add tool use to a Haystack TypeScript pipeline so an LLM can call external functions, inspect results, and continue reasoning with structured outputs. You need this when your agent has to do more than chat: look up records, calculate values, hit internal APIs, or route work based on live data.
What You'll Need
- Node.js 18+
- TypeScript 5+
- A Haystack TypeScript project already set up
- `@haystack/core`
- `@haystack/openai`
- An OpenAI API key in `OPENAI_API_KEY`
- A terminal with `npm` or `pnpm`
Step-by-Step
- Start with a minimal Haystack setup and an OpenAI generator. The important part here is that you can already run a prompt through the model before adding tools.

```typescript
import { OpenAIChatGenerator } from "@haystack/openai";
import { ChatMessage } from "@haystack/core";

const generator = new OpenAIChatGenerator({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

const messages = [ChatMessage.fromUser("Say hello in one sentence.")];

const response = await generator.run({ messages });
console.log(response.replies[0].content);
```
- Define a real tool as a plain TypeScript function with a JSON schema. Keep the input contract strict; tool calls fail in ugly ways when arguments are vague or inconsistent.

```typescript
import { z } from "zod";

const accountLookupInput = z.object({
  accountId: z.string().min(1),
});

// Deriving the argument type from the schema keeps the function
// and the schema from drifting apart.
async function lookupAccount(args: z.infer<typeof accountLookupInput>) {
  const accounts: Record<string, { name: string; status: string; balance: number }> = {
    "A-1001": { name: "Jane Doe", status: "active", balance: 1240.55 },
    "A-2002": { name: "Mark Lee", status: "delinquent", balance: -88.12 },
  };
  return accounts[args.accountId] ?? { error: "account_not_found" };
}

export const accountLookupTool = {
  name: "lookup_account",
  description: "Look up a customer account by account ID.",
  parametersSchema: accountLookupInput,
  invoke: lookupAccount,
};
```
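The generator hands tool arguments back as parsed JSON of unknown shape, so it pays to narrow them at runtime before invoking anything. If you don't want to reach for zod at the call site, a plain type guard gives the same safety. A minimal, self-contained sketch (the guard name is illustrative, not part of Haystack):

```typescript
// The model returns tool arguments as parsed JSON of unknown shape.
type AccountLookupArgs = { accountId: string };

// Runtime type guard: narrows `unknown` to AccountLookupArgs.
function isAccountLookupArgs(value: unknown): value is AccountLookupArgs {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).accountId === "string" &&
    ((value as Record<string, unknown>).accountId as string).length > 0
  );
}

// Narrow before invoking the tool function.
const raw: unknown = JSON.parse('{"accountId":"A-1001"}');
if (isAccountLookupArgs(raw)) {
  console.log(raw.accountId); // safely typed as string here
}
```

The guard rejects missing, empty, and non-string `accountId` values, which is exactly the strict contract the schema above promises.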
- Attach the tool to the generator and let the model decide when to call it. In production, this is where you keep the model honest by giving it only the tools it should use.

```typescript
import { OpenAIChatGenerator } from "@haystack/openai";
import { ChatMessage } from "@haystack/core";

const generator = new OpenAIChatGenerator({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
  tools: [accountLookupTool],
});

const messages = [
  ChatMessage.fromSystem(
    "You are a banking assistant. Use tools when you need account data."
  ),
  ChatMessage.fromUser("What is the balance for account A-1001?"),
];

const response = await generator.run({ messages });
console.log(JSON.stringify(response.replies[0], null, 2));
```
- Build the tool loop so the model can ask for data, receive the result, and produce a final answer. This is the core pattern for advanced agent work; one model turn is rarely enough.

```typescript
import { ChatMessage } from "@haystack/core";

const first = await generator.run({ messages });
const assistantReply = first.replies[0];

if (!assistantReply.toolCalls?.length) {
  console.log(assistantReply.content);
} else {
  const toolResults = await Promise.all(
    assistantReply.toolCalls.map(async (call) => ({
      toolCallId: call.id,
      // Validate the model-supplied arguments instead of blindly casting.
      content: await lookupAccount(accountLookupInput.parse(call.arguments)),
    }))
  );

  const followUpMessages = [
    ...messages,
    assistantReply,
    ChatMessage.fromToolResults(toolResults),
  ];

  const finalResponse = await generator.run({ messages: followUpMessages });
  console.log(finalResponse.replies[0].content);
}
```
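A production loop also needs a turn cap so a misbehaving model can't ping-pong between tool calls forever. The shape of that loop can be sketched in isolation with a stubbed generator; the types and the `runLoop`/`fakeGenerate` helpers below are simplified assumptions for illustration, not the Haystack API:

```typescript
// Simplified stand-ins; the real Haystack chat types differ.
type ToolCall = { id: string; name: string; arguments: unknown };
type Reply = { content: string; toolCalls?: ToolCall[] };

// Stubbed generator: requests one tool call, then answers using the result.
function fakeGenerate(turn: number, toolOutput?: string): Reply {
  if (turn === 0) {
    return {
      content: "",
      toolCalls: [{ id: "t1", name: "lookup_account", arguments: { accountId: "A-1001" } }],
    };
  }
  return { content: `The balance is ${toolOutput}.` };
}

// The loop shape: generate, execute requested tools, feed results back, cap turns.
function runLoop(maxTurns = 5): string {
  let toolOutput: string | undefined;
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = fakeGenerate(turn, toolOutput);
    if (!reply.toolCalls?.length) return reply.content; // final answer, stop here
    toolOutput = "1240.55"; // stand-in for actually invoking the tool
  }
  throw new Error("tool_loop_exceeded_max_turns");
}

console.log(runLoop()); // The balance is 1240.55.
```

Swapping `fakeGenerate` for real `generator.run` calls gives you the multi-turn agent, and the `maxTurns` throw gives you a deterministic failure instead of a runaway loop.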
- Add guardrails before exposing tools to real users. In bank and insurance flows, you want allowlists, argument validation, and deterministic failure modes instead of letting the model improvise.

```typescript
function assertAllowedAccountId(accountId: string) {
  if (!/^A-\d{4}$/.test(accountId)) {
    throw new Error("invalid_account_id_format");
  }
}

async function guardedLookupAccount(args: { accountId: string }) {
  assertAllowedAccountId(args.accountId);
  return lookupAccount(args);
}

export const guardedAccountLookupTool = {
  name: "lookup_account",
  description: "Look up a customer account by account ID.",
  parametersSchema: accountLookupInput,
  invoke: guardedLookupAccount,
};
```
Testing It
Run the script with `OPENAI_API_KEY` set and ask for an account that exists, like `A-1001`. You should see the model call `lookup_account`, then answer with the returned balance instead of guessing.
Test an invalid ID like `ABC123` and confirm your guardrail rejects it before any downstream logic runs. Also test an unknown but valid-format ID such as `A-9999`; your tool should return a structured not-found payload, not crash.
If you want confidence beyond manual testing, log every tool call with its arguments and response payload. That gives you an audit trail for debugging model behavior and for compliance review later.
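That logging idea can be a thin wrapper that records each call's name, arguments, and result to an audit trail. A self-contained sketch with an in-memory array standing in for your real log sink (the `withAudit` helper and entry shape are assumptions for illustration):

```typescript
type AuditEntry = { tool: string; args: unknown; result: unknown; at: string };

// In-memory trail; swap for your real log sink in production.
const auditLog: AuditEntry[] = [];

// Wraps a tool function so every call records name, arguments, and result.
function withAudit<A, T>(name: string, fn: (args: A) => T): (args: A) => T {
  return (args: A) => {
    const result = fn(args);
    auditLog.push({ tool: name, args, result, at: new Date().toISOString() });
    return result;
  };
}

// Stubbed lookup so the example runs standalone.
const auditedLookup = withAudit("lookup_account", (args: { accountId: string }) =>
  args.accountId === "A-1001" ? { balance: 1240.55 } : { error: "account_not_found" }
);

auditedLookup({ accountId: "A-1001" });
auditedLookup({ accountId: "A-9999" });
console.log(auditLog.length); // 2
```

Because both success and not-found payloads land in the trail, a reviewer can replay exactly what the model saw on each turn.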
Next Steps
- Add a second tool for policy lookup or claims status so the model can choose between multiple backends.
- Wrap tool execution in tracing and structured logging so every decision is observable.
- Move from single-turn execution to a reusable agent loop that handles retries, timeouts, and tool errors cleanly.
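Once there is more than one tool, a small registry keyed by tool name keeps dispatch deterministic and makes unknown names fail loudly. A sketch under stated assumptions (the registry, stub handlers, and `dispatchTool` name are illustrative, not a Haystack construct):

```typescript
type ToolFn = (args: unknown) => unknown;

// Registry keyed by tool name; the handlers here are stubs.
const toolRegistry = new Map<string, ToolFn>([
  ["lookup_account", () => ({ balance: 1240.55 })],
  ["claims_status", () => ({ status: "open" })],
]);

// Dispatch a model-requested call by name; unknown names fail loudly
// instead of silently falling through to a default.
function dispatchTool(name: string, args: unknown): unknown {
  const fn = toolRegistry.get(name);
  if (!fn) throw new Error(`unknown_tool:${name}`);
  return fn(args);
}

console.log(JSON.stringify(dispatchTool("claims_status", {}))); // {"status":"open"}
```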
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.