# LlamaIndex Tutorial (TypeScript): Building Custom Tools for Beginners
This tutorial shows how to build a custom tool in LlamaIndex TypeScript, register it with an agent, and call it from natural language. You'd reach for this when the built-in tools don't fit your workflow and you need the agent to call your own business logic, internal APIs, or database-safe helper functions.
## What You'll Need
- Node.js 18+ and npm
- A TypeScript project with a `tsconfig.json`
- Packages:
  - `llamaindex`
  - `dotenv`
- An OpenAI API key in `OPENAI_API_KEY`
- Basic familiarity with async/await and TypeScript types
Install the dependencies:
```bash
npm install llamaindex dotenv
```
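If you are starting from scratch, a minimal `tsconfig.json` like the following works for this tutorial. Treat it as one reasonable configuration, not the only valid one; adjust it to match your project's module setup.

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```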
## Step-by-Step
First, create a small TypeScript project entry point and load your environment variables. The only key you need for this tutorial is the model key used by LlamaIndex.

```typescript
import "dotenv/config";
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  model: "gpt-4o-mini",
});

console.log("LLM ready:", !!llm);
Next, define a custom tool as a plain async function with a clear input type. Keep the function deterministic and narrow in scope; tools should do one thing well, not become mini-applications.

```typescript
import { tool } from "llamaindex";

type TicketInput = {
  ticketId: string;
};

const getTicketStatus = tool(
  async ({ ticketId }: TicketInput) => {
    // Note: keys containing hyphens must be quoted in object literals.
    const statusMap: Record<string, string> = {
      "TCK-1001": "open",
      "TCK-1002": "waiting_on_customer",
      "TCK-1003": "resolved",
    };
    return {
      ticketId,
      status: statusMap[ticketId] ?? "not_found",
    };
  },
  {
    name: "get_ticket_status",
    description: "Look up the current status of a support ticket by ticket ID.",
    parameters: {
      type: "object",
      properties: {
        ticketId: { type: "string", description: "The support ticket ID" },
      },
      required: ["ticketId"],
    },
  }
);
```
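Notice that the tool body is just a pure function. A useful habit is to factor that logic out so you can unit-test it without an agent or an API key, and have the tool wrapper delegate to it. A sketch, with names of our own choosing rather than anything from the llamaindex API:

```typescript
// The tool body factored out as a standalone, agent-free function.
// This makes the lookup trivially unit-testable; the tool() wrapper
// would simply call lookupTicketStatus(ticketId).
type TicketStatus = { ticketId: string; status: string };

const STATUS_MAP: Record<string, string> = {
  "TCK-1001": "open",
  "TCK-1002": "waiting_on_customer",
  "TCK-1003": "resolved",
};

function lookupTicketStatus(ticketId: string): TicketStatus {
  // Fall back to a sentinel status rather than throwing, so the agent
  // gets a well-formed result it can explain to the user.
  return { ticketId, status: STATUS_MAP[ticketId] ?? "not_found" };
}
```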
Now build an agent and give it your tool. This is where LlamaIndex decides when to call your tool versus answering directly.

```typescript
import { AgentWorkflow } from "llamaindex";

const agent = new AgentWorkflow({
  llm,
  tools: [getTicketStatus],
});

async function main() {
  const result = await agent.run({
    input: "What's the status of ticket TCK-1002?",
  });
  console.log(result.response);
}

// Surface async errors instead of silently dropping them.
main().catch(console.error);
```
Add a second tool so you can see how custom tools scale in practice. A common pattern is to keep each tool focused on one domain action, then let the agent combine them.

```typescript
import { tool } from "llamaindex";

type PolicyInput = {
  policyNumber: string;
};

const getPolicySummary = tool(
  async ({ policyNumber }: PolicyInput) => {
    return {
      policyNumber,
      insuredName: "A. Moyo",
      productType: "home_insurance",
      active: true,
    };
  },
  {
    name: "get_policy_summary",
    description: "Fetch a short summary for an insurance policy.",
    parameters: {
      type: "object",
      properties: {
        policyNumber: { type: "string" },
      },
      required: ["policyNumber"],
    },
  }
);
```
Finally, register both tools in the same agent and ask a query that forces a choice. For reliable routing, make each description specific enough that the model can tell the tools apart.

```typescript
import { AgentWorkflow, OpenAI } from "llamaindex";
import "dotenv/config";

const llm = new OpenAI({ model: "gpt-4o-mini" });

const agent = new AgentWorkflow({
  llm,
  tools: [getTicketStatus, getPolicySummary],
});

async function main() {
  const result = await agent.run({
    input:
      "Check policy POL-2048 first, then tell me whether ticket TCK-1003 is resolved.",
  });
  console.log(result.response);
}

main().catch(console.error);
```
## Testing It
Run the script with `npx tsx index.ts`, or compile it with `tsc` first if that's how your project is set up. Start with simple prompts like "What's the status of ticket TCK-1001?" so you can confirm the agent is actually invoking your custom tool.

Then try prompts that should not use any tool, such as "Explain what a support ticket is," and make sure the model answers directly. If the agent keeps missing the tool, tighten the tool description and make parameter names more explicit.

A good smoke test is to log inside each tool before returning data. In production, replace those logs with structured tracing so you can see which requests called which tools and why.
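One lightweight way to do that logging without editing every tool body is a generic wrapper around the function you pass to `tool(...)`. This is a sketch under our own naming (`withCallLog` is hypothetical, not a llamaindex API), and an in-memory array stands in for real structured tracing:

```typescript
// A generic wrapper that records each invocation of an async tool body,
// so you can verify the agent actually called your tool with the input
// you expected. In production, emit structured traces instead of
// pushing strings into an array.
type AsyncFn<I, O> = (input: I) => Promise<O>;

function withCallLog<I, O>(
  name: string,
  fn: AsyncFn<I, O>,
  log: string[]
): AsyncFn<I, O> {
  return async (input: I) => {
    // Record the tool name and serialized input before delegating.
    log.push(`${name}: ${JSON.stringify(input)}`);
    return fn(input);
  };
}
```

You would wrap the function before handing it to `tool(...)`, keeping the tool definition itself unchanged.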
## Next Steps
- Add real I/O by replacing in-memory maps with database calls or internal REST endpoints.
- Learn how to compose tools with retrieval so your agent can answer both structured lookups and document questions.
- Add validation and error handling around tool inputs so bad user prompts don't turn into broken downstream calls.
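As a taste of that last point, here is a minimal validation sketch for the ticket-ID parameter. The `TCK-` plus four digits pattern is just this tutorial's convention, and `parseTicketId` is a name of our choosing:

```typescript
// Defensive validation for a tool input: normalize whitespace and
// casing, then reject anything that doesn't match the expected shape
// before it reaches a database or downstream API.
const TICKET_ID_PATTERN = /^TCK-\d{4}$/;

function parseTicketId(raw: string): string {
  const normalized = raw.trim().toUpperCase();
  if (!TICKET_ID_PATTERN.test(normalized)) {
    throw new Error(`Invalid ticket ID: "${raw}" (expected e.g. TCK-1001)`);
  }
  return normalized;
}
```

Calling this at the top of the tool body turns a garbled user prompt into a clear error message the agent can relay, instead of a failed downstream call.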
## Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.