LlamaIndex Tutorial (TypeScript): adding tool use for intermediate developers
This tutorial shows you how to add tool use to a TypeScript LlamaIndex agent so it can call external functions instead of only answering from retrieved context. You need this when your assistant has to do real work like fetching account data, looking up policy status, or calculating something from live inputs.
What You'll Need
- Node.js 18+
- A TypeScript project with `ts-node` or a build step
- These packages:
  - `llamaindex`
  - `zod`
  - `dotenv`
- An OpenAI API key in `OPENAI_API_KEY`
- Basic familiarity with async/await and TypeScript types
Install the dependencies:
```bash
npm install llamaindex zod dotenv
npm install -D typescript ts-node @types/node
```
Step-by-Step
1. Create a tool function that does one job well.

Keep tools narrow and deterministic. In production, a tool should wrap a single capability like “get customer balance” or “look up claim status,” not an entire workflow.
import "dotenv/config";
import { z } from "zod";
import { FunctionTool } from "llamaindex";
const getPolicyStatus = FunctionTool.from(
async ({ policyId }: { policyId: string }) => {
const mockDatabase = {
POL-1001: { status: "active", premium: 120.5 },
POL-1002: { status: "lapsed", premium: 88.0 },
};
return mockDatabase[policyId as keyof typeof mockDatabase] ?? {
status: "not_found",
premium: null,
};
},
{
name: "get_policy_status",
description: "Fetch the current status and premium for a policy by ID.",
parameters: z.object({
policyId: z.string().describe("The insurance policy ID"),
}),
}
);
2. Build an agent that is allowed to use tools.

This is the key difference from plain chat or retrieval. The agent gets a tool list and decides when to call one based on the user prompt.
```ts
import { OpenAI, ReActAgent } from "llamaindex";

const llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new ReActAgent({
  tools: [getPolicyStatus],
  llm,
});
```
3. Call the agent with a question that requires the tool.

Use a prompt that clearly needs external data. If the tool is wired correctly, the model should call it instead of guessing.
```ts
async function main() {
  const response = await agent.chat({
    message: "What is the status of policy POL-1001?",
  });
  console.log(response.message.content);
}

main().catch(console.error);
```
4. Add a second tool so you can see routing behavior.

Once one tool works, add another with a different purpose. This helps you verify that the agent chooses between tools based on intent, not just keyword matching.
```ts
import { FunctionTool } from "llamaindex";
import { z } from "zod";

const calculatePremiumIncrease = FunctionTool.from(
  async ({ currentPremium, percent }: { currentPremium: number; percent: number }) => {
    const increased = currentPremium + (currentPremium * percent) / 100;
    return {
      currentPremium,
      percent,
      newPremium: Number(increased.toFixed(2)),
    };
  },
  {
    name: "calculate_premium_increase",
    description: "Calculate a premium after applying a percentage increase.",
    parameters: z.object({
      currentPremium: z.number().describe("Current premium amount"),
      percent: z.number().describe("Increase percentage"),
    }),
  }
);
```
Then include both tools in the agent:
```ts
const agentWithTwoTools = new ReActAgent({
  tools: [getPolicyStatus, calculatePremiumIncrease],
  llm,
});
```
5. Ask for an answer that forces tool selection.

A good test prompt should make it obvious which tool was used. If you ask for a premium adjustment, the agent should route to the calculator; if you ask for policy state, it should route to the lookup tool.
```ts
async function testTools() {
  const statusResponse = await agentWithTwoTools.chat({
    message: "Check policy POL-1002 and tell me its status.",
  });
  const premiumResponse = await agentWithTwoTools.chat({
    message: "If my current premium is 120.5 and I increase it by 15%, what is the new amount?",
  });
  console.log("STATUS:", statusResponse.message.content);
  console.log("PREMIUM:", premiumResponse.message.content);
}

testTools().catch(console.error);
```
Testing It
Run the script with `npx ts-node your-file.ts`, or compile it with `tsc` and run the output with Node. For the policy lookup prompt, you should see an answer grounded in the mock database, not a generic guess.
For the premium calculation prompt, verify that the returned number matches your math exactly. If you want more confidence, log inside each tool function and confirm only the relevant one fires per request.
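One lightweight way to do that logging is to wrap each handler before passing it to FunctionTool.from, so the agent wiring stays untouched. This is a minimal sketch; withCallLogging is a hypothetical helper, not part of llamaindex.

```ts
// Hypothetical helper: logs every call to, and result from, a tool handler.
function withCallLogging<TArgs, TResult>(
  label: string,
  handler: (args: TArgs) => Promise<TResult>
): (args: TArgs) => Promise<TResult> {
  return async (args) => {
    console.log(`[tool:${label}] args:`, JSON.stringify(args));
    const result = await handler(args);
    console.log(`[tool:${label}] result:`, JSON.stringify(result));
    return result;
  };
}

// Usage: wrap the existing handler when building the tool; everything else stays the same.
// FunctionTool.from(withCallLogging("get_policy_status", handlerFn), { ...same options })
```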
If the agent answers without using tools, check three things:
- `OPENAI_API_KEY` is set
- The tool descriptions are specific enough (see the sketch after this list)
- Your prompt actually requires external data or computation
Next Steps
- Add structured outputs so your tool results come back in strict JSON
- Wrap real APIs instead of mock objects, then add retries and timeouts
- Put authorization checks inside each tool before returning sensitive data (the sketch after this list combines the last two ideas)
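The last two items can live inside a single handler. This is a sketch only: the claims endpoint URL, the userRole parameter, and the accepted roles are assumptions, and retries are left out for brevity.

```ts
import { z } from "zod";
import { FunctionTool } from "llamaindex";

const getClaimStatus = FunctionTool.from(
  async ({ claimId, userRole }: { claimId: string; userRole: string }) => {
    // Authorization check inside the tool, before any data leaves it.
    if (userRole !== "agent" && userRole !== "adjuster") {
      return { error: "not_authorized" };
    }

    // Timeout so a slow upstream API cannot hang the agent loop.
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 5_000);
    try {
      const res = await fetch(`https://claims.example.com/api/claims/${claimId}`, {
        signal: controller.signal,
      });
      if (!res.ok) return { error: `upstream_${res.status}` };
      return await res.json();
    } catch {
      return { error: "timeout_or_network" };
    } finally {
      clearTimeout(timer);
    }
  },
  {
    name: "get_claim_status",
    description: "Fetch the status of an insurance claim by its ID.",
    parameters: z.object({
      claimId: z.string().describe("The claim ID"),
      userRole: z.string().describe("Role of the user making the request"),
    }),
  }
);
```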
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.