# LlamaIndex Tutorial (TypeScript): adding tool use for beginners

This tutorial shows you how to add tool use to a LlamaIndex TypeScript agent so it can call external functions instead of guessing. You need this when your app has real actions, such as looking up an order, fetching a policy quote, or querying an internal API.
## What You'll Need

- Node.js 18+
- A TypeScript project with a `tsconfig.json`
- The LlamaIndex TypeScript package: `npm install llamaindex`
- An OpenAI API key: `export OPENAI_API_KEY="your-key"`
- A terminal that can run `ts-node` or `tsx`
- Basic familiarity with async/await and TypeScript classes
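If you are starting from an empty folder, a minimal `tsconfig.json` along these lines is enough for the snippets below; the exact compiler options are a suggestion, not a requirement:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true
  }
}
```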
## Step-by-Step

- Start with a minimal TypeScript file and create an LLM-backed agent. The key idea is that the agent will decide when to call tools based on the user prompt.

```typescript
import { OpenAI } from "llamaindex";

// Configure the LLM that will drive the agent.
const llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

async function main() {
  console.log("Agent ready:", llm.model);
}

main().catch(console.error);
```
- Define a real tool function. Keep the function small, deterministic, and easy to test; in production this would usually wrap a database query or internal service call.

```typescript
// Mock lookup that stands in for a real database or API call.
async function getPolicyStatus(policyId: string) {
  const mockPolicies: Record<string, string> = {
    "POL-1001": "Active",
    "POL-1002": "Pending payment",
    "POL-1003": "Cancelled",
  };
  return {
    policyId,
    status: mockPolicies[policyId] ?? "Not found",
  };
}
```
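Before wiring the function into an agent, it is worth calling it directly to confirm the lookup behaves as expected. A quick standalone check, repeating the same function so the snippet runs on its own:

```typescript
// Same mock lookup as in the step above, repeated so this file is self-contained.
async function getPolicyStatus(policyId: string) {
  const mockPolicies: Record<string, string> = {
    "POL-1001": "Active",
    "POL-1002": "Pending payment",
    "POL-1003": "Cancelled",
  };
  return { policyId, status: mockPolicies[policyId] ?? "Not found" };
}

async function check() {
  // A known ID resolves to its status; an unknown ID falls back to "Not found".
  console.log(await getPolicyStatus("POL-1002")); // { policyId: 'POL-1002', status: 'Pending payment' }
  console.log(await getPolicyStatus("POL-9999")); // { policyId: 'POL-9999', status: 'Not found' }
}

check().catch(console.error);
```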
- Wrap that function as a LlamaIndex tool with a clear name, description, and parameter schema. The description matters because the model uses it to decide when the tool is relevant, and the parameter schema tells the model what arguments to pass.

```typescript
import { FunctionTool } from "llamaindex";

const policyStatusTool = FunctionTool.from(
  async ({ policyId }: { policyId: string }) => {
    const mockPolicies: Record<string, string> = {
      "POL-1001": "Active",
      "POL-1002": "Pending payment",
      "POL-1003": "Cancelled",
    };
    return {
      policyId,
      status: mockPolicies[policyId] ?? "Not found",
    };
  },
  {
    name: "get_policy_status",
    description: "Look up the current status of an insurance policy by policy ID.",
    // JSON schema for the tool's arguments, so the model knows to supply policyId.
    parameters: {
      type: "object",
      properties: {
        policyId: {
          type: "string",
          description: "The policy ID, e.g. POL-1001.",
        },
      },
      required: ["policyId"],
    },
  }
);
```
- Create a ReAct agent and pass the tool into it. ReAct is the simplest path for beginners because it makes tool usage explicit and easy to debug.

```typescript
import { OpenAI, ReActAgent } from "llamaindex";

const llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

// The agent reasons step by step and calls tools when it decides they are needed.
const agent = new ReActAgent({
  llm,
  tools: [policyStatusTool],
});
```
- Ask a question that should trigger the tool, then print the response. If everything is wired correctly, the model will call your function and use its result in the final answer.

```typescript
async function main() {
  const response = await agent.chat({
    message: "What is the status of policy POL-1002?",
  });
  console.log(response.message.content);
}

main().catch(console.error);
```
- Put it all together in one executable file. This version includes everything you need in one place so you can run it directly and verify tool calling end-to-end.

```typescript
import { OpenAI, FunctionTool, ReActAgent } from "llamaindex";

// Tool: mock policy-status lookup, exposed to the model via name, description, and schema.
const policyStatusTool = FunctionTool.from(
  async ({ policyId }: { policyId: string }) => {
    const mockPolicies: Record<string, string> = {
      "POL-1001": "Active",
      "POL-1002": "Pending payment",
      "POL-1003": "Cancelled",
    };
    return {
      policyId,
      status: mockPolicies[policyId] ?? "Not found",
    };
  },
  {
    name: "get_policy_status",
    description: "Look up the current status of an insurance policy by policy ID.",
    parameters: {
      type: "object",
      properties: {
        policyId: { type: "string", description: "The policy ID, e.g. POL-1001." },
      },
      required: ["policyId"],
    },
  }
);

const llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new ReActAgent({
  llm,
  tools: [policyStatusTool],
});

async function main() {
  const response = await agent.chat({
    message: "What is the status of policy POL-1002?",
  });
  console.log(response.message.content);
}

main().catch(console.error);
```
## Testing It

Run the file with your preferred TypeScript runner:

```shell
npx tsx index.ts
```

You should see a response that mentions "Pending payment" for POL-1002. If you ask for POL-9999, the agent should report "Not found", which confirms it is actually using your tool instead of inventing an answer.

If you want to confirm tool selection more aggressively, change the prompt to something like "Check policy POL-1001 and explain what you found." The final answer should still be grounded in the tool output.
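Another way to see tool calls happening is to log inside the tool function itself. The helper below is a hypothetical wrapper (not part of LlamaIndex) that you could apply to any async tool function before handing it to `FunctionTool.from`:

```typescript
// Hypothetical helper: wraps an async tool function and logs each invocation.
function withCallLog<A, R>(
  name: string,
  fn: (args: A) => Promise<R>
): (args: A) => Promise<R> {
  return async (args: A) => {
    console.log(`[tool] ${name} called with`, JSON.stringify(args));
    const result = await fn(args);
    console.log(`[tool] ${name} returned`, JSON.stringify(result));
    return result;
  };
}

// Example: wrap a trivial lookup and call it directly.
const loggedLookup = withCallLog(
  "get_policy_status",
  async ({ policyId }: { policyId: string }) => ({
    policyId,
    status: policyId === "POL-1002" ? "Pending payment" : "Not found",
  })
);

loggedLookup({ policyId: "POL-1002" }).catch(console.error);
```

If no log line appears when the agent answers, the model answered without calling the tool, which usually points at a weak tool description.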
## Next Steps

- Add multiple tools, such as `get_claim_status` and `search_customer_profile`
- Replace the mock data with a real HTTP client or database query layer
- Learn how to add structured outputs so tool results are easier to validate
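As a sketch of that first next step, a second mock tool function could look like this. The `getClaimStatus` name and its claim IDs are invented for illustration; it would be wrapped with `FunctionTool.from` exactly like the policy tool:

```typescript
// Hypothetical second tool: mock claim-status lookup (illustrative data only).
async function getClaimStatus(claimId: string) {
  const mockClaims: Record<string, string> = {
    "CLM-2001": "Under review",
    "CLM-2002": "Approved",
  };
  return { claimId, status: mockClaims[claimId] ?? "Not found" };
}

async function demo() {
  console.log(await getClaimStatus("CLM-2002")); // { claimId: 'CLM-2002', status: 'Approved' }
}

demo().catch(console.error);
```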
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit