LlamaIndex Tutorial (TypeScript): adding tool use for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to wire tool use into a LlamaIndex TypeScript agent so it can call real functions instead of only generating text. You need this when your assistant must fetch live data, hit internal APIs, or execute business logic with controlled outputs.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with "type": "module" or compatible ESM setup
  • llamaindex installed
  • An OpenAI API key set in OPENAI_API_KEY
  • A basic understanding of LlamaIndex chat engines and agents
  • A tool you want the model to call, such as:
    • a database lookup
    • a pricing calculator
    • a policy status checker
    • an internal HTTP endpoint

Step-by-Step

  1. Install the package and set up your environment.
    Keep this in a clean project so you can verify the agent behavior without extra moving parts.
npm init -y
npm install llamaindex
npm install -D typescript tsx @types/node
  2. Create a typed tool function and wrap it with FunctionTool.
    The important part is giving the model a clear schema and a deterministic function body.
import { FunctionTool } from "llamaindex";

async function getPolicyStatus(policyId: string): Promise<string> {
  const mockDb: Record<string, string> = {
    "POL-1001": "active",
    "POL-1002": "pending_review",
    "POL-1003": "cancelled",
  };

  return mockDb[policyId] ?? "not_found";
}

export const policyStatusTool = FunctionTool.from(
  async ({ policyId }: { policyId: string }) => {
    const status = await getPolicyStatus(policyId);
    return JSON.stringify({ policyId, status });
  },
  {
    name: "get_policy_status",
    description: "Look up the current status of an insurance policy by policy ID.",
    parameters: {
      type: "object",
      properties: {
        policyId: {
          type: "string",
          description: "The insurance policy identifier, for example POL-1001.",
        },
      },
      required: ["policyId"],
      additionalProperties: false,
    },
  }
);
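Before wiring the tool into an agent, it is worth unit-testing the function body on its own. A minimal standalone sketch of the same lookup-and-serialize pattern (the mock table mirrors the one above; no llamaindex import needed, so it runs without an API key):

```typescript
// Standalone sketch: the lookup-and-serialize pattern the tool wrapper
// uses, extracted so it can be tested in isolation.
type PolicyStatus = "active" | "pending_review" | "cancelled" | "not_found";

const mockDb: Record<string, PolicyStatus> = {
  "POL-1001": "active",
  "POL-1002": "pending_review",
  "POL-1003": "cancelled",
};

export function policyStatusPayload(policyId: string): string {
  const status: PolicyStatus = mockDb[policyId] ?? "not_found";
  // Returning a compact JSON string keeps the tool output unambiguous
  // for the model and trivial to parse in tests.
  return JSON.stringify({ policyId, status });
}
```

Parsing the string back out is an easy way to assert that the payload shape never drifts from what the agent prompt expects.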
  3. Build an agent that is allowed to use tools.
    This is where tool use becomes real: the LLM can choose the function when the user asks for something that needs external data.
import { OpenAI, ReActAgent } from "llamaindex";
import { policyStatusTool } from "./policyStatusTool.js";

const llm = new OpenAI({
  model: "gpt-4o-mini",
});

const agent = new ReActAgent({
  llm,
  tools: [policyStatusTool],
});

const response = await agent.chat({
  message: "What's the status of policy POL-1002?",
});

console.log(response.message.content);
  4. Add a second tool to show how multi-tool routing works.
    In production, this is where you separate concerns like lookup, calculation, and formatting into different functions.
import { FunctionTool } from "llamaindex";

export const premiumQuoteTool = FunctionTool.from(
  async ({ age, coverage }: { age: number; coverage: number }) => {
    const baseRate = coverage * 0.012;
    const ageFactor = age > 50 ? 1.25 : 1;
    const premium = Math.round(baseRate * ageFactor * 100) / 100;

    return JSON.stringify({ age, coverage, premium });
  },
  {
    name: "calculate_premium_quote",
    description: "Calculate an estimated monthly premium quote.",
    parameters: {
      type: "object",
      properties: {
        age: { type: "number", description: "Applicant age in years" },
        coverage: { type: "number", description: "Coverage amount in USD" },
      },
      required: ["age", "coverage"],
      additionalProperties: false,
    },
  }
);
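The quote math is simple enough to verify by hand: for a 42-year-old with 250,000 of coverage, baseRate is 250000 × 0.012 = 3000, the age factor stays 1, so the premium is 3000.00; past age 50 the same coverage becomes 3000 × 1.25 = 3750.00. The same formula, extracted so it can be checked without an LLM in the loop:

```typescript
// Same formula as calculate_premium_quote, pulled out for unit testing.
export function premium(age: number, coverage: number): number {
  const baseRate = coverage * 0.012;     // 1.2% of the coverage amount
  const ageFactor = age > 50 ? 1.25 : 1; // 25% surcharge above age 50
  return Math.round(baseRate * ageFactor * 100) / 100; // round to cents
}
```

Keeping business math in a pure function like this means the agent layer only decides *when* to call it, never *how* it computes.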
  5. Run both tools in one agent and ask targeted questions.
    Tool selection should now depend on the user prompt, not hardcoded routing in your app.
import { OpenAI, ReActAgent } from "llamaindex";
import { policyStatusTool } from "./policyStatusTool.js";
import { premiumQuoteTool } from "./premiumQuoteTool.js";

const llm = new OpenAI({ model: "gpt-4o-mini" });

const agent = new ReActAgent({
  llm,
  tools: [policyStatusTool, premiumQuoteTool],
});

const statusResult = await agent.chat({
  message: "Check policy POL-1001 and tell me if it's active.",
});

const quoteResult = await agent.chat({
  message: "Estimate a monthly premium for a 42-year-old with $250000 coverage.",
});

console.log("STATUS:", statusResult.message.content);
console.log("QUOTE:", quoteResult.message.content);
  6. Add a small executable entrypoint so you can test locally without wiring it into an app yet.
    This keeps the feedback loop tight while you tune prompts and tool schemas.
import { OpenAI, ReActAgent } from "llamaindex";
import { policyStatusTool } from "./policyStatusTool.js";
import { premiumQuoteTool } from "./premiumQuoteTool.js";

async function main() {
  const llm = new OpenAI({ model: "gpt-4o-mini" });
  const agent = new ReActAgent({
    llm,
    tools: [policyStatusTool, premiumQuoteTool],
  });

  const result = await agent.chat({
    message:
      process.env.TEST_PROMPT ??
      "What's the status of policy POL-1003?",
  });

  console.log(result.message.content);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

Testing It

Run the entrypoint with OPENAI_API_KEY set and confirm the response includes data that clearly came from your tool output, not just model guesswork. Try prompts that force different tools, like asking for a policy lookup versus a premium estimate.

If the model answers without calling the tool, tighten the tool description and make the user prompt more explicit. If tool calls fail, check your parameter schema first; most issues come from mismatched names or invalid JSON schema fields.
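One cheap guard against schema drift is to assert, in a test, that every required parameter name is actually declared under properties. A rough sketch of such a check (the helper name and shape here are hypothetical, assuming plain JSON-schema-style objects like the ones in the steps above):

```typescript
// Hypothetical helper: verifies that a JSON-schema-style parameters
// object never requires a field it does not declare.
interface ParamSchema {
  type: "object";
  properties: Record<string, unknown>;
  required?: string[];
}

export function schemaIsConsistent(schema: ParamSchema): boolean {
  // Every required name must exist as a property key; a mismatch here
  // (e.g. policy_id vs policyId) is the classic silent tool-call failure.
  return (schema.required ?? []).every((name) => name in schema.properties);
}
```

Run this over each tool's parameters object in your test suite so a renamed argument fails fast instead of surfacing as a confusing model error.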

A good smoke test is to ask for an unknown policy ID and verify you get not_found, then ask for a known one like POL-1001 and verify you get active.

Next Steps

  • Add structured outputs so your tool responses are parsed into typed objects instead of raw JSON strings.
  • Wrap real HTTP calls with retries, timeouts, and circuit breakers before connecting tools to production systems.
  • Add authorization checks inside each tool so the agent cannot access data outside the current user’s scope.
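A rough shape for that retry-and-timeout wrapper, assuming a generic async operation (the attempt count and timeout values are placeholders, and circuit-breaker state is deliberately left out to keep the sketch short):

```typescript
// Races a promise against a timeout, clearing the timer either way so
// no stray rejection escapes the wrapper.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("timeout")), ms);
    p.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}

// Hypothetical wrapper: retries a flaky async call, one timeout per
// attempt, rethrowing the last error if every attempt fails.
export async function withRetries<T>(
  op: () => Promise<T>,
  { attempts = 3, timeoutMs = 2000 }: { attempts?: number; timeoutMs?: number } = {}
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(op(), timeoutMs);
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}
```

Wrapping the body of each FunctionTool in something like withRetries(() => fetchFromYourApi(...)) keeps one slow or flaky upstream call from stalling an entire agent turn.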


By Cyprian Aarons, AI Consultant at Topiax.
