LlamaIndex Tutorial (TypeScript): handling async tools for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to wire an async tool into a LlamaIndex TypeScript agent and call it correctly from an async workflow. You need this when your tool hits a database, calls an HTTP API, or does anything that returns a Promise instead of a plain value.

What You'll Need

  • Node.js 18+
  • A TypeScript project with tsconfig.json
  • These packages:
    • llamaindex
    • dotenv
  • An OpenAI API key set in your environment
  • Basic familiarity with async / await
  • A terminal that can run TypeScript, either through tsx, ts-node, or a build step

Step-by-Step

  1. Install the dependencies and set your environment variable.

    The LlamaIndex TypeScript SDK expects a chat model behind the agent, so make sure your OpenAI key is available before you run anything. I’m using tsx here because it keeps the example simple and executable.

npm install llamaindex dotenv
npm install -D tsx typescript @types/node
export OPENAI_API_KEY="your-key-here"
  2. Create an async tool function.

    The important part is that the function returns a Promise<string> and does real async work with await. In production, this could be a fetch call, database query, or internal service lookup.

// src/tools.ts
export async function getAccountStatus(accountId: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 500));

  const statusMap: Record<string, string> = {
    "1001": "active",
    "1002": "pending_review",
    "1003": "closed",
  };

  return statusMap[accountId] ?? "unknown_account";
}
  3. Wrap that function in a LlamaIndex tool.

    LlamaIndex tools need a name, description, parameter schema, and an async call method. Keep the description specific; the agent uses it to decide when to invoke the tool.

// src/accountTool.ts
import { FunctionTool } from "llamaindex";
import { getAccountStatus } from "./tools";

export const accountStatusTool = FunctionTool.from(
  async ({ accountId }: { accountId: string }) => {
    const status = await getAccountStatus(accountId);
    return `Account ${accountId} is ${status}`;
  },
  {
    name: "get_account_status",
    description: "Get the current status for a customer account by account ID.",
    parameters: {
      type: "object",
      properties: {
        accountId: {
          type: "string",
          description: "The customer account ID",
        },
      },
      required: ["accountId"],
    },
  }
);
  4. Build an agent that can use the tool.

    This is where beginners usually trip up: the tool itself is async, but the agent call also needs to be awaited. If you forget either one, you’ll end up with unresolved Promises or incomplete output.

// src/index.ts
import "dotenv/config";
import { OpenAI, ReActAgent } from "llamaindex";
import { accountStatusTool } from "./accountTool";

async function main() {
  const llm = new OpenAI({
    model: "gpt-4o-mini",
    temperature: 0,
  });

  const agent = new ReActAgent({
    tools: [accountStatusTool],
    llm,
  });

  const response = await agent.chat({
    message: "Check account 1002 and tell me its status.",
  });

  console.log(response.message);
}

main().catch(console.error);
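To make the missing-await pitfall concrete, here is a standalone sketch that needs no LlamaIndex at all; `lookupStatus` is a mock helper invented for illustration, standing in for any async tool function:

```typescript
// Standalone sketch of the missing-await pitfall (mock helper, not LlamaIndex itself).
async function lookupStatus(accountId: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulate I/O latency
  return accountId === "1002" ? "pending_review" : "unknown_account";
}

async function demo() {
  const forgot = lookupStatus("1002"); // no await: this is a pending Promise
  console.log(typeof forgot); // "object" — not the string you wanted

  const status = await lookupStatus("1002"); // awaited: resolves to the string
  console.log(status); // "pending_review"
}

demo();
```

If you see `[object Promise]` in your agent's output, you almost certainly dropped an `await` somewhere in this chain.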
  5. Run it and confirm the tool is being called.

    When it works, the model should route the request to your tool and then return a natural-language answer based on the tool result. If you want to confirm execution order, add logs inside getAccountStatus or inside the tool wrapper.

npx tsx src/index.ts

// Optional debugging tweak in src/tools.ts
export async function getAccountStatus(accountId: string): Promise<string> {
  console.log("Fetching status for:", accountId);
  await new Promise((resolve) => setTimeout(resolve, 500));

  const statusMap: Record<string, string> = {
    "1001": "active",
    "1002": "pending_review",
    "1003": "closed",
  };

  return statusMap[accountId] ?? "unknown_account";
}

Testing It

Run the script with an input that clearly maps to your tool, like “Check account 1002.” You should see your debug log first, then a final assistant response that includes pending_review. If you get an error about missing credentials, verify OPENAI_API_KEY is set in the same shell session where you run tsx.

If the agent answers without calling the tool, tighten the tool description so it’s unambiguous. For example, mention exactly what kind of lookup it performs and what parameter it expects. Also make sure your parameter schema matches the argument shape passed into the async function.
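If you would rather fail fast than chase a cryptic SDK error when the key is missing, a tiny startup guard helps; `requireEnv` is a hypothetical helper written for this tutorial, not part of LlamaIndex:

```typescript
// Hypothetical startup guard: throw a clear error when a required env var is absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set; export it or add it to your .env file.`);
  }
  return value;
}

// Call this in main() before constructing the OpenAI client, e.g.:
// const apiKey = requireEnv("OPENAI_API_KEY");
```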

Next Steps

  • Add more tools with different async backends, like PostgreSQL or REST APIs.
  • Learn how to chain multiple tools in one ReAct agent.
  • Add Zod validation around tool inputs before they reach your business logic.
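As a sketch of that last bullet, here is a hand-rolled input guard you can drop in front of getAccountStatus today; once you add the dependency, a Zod schema such as `z.object({ accountId: z.string() })` would replace it with less code:

```typescript
// Hand-rolled input guard; a Zod schema would replace this once the dependency is added.
interface AccountArgs {
  accountId: string;
}

function parseAccountArgs(raw: unknown): AccountArgs {
  if (
    typeof raw !== "object" ||
    raw === null ||
    typeof (raw as Record<string, unknown>).accountId !== "string"
  ) {
    throw new Error("Invalid tool input: expected { accountId: string }");
  }
  return raw as AccountArgs;
}
```

Validating at the tool boundary keeps bad model-generated arguments from ever reaching your business logic.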

By Cyprian Aarons, AI Consultant at Topiax.
