# LlamaIndex Tutorial (TypeScript): Handling Async Tools for Advanced Developers
This tutorial shows you how to build a LlamaIndex TypeScript agent that can call async tools correctly, wait for their results, and keep the tool contract clean under real-world latency. You need this when your tools hit databases, internal APIs, or external services and you cannot afford race conditions, half-finished responses, or brittle agent behavior.
## What You'll Need
- Node.js 18+ installed
- A TypeScript project with a `tsconfig.json` (a minimal example follows this list)
- `@llamaindex/core`
- `@llamaindex/openai`
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with LlamaIndex agents and tools
- A terminal capable of running `tsx` or `ts-node`
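If you are starting a project from scratch, a minimal `tsconfig.json` along these lines is enough to run the examples with `tsx`. The specific `target` and `module` values are a reasonable assumption for Node 18+, not a requirement of LlamaIndex:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}
```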
## Step-by-Step
- Start by installing the packages and setting up your environment. The important part here is that your model and tools are both wired for async execution from the beginning.

```bash
npm install @llamaindex/core @llamaindex/openai
npm install -D typescript tsx @types/node
```
- Create an async tool that returns structured data after awaiting an external call. In production, this could be a payment lookup, policy status check, or customer profile fetch; here we simulate latency with `setTimeout`.

```typescript
import { tool } from "@llamaindex/core/tools";
import { z } from "zod";

export const getAccountBalance = tool(
  async ({ accountId }: { accountId: string }) => {
    // Simulate external-call latency; swap in a real API call in production.
    await new Promise((resolve) => setTimeout(resolve, 500));
    return {
      accountId,
      currency: "USD",
      balance: 18420.55,
      status: "active",
    };
  },
  {
    name: "get_account_balance",
    description: "Fetch the current balance for a bank account.",
    parameters: z.object({
      accountId: z.string().describe("Bank account ID"),
    }),
  }
);
```
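In a real deployment, the simulated delay becomes an actual network call. Here is a hedged sketch of what that handler body might look like, assuming a hypothetical internal endpoint in `BALANCE_API_URL` (the URL, response shape, and 5-second budget are all illustrative assumptions). The `AbortController` keeps a hung upstream from stalling the whole agent run:

```typescript
// Hypothetical replacement for the setTimeout simulation above.
// BALANCE_API_URL and the response shape are assumptions, not a real API.
async function fetchBalance(accountId: string) {
  const controller = new AbortController();
  // Give the upstream a 5-second budget, then abort the request.
  const timer = setTimeout(() => controller.abort(), 5_000);
  try {
    const res = await fetch(
      `${process.env.BALANCE_API_URL}/accounts/${encodeURIComponent(accountId)}`,
      { signal: controller.signal }
    );
    if (!res.ok) {
      throw new Error(`Balance API returned ${res.status}`);
    }
    return (await res.json()) as {
      accountId: string;
      currency: string;
      balance: number;
      status: string;
    };
  } finally {
    clearTimeout(timer);
  }
}
```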
- Create a second async tool so the agent has to choose between multiple actions. This is where async handling matters most: the agent may call one tool, then another, and it must wait for each result before deciding what to do next.

```typescript
import { tool } from "@llamaindex/core/tools";
import { z } from "zod";

export const getPolicyStatus = tool(
  async ({ policyNumber }: { policyNumber: string }) => {
    // Simulate a slower upstream than the balance lookup.
    await new Promise((resolve) => setTimeout(resolve, 700));
    return {
      policyNumber,
      active: true,
      renewalDate: "2026-01-15",
      riskTier: "standard",
    };
  },
  {
    name: "get_policy_status",
    description: "Fetch the current status of an insurance policy.",
    parameters: z.object({
      policyNumber: z.string().describe("Insurance policy number"),
    }),
  }
);
```
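Inside the agent loop, the model decides whether to call these tools one after another or in a single batch. In your own orchestration code around the tools, though, two independent async calls do not have to be awaited sequentially. A minimal sketch of the difference, using plain stand-in functions rather than the wrapped tools:

```typescript
// Illustration only: plain async functions standing in for the two tool
// handlers above, so the sequential/concurrent difference is easy to see.
const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function getBalanceRaw(accountId: string) {
  await delay(500);
  return { accountId, balance: 18420.55 };
}

async function getPolicyRaw(policyNumber: string) {
  await delay(700);
  return { policyNumber, active: true };
}

async function demo() {
  // Sequential awaits: total latency is the sum (~1200ms).
  const balance = await getBalanceRaw("A-1001");
  const policy = await getPolicyRaw("P-2002");

  // Concurrent awaits: total latency is the max (~700ms).
  const [b, p] = await Promise.all([
    getBalanceRaw("A-1001"),
    getPolicyRaw("P-2002"),
  ]);
  console.log(balance, policy, b, p);
}

demo().catch(console.error);
```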
- Wire both tools into an agent using an OpenAI chat model. The key detail is that the agent runner handles the async boundary for you, so your tool functions can stay normal `async` functions instead of callback-style code.

```typescript
import { OpenAI } from "@llamaindex/openai";
import { AgentWorkflow } from "@llamaindex/core/agent";
import { getAccountBalance } from "./tools/getAccountBalance";
import { getPolicyStatus } from "./tools/getPolicyStatus";

const llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new AgentWorkflow({
  llm,
  tools: [getAccountBalance, getPolicyStatus],
});

async function main() {
  // The agent awaits each tool result before deciding on the next step.
  const response = await agent.run({
    input:
      "Check account A-1001 balance and policy P-2002 status, then summarize both.",
  });
  console.log(response);
}

main().catch(console.error);
```
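One practical addition at this layer is a per-call latency budget, so a slow dependency surfaces as a clean error instead of an agent run that hangs indefinitely. Below is a minimal, library-agnostic sketch; the helper name, the 5-second budget, and `slowLookup` are illustrative assumptions:

```typescript
// Generic helper: reject if an async tool handler exceeds its time budget.
// The budget value and error message are illustrative choices.
async function withTimeout<T>(
  promise: Promise<T>,
  ms: number,
  label: string
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms
    );
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

// Usage inside a tool handler (slowLookup is a hypothetical upstream call):
// const result = await withTimeout(slowLookup(id), 5_000, "slowLookup");
```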
- If you need tighter control, wrap your tool logic with explicit validation and error handling. This is the pattern I use when tools talk to internal systems, because failures should come back as clean tool errors, not broken agent runs.

```typescript
import { tool } from "@llamaindex/core/tools";
import { z } from "zod";

export const lookupCustomer = tool(
  async ({ customerId }: { customerId: string }) => {
    try {
      // Validate input before paying for the external call.
      if (customerId.trim().length < 3) {
        throw new Error("customerId is too short");
      }
      await new Promise((resolve) => setTimeout(resolve, 300));
      return {
        customerId,
        segment: "premium",
        lastLoginAt: "2026-04-20T10:15:00Z",
      };
    } catch (error) {
      // Surface failures as structured tool output, not a crashed run.
      return {
        error: error instanceof Error ? error.message : "Unknown error",
      };
    }
  },
  {
    name: "lookup_customer",
    description: "Fetch a customer profile safely.",
    parameters: z.object({
      customerId: z.string(),
    }),
  }
);
```
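The same defensive pattern extends to data coming back from the external system: parse the payload with Zod before handing it to the model, so a malformed upstream response becomes a structured tool error instead of leaking into the agent's context. A sketch, where `fetchCustomerRecord` is a hypothetical stand-in for the real upstream call:

```typescript
import { z } from "zod";

// Schema for what we expect the upstream system to return.
const CustomerRecord = z.object({
  customerId: z.string(),
  segment: z.enum(["standard", "premium"]),
  lastLoginAt: z.string(),
});

// fetchCustomerRecord is a stand-in for your real upstream call.
async function fetchCustomerRecord(customerId: string): Promise<unknown> {
  return {
    customerId,
    segment: "premium",
    lastLoginAt: "2026-04-20T10:15:00Z",
  };
}

export async function safeLookup(customerId: string) {
  const raw = await fetchCustomerRecord(customerId);
  const parsed = CustomerRecord.safeParse(raw);
  if (!parsed.success) {
    // Return a structured error instead of throwing into the agent run.
    return { error: `Malformed customer record: ${parsed.error.message}` };
  }
  return parsed.data;
}
```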
- Run the workflow against a realistic prompt and inspect whether the model waits for each async result before answering. You want to see one complete answer that reflects both tool outputs, not partial speculation.

```typescript
import { OpenAI } from "@llamaindex/openai";
import { AgentWorkflow } from "@llamaindex/core/agent";
import { getAccountBalance } from "./tools/getAccountBalance";
import { getPolicyStatus } from "./tools/getPolicyStatus";

const llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new AgentWorkflow({
  llm,
  tools: [getAccountBalance, getPolicyStatus],
});

async function main() {
  const response = await agent.run({
    input:
      "For account A-1001 and policy P-2002, give me a concise compliance-ready summary.",
  });
  // Pretty-print the full response so you can inspect the tool outputs.
  console.log(JSON.stringify(response, null, 2));
}

main().catch(console.error);
```
## Testing It

Run the script with `OPENAI_API_KEY` exported in your shell. A successful run should show the agent calling both tools and returning a final answer that includes the balance and policy status, without hanging or returning undefined fields.
Test one prompt at a time first, then try a multi-step prompt like “compare account A-1001 with policy P-2002 and explain any operational risk.” If your tool functions are truly async-safe, the agent will still wait for every result and produce a complete answer even when you add latency inside each handler.
A good sanity check is to temporarily increase one timeout to two seconds. The final response should still complete correctly after waiting for the slower tool.
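One way to confirm the agent actually waited is to time the run, reusing the `agent` instance defined earlier (an assumption of this sketch). With the 500ms and 700ms simulated delays, a sequential run should take at least roughly their sum plus model time:

```typescript
// Assumes the `agent` instance from the workflow file above is in scope.
async function timedMain() {
  const started = performance.now(); // performance is global in Node 18+
  const response = await agent.run({
    input:
      "For account A-1001 and policy P-2002, give me a concise summary.",
  });
  const elapsedMs = Math.round(performance.now() - started);
  // ~500ms + ~700ms of simulated latency means anything well under
  // ~1200ms likely skipped a tool result.
  console.log(`Run completed in ${elapsedMs}ms`);
  console.log(response);
}

timedMain().catch(console.error);
```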
## Next Steps

- Add retries and circuit breakers around tool calls that hit external systems (a minimal retry sketch follows this list)
- Return strongly typed JSON payloads from every tool and validate them with Zod
- Move from single-agent workflows to multi-agent orchestration when you need separation of duties
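For the first item, a minimal retry-with-backoff sketch; the attempt count, base delay, and `fetchBalance` usage are illustrative assumptions, not recommendations:

```typescript
// Minimal retry with exponential backoff for a flaky external call.
// Attempt count and base delay are illustrative defaults.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Back off: 200ms, 400ms, 800ms, ...
      const wait = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
  }
  throw lastError;
}

// Usage inside a tool handler (fetchBalance is hypothetical):
// const data = await withRetries(() => fetchBalance(accountId));
```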
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.