How to Fix 'tool calling failure during development' in LlamaIndex (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

If you’re seeing tool calling failure during development in LlamaIndex TypeScript, it usually means the model tried to invoke a tool but the agent couldn’t complete the tool-call round trip. In practice, this shows up when the tool schema is wrong, the tool handler throws, or the model/provider doesn’t support function calling the way your code expects.

The annoying part is that the stack trace often points at AgentRunner, OpenAIAgent, or FunctionTool, but the real bug is usually one layer below that.

The Most Common Cause

The #1 cause is a mismatch between the tool definition and what your handler actually accepts or returns.

In LlamaIndex TypeScript, a tool needs a valid schema and a handler that returns something serializable. If your handler expects arguments that don’t match the declared schema, or you return an object with circular references, the agent will fail during execution.

Here’s the broken pattern:

import { FunctionTool } from "llamaindex";

const getPolicyTool = FunctionTool.from(
  // ❌ handler reads `policyId`, but the schema below declares `id`
  async ({ policyId }) => {
    // ❌ returns a non-serializable class instance / wrong shape
    return new Date();
  },
  {
    name: "get_policy",
    description: "Fetch policy by ID",
    parameters: {
      type: "object",
      properties: {
        id: { type: "string" }, // ❌ schema says `id`
      },
      required: ["id"],
    },
  }
);

And here’s the fixed version:

import { FunctionTool } from "llamaindex";

const getPolicyTool = FunctionTool.from(
  async ({ id }: { id: string }) => {
    // fetchPolicyById is your own data-access helper
    const policy = await fetchPolicyById(id);

    // ✅ return plain JSON-serializable data
    return {
      id: policy.id,
      status: policy.status,
      premium: policy.premium,
    };
  },
  {
    name: "get_policy",
    description: "Fetch policy by ID",
    parameters: {
      type: "object",
      properties: {
        id: { type: "string" },
      },
      required: ["id"],
      additionalProperties: false,
    },
  }
);

The key difference is simple:

  Broken                                           Fixed
  Schema says `id`, handler reads something else   Schema and handler use the same field
  Returns class instances / complex objects        Returns plain JSON
  No validation guardrails                         Explicit parameter schema

If you see errors like:

  • Error: tool calling failure during development
  • Tool invocation failed
  • Invalid tool arguments
  • Failed to parse function call arguments

start here first.
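Mismatches like this can be caught before the agent ever runs. Here is a minimal sketch of a pre-flight check — validateToolArgs is a hypothetical helper, not a LlamaIndex API — that compares incoming args against the declared JSON-schema parameters:

```typescript
// Hypothetical pre-flight check: compare tool-call args against the
// declared JSON-schema parameters so name mismatches fail loudly
// instead of surfacing as opaque agent errors.
type ToolParamsSchema = {
  properties?: Record<string, unknown>;
  required?: string[];
};

function validateToolArgs(
  schema: ToolParamsSchema,
  args: Record<string, unknown>
): string[] {
  const problems: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) problems.push(`missing required field "${key}"`);
  }
  for (const key of Object.keys(args)) {
    if (schema.properties && !(key in schema.properties)) {
      problems.push(`unexpected field "${key}"`);
    }
  }
  return problems;
}

// The broken example above is flagged immediately:
const getPolicySchema = { properties: { id: {} }, required: ["id"] };
console.log(validateToolArgs(getPolicySchema, { policyId: "pol_123" }));
// → [ 'missing required field "id"', 'unexpected field "policyId"' ]
```

Running this once in a test catches the `id` / `policyId` drift without involving a model at all.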

Other Possible Causes

1. The provider/model does not support tool calling properly

Some models handle tool calling poorly or inconsistently, and some configurations disable it entirely.

// Broken
const llm = new OpenAI({
  model: "gpt-3.5-turbo", // often weaker for tool calling
});
// Better
const llm = new OpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

If you’re using Anthropic or another provider through LlamaIndex, confirm that the exact model supports tools in the SDK path you’re using.

2. Your agent is wired to tools, but the prompt never allows tool usage

If your system prompt discourages tools or your agent mode is wrong, you can get weird failures where the model produces malformed calls.

// Broken
const agent = new OpenAIAgent({
  llm,
  tools,
  systemPrompt: "Answer directly without using tools.",
});
// Better
const agent = new OpenAIAgent({
  llm,
  tools,
  systemPrompt:
    "Use tools when needed. If a user asks for live data, call the appropriate tool.",
});

Also check whether you are using OpenAIAgent versus another agent class that matches your provider and runtime expectations.

3. The tool throws at runtime

This is common when your handler depends on env vars, network access, or downstream APIs.

const getCustomerTool = FunctionTool.from(async ({ customerId }: { customerId: string }) => {
  const res = await fetch(`https://api.internal/customers/${customerId}`);

  if (!res.ok) {
    throw new Error("Customer API failed");
  }

  return await res.json();
}, {
  name: "get_customer",
  description: "Fetch a customer record by ID",
});

If that API fails, LlamaIndex surfaces it as a tool-calling failure.

Fix it by returning structured errors instead of throwing blindly:

return {
  ok: false,
  error: "Customer API failed",
};
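This pattern generalizes. Below is an illustrative wrapper — safeToolHandler and ToolResult are names I'm introducing here, not LlamaIndex APIs — that converts any thrown error into a structured result before the handler is registered as a tool:

```typescript
// Illustrative wrapper: a thrown error becomes { ok: false, error },
// so the agent loop receives data instead of an exception.
type ToolResult<T> = { ok: true; data: T } | { ok: false; error: string };

function safeToolHandler<A, T>(
  handler: (args: A) => Promise<T>
): (args: A) => Promise<ToolResult<T>> {
  return async (args) => {
    try {
      return { ok: true, data: await handler(args) };
    } catch (err) {
      return {
        ok: false,
        error: err instanceof Error ? err.message : String(err),
      };
    }
  };
}

// Usage: wrap the risky handler before passing it to FunctionTool.from
const safeGetCustomer = safeToolHandler(
  async ({ customerId }: { customerId: string }) => {
    const res = await fetch(`https://api.internal/customers/${customerId}`);
    if (!res.ok) throw new Error(`Customer API returned ${res.status}`);
    return await res.json();
  }
);
```

The model can then read the error field and decide what to do next, instead of the whole agent run aborting.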

4. The args object is nested incorrectly

The model may emit one shape while your function expects another.

// Broken handler expects flat args
async ({ query }: { query: string }) => {}

But your schema defines nested input:

parameters: {
  type: "object",
  properties: {
    input: {
      type: "object",
      properties: {
        query: { type: "string" },
      },
    },
  },
}

Make them match exactly. Tool-call schemas are not forgiving.
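For instance, if you keep the nested schema, the handler has to unwrap the same shape. In this sketch, runSearch is a stand-in for your actual retrieval function:

```typescript
// Handler matching the nested schema above: the top-level argument is
// `input`, and `query` lives one level down.
const runSearch = (query: string): string[] => []; // stand-in for real retrieval

const searchHandler = async ({ input }: { input: { query: string } }) => {
  return { query: input.query, results: runSearch(input.query) };
};
```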

How to Debug It

  1. Log the raw tool call payload

    • Print what LlamaIndex receives before invoking your function.
    • Check whether the argument names match your schema.
    • Look for missing required fields or nested shapes you didn’t expect.
  2. Test the handler outside the agent

    • Call the function directly with hardcoded args.
    • If this fails, it’s not an agent problem.
    • Example:
      await getPolicyTool.call({ id: "pol_123" });
      
  3. Reduce to one tool

    • Remove all other tools and keep only one.
    • This isolates routing issues from schema issues.
    • If one tool works alone but fails in a set, check naming collisions and descriptions.
  4. Check provider logs and SDK version

    • Confirm your LlamaIndex TypeScript package version.
    • Confirm your OpenAI/Anthropic SDK version matches what LlamaIndex expects.
    • Mismatched versions often produce opaque failures like:
      • Failed to parse assistant message
      • tool_calls missing
      • invalid_function_call_arguments

Prevention

  • Keep every tool schema explicit:

    • use required
    • use additionalProperties: false
    • keep argument names identical between schema and handler
  • Return plain JSON from every tool:

    • strings, numbers, booleans, arrays, objects
    • no class instances
    • no circular references
  • Add a unit test for each tool:

    expect(await getPolicyTool.call({ id: "pol_123" })).toMatchObject({
      id: "pol_123",
    });
    
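A tiny helper makes the "plain JSON only" rule enforceable in those tests — toPlainJson is an illustrative name, not a library function. A round trip through JSON.stringify throws on circular references and collapses Dates and class instances to their serialized form:

```typescript
// Illustrative guard: force a tool result through a JSON round trip.
// Circular references throw; Dates and class instances collapse to
// their JSON form, so whatever comes out is safe to hand to the agent.
function toPlainJson<T>(value: T): unknown {
  return JSON.parse(JSON.stringify(value));
}

const safe = toPlainJson({ when: new Date(0), premium: 120 });
// `when` is now the ISO string "1970-01-01T00:00:00.000Z", not a Date
```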

If you want to stop seeing tool calling failure during development, treat every tool like an API boundary. Validate input strictly, return simple output, and don’t assume the model will guess your schema correctly.



By Cyprian Aarons, AI Consultant at Topiax.
