How to Fix 'tool calling failure' in LlamaIndex (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
Tags: tool-calling-failure, llamaindex, typescript

What the error means

A “tool calling failure” in LlamaIndex TypeScript usually means the model tried to invoke a tool, but the agent/runtime could not produce a valid tool call or could not execute one. In practice, this shows up when you wire up an LLM that does not support function/tool calling, pass tools in the wrong shape, or use an agent class that expects structured tool calls but gets plain text back.

You’ll typically see it when using OpenAIAgent, FunctionAgent, or any workflow that depends on tool_calls metadata in the model response.
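For context, here is roughly the shape of an OpenAI-style assistant message that carries a structured tool call (illustrative wire format following OpenAI’s chat completions API, not a LlamaIndex type):

// Illustrative OpenAI-style assistant message containing a structured tool call.
const assistantMessage = {
  role: "assistant",
  content: null,
  tool_calls: [
    {
      id: "call_abc123",
      type: "function",
      function: {
        name: "lookup_policy",
        // arguments arrive as a JSON-encoded string the runtime must parse
        arguments: '{"policyId":"42"}',
      },
    },
  ],
};

If your model only ever fills in content and never emits tool_calls, there is nothing for the agent to execute.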

The Most Common Cause

The #1 cause is using a model that does not support native tool/function calling, or using the wrong provider configuration for the model you picked.

In LlamaIndex TypeScript, OpenAIAgent expects the underlying LLM to emit structured tool calls. If your model only returns text, you’ll get failures like:

  • Error: tool calling failure
  • No tool calls found in LLM response
  • Failed to parse tool call arguments

Wrong vs right

Broken pattern | Fixed pattern
Using a plain completion model with an agent that needs tool calls | Use a model/provider that supports tool calling
Passing tools but not enabling a compatible chat model | Use OpenAI, Anthropic, or another supported chat model wrapper
Expecting free-form text to become a tool call | Let the model emit structured function/tool calls
// ❌ Broken: model may not support native tool calling
import { OpenAIAgent } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct", // completion-style, bad fit for tool calling
});

const agent = new OpenAIAgent({
  tools: [myTool],
  llm,
});

// ✅ Fixed: use a chat model with tool calling support
import { OpenAIAgent } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";

const llm = new OpenAI({
  model: "gpt-4o-mini",
});

const agent = new OpenAIAgent({
  tools: [myTool],
  llm,
});

If you’re using Anthropic or another provider, make sure the exact wrapper supports tools in the version you installed. A lot of “tool calling failure” bugs are just provider mismatch bugs.
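For example, here is a minimal sketch of swapping in Anthropic (assumes @llamaindex/anthropic is installed; check that the model name and its tool support match the version you actually run):

import { Anthropic } from "@llamaindex/anthropic";

// Assumption: this model name is current and supports tool use in your account.
const llm = new Anthropic({
  model: "claude-3-5-sonnet-latest",
});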

Other Possible Causes

1) Tool schema is invalid or too loose

If your function parameters are not serializable into a valid schema, the model may produce arguments LlamaIndex cannot parse.

// ❌ Broken: ambiguous input shape
const myTool = {
  name: "lookup_policy",
  description: "Look up policy details",
  fn: async (input: any) => {
    return `Policy ${input.policyId}`;
  },
};

// ✅ Fixed: explicit typed args
const myTool = {
  name: "lookup_policy",
  description: "Look up policy details",
  fn: async ({ policyId }: { policyId: string }) => {
    return `Policy ${policyId}`;
  },
};

If you can define a Zod schema for your tool input, do it. Tool calling works better when argument names and types are explicit.
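Here is a minimal sketch using the tool() helper that recent llamaindex releases export, which accepts a Zod schema for parameters (assumption: your installed version has this helper; older versions define tools via FunctionTool with a JSON schema instead):

import { tool } from "llamaindex";
import { z } from "zod";

const lookupPolicy = tool({
  name: "lookup_policy",
  description: "Look up policy details",
  // Explicit argument names and types give the model a precise contract.
  parameters: z.object({
    policyId: z.string().describe("The policy identifier to look up"),
  }),
  execute: ({ policyId }) => `Policy ${policyId}`,
});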


2) The prompt is overriding tool usage

Sometimes the system prompt tells the assistant to answer directly, which fights against agent behavior.

// ❌ Broken prompt pushes direct answers only
const systemPrompt = `
Answer everything directly.
Never call tools.
`;

// ✅ Fixed prompt allows tool use
const systemPrompt = `
Use available tools when needed.
If external data is required, call the appropriate tool.
`;

This matters more than people think. If your prompt says “do not use tools,” don’t be surprised when OpenAIAgent never gets a valid function call.


3) Tool names collide or are malformed

Two tools with the same name, or names with unsupported characters, can break routing.

// ❌ Broken: duplicate / messy names
const tools = [
  { name: "get-policy", fn: async () => "a" },
  { name: "get-policy", fn: async () => "b" },
];

// ✅ Fixed: unique stable names
const tools = [
  { name: "get_policy_details", fn: async () => "a" },
  { name: "list_policy_documents", fn: async () => "b" },
];

Keep names stable across deployments. If you rename tools often, cached prompts and traces become harder to reason about.
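A cheap startup guard, in plain TypeScript with no LlamaIndex API involved, catches duplicates before the agent ever runs:

// Fail fast if two registered tools share a name.
const names = tools.map((t) => t.name);
const duplicates = names.filter((name, i) => names.indexOf(name) !== i);
if (duplicates.length > 0) {
  throw new Error(
    `Duplicate tool names: ${[...new Set(duplicates)].join(", ")}`
  );
}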


4) Version mismatch between LlamaIndex packages

This one bites teams hard. You install one version of llamaindex and another version of provider bindings like @llamaindex/openai, then runtime behavior changes.

{
  "dependencies": {
    "llamaindex": "^0.x.x",
    "@llamaindex/openai": "^0.y.y"
  }
}

Make sure these packages are compatible and upgrade them together. Tool-calling APIs changed across releases, and older provider wrappers may expect a different agent interface (for example, the newer FunctionAgent versus older patterns built around OpenAIAgent).
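To see which versions actually resolved in your install, list them together (npm shown; pnpm and yarn have equivalent commands):

npm ls llamaindex @llamaindex/openai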

How to Debug It

  1. Print the raw LLM response

    • Check whether the model returned tool_calls, JSON arguments, or just plain text (see the sketch after this list).
    • If there’s no structured call metadata, this is usually a model/provider issue.
  2. Verify the exact agent class

    • Confirm whether you’re using OpenAIAgent, FunctionAgent, or a workflow built on top of them.
    • Some classes expect native tool calling; others can work differently depending on provider support.
  3. Reduce to one simple tool

    • Replace all your tools with one deterministic function like echo_tool.
    • If that works, your original issue is likely schema/name/prompt related.
  4. Log versions and provider config

    • Print package versions for:
      • llamaindex
      • provider package like @llamaindex/openai
      • your SDK version if applicable
    • Also log:
      • model name
      • temperature
      • system prompt
      • full list of registered tools
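For step 1, a minimal sketch (assumes a chat-capable wrapper from @llamaindex/openai; whether chat params accept a tools field varies by version):

import { OpenAI } from "@llamaindex/openai";

const llm = new OpenAI({ model: "gpt-4o-mini" });

const response = await llm.chat({
  messages: [{ role: "user", content: "Look up policy 42" }],
  // Assumption: your llm wrapper accepts tools in chat params.
  tools: [myTool],
});

// If this prints plain text with no tool-call metadata,
// the agent has nothing to execute.
console.log(JSON.stringify(response.message, null, 2));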

Example debug snippet for step 4:

// Assuming `model`, `temperature`, and `tools` are the values from your agent/LLM config:
console.log({
  model,
  temperature,
  tools: tools.map((t) => t.name),
});

If you see something like:

  • Model is completion-only
  • Tools array is empty at runtime
  • Tool names are duplicated

you’ve found the bug path.

Prevention

  • Use chat models with native tool support for agents that need structured calls.
  • Define every tool with explicit input types and stable names.
  • Keep LlamaIndex core and provider packages on compatible versions; upgrade them together.
  • Add one integration test that asserts an agent can call a real tool and return structured output.

If you want fewer production surprises, treat tool calling like an API contract. The LLM has to emit valid structure, your schema has to accept it, and your runtime has to execute it without ambiguity.
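To make the integration-test bullet concrete, here is a sketch (assumes vitest, the plain-object tool shape used in the snippets above, and an agent result that exposes a response field; adjust to the versions you run):

import { expect, it } from "vitest";
import { OpenAIAgent } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";

it("agent can call a real tool and return structured output", async () => {
  let called = false;
  const echoTool = {
    name: "echo_tool",
    description: "Echo the given text back verbatim",
    fn: async ({ text }: { text: string }) => {
      called = true;
      return text;
    },
  };

  const agent = new OpenAIAgent({
    tools: [echoTool],
    llm: new OpenAI({ model: "gpt-4o-mini" }),
  });

  const result = await agent.chat({ message: "Use echo_tool to echo 'ping'" });

  expect(called).toBe(true); // the tool actually executed
  // Assumption: the agent result exposes a response field in your version.
  expect(String(result.response)).toContain("ping");
});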


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
