# How to Fix 'tool calling failure' in LangChain (TypeScript)

## What “tool calling failure” actually means

This error usually means the model returned a response that LangChain could not parse into a valid tool call. In practice, it shows up when you wire up `bindTools()`, a tool-calling agent, or an agent executor, but the model replies with plain text, malformed JSON, or a tool name that does not match any registered tool.

You’ll see it most often with `ChatOpenAI`, `ChatAnthropic`, or any provider where the model supports tools but your prompt, schema, or runtime setup is off.
## The Most Common Cause: wrong tool binding or unsupported model behavior

The #1 cause is simple: you asked LangChain to do tool calling, but the model path you used does not reliably return structured tool calls.

A common mistake is using a normal chat-completion flow and expecting tools to work automatically. Another is binding tools but then calling `.invoke()` in a way that bypasses the agent/tool execution loop.
### Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Tool bound, but no agent loop | Use an agent or runnable that actually executes tool calls |
| Model returns text instead of structured tool call | Use a model that supports tool calling and a proper schema |
| Tool name mismatch | Register the exact same tool name in code and prompt |
```typescript
// ❌ Broken: binds tools but just invokes the model directly
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

const weatherTool = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get weather by city",
  schema: z.object({
    city: z.string(),
  }),
  func: async ({ city }) => `Weather in ${city}: sunny`,
});

const llm = new ChatOpenAI({ modelName: "gpt-4o-mini" });
const llmWithTools = llm.bindTools([weatherTool]);

// The model may emit a tool call here, but nothing ever executes it.
const res = await llmWithTools.invoke("What's the weather in Paris?");
console.log(res);
```
```typescript
// ✅ Fixed: use an agent executor so LangChain can handle tool calls
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const weatherTool = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get weather by city",
  schema: z.object({
    city: z.string(),
  }),
  func: async ({ city }) => `Weather in ${city}: sunny`,
});

const llm = new ChatOpenAI({ modelName: "gpt-4o-mini", temperature: 0 });
const tools = [weatherTool];

// createToolCallingAgent requires an {agent_scratchpad} placeholder so
// intermediate tool calls and results can be threaded back into the prompt.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Use tools when needed."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createToolCallingAgent({ llm, tools, prompt });
const executor = new AgentExecutor({ agent, tools });

const result = await executor.invoke({
  input: "What's the weather in Paris?",
});
console.log(result);
```
If you are using `bindTools()`, make sure the downstream code actually understands tool calls. Directly invoking a bound chat model is not enough in many setups.
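To make explicit what the executor adds, here is a minimal, dependency-free sketch of the read-dispatch-execute loop it runs for you. The `ToolCall` shape mirrors LangChain's `AIMessage.tool_calls` entries, but `aiMessage` and `toolRegistry` below are hand-written stand-ins for illustration, not real LangChain objects:

```typescript
// Minimal sketch of the loop an AgentExecutor runs for you.
type ToolCall = { name: string; args: Record<string, unknown> };

const toolRegistry: Record<string, (args: any) => Promise<string>> = {
  get_weather: async ({ city }) => `Weather in ${city}: sunny`,
};

// Pretend this came back from llmWithTools.invoke(...)
const aiMessage = {
  content: "",
  tool_calls: [{ name: "get_weather", args: { city: "Paris" } }] as ToolCall[],
};

async function runToolCalls(calls: ToolCall[]): Promise<string[]> {
  const results: string[] = [];
  for (const call of calls) {
    const tool = toolRegistry[call.name];
    // An unregistered name is exactly the "tool calling failure" case.
    if (!tool) throw new Error(`Unresolved tool: ${call.name}`);
    results.push(await tool(call.args));
  }
  return results;
}

runToolCalls(aiMessage.tool_calls).then((outputs) => console.log(outputs));
// logs ["Weather in Paris: sunny"]
```

When you invoke a bound model directly, nothing plays the role of `runToolCalls`: the tool call sits unexecuted in the response, and the results are never fed back to the model.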
## Other Possible Causes

### 1) Tool schema does not match what the model sends

If your schema says `city` but the model sends `location`, you’ll get parsing failures or silent retries ending in errors like:

- `Error: Tool call arguments are invalid`
- `OutputParserException`
- tool calling failure
```typescript
// ❌ Broken
schema: z.object({
  location: z.string(),
});

// ✅ Fixed
schema: z.object({
  city: z.string(),
});
```
### 2) The tool name in the prompt does not match the registered tool name

LangChain matches tools on exact names. If your prompt says `weather_lookup` but your code registers `get_weather`, the agent may hallucinate a tool call that cannot be resolved.
```typescript
// ❌ Broken
name: "get_weather"
// prompt mentions:
"Call weather_lookup if needed."

// ✅ Fixed
name: "weather_lookup"
// and keep the prompt consistent
```
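A cheap guard against this drift is to scan the prompt for tool-style identifiers and compare them against what you actually registered. The snippet below is a hypothetical sketch (the regex assumes snake_case tool names, which both examples above use):

```typescript
const registeredTools = ["get_weather"];
const systemPrompt = "Call weather_lookup if needed.";

// Pull snake_case identifiers out of the prompt and flag unregistered ones.
const mentioned = systemPrompt.match(/\b[a-z]+(?:_[a-z]+)+\b/g) ?? [];
const unknown = mentioned.filter((name) => !registeredTools.includes(name));

console.log(unknown);
// ["weather_lookup"] — rename one side so they match
```

Running a check like this at startup turns a runtime hallucinated-tool failure into an immediate, debuggable error.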
### 3) The model or provider does not support native tool calling in your configuration

Some models need specific parameters or versions. For OpenAI, use a current tool-capable model. For Anthropic, make sure you're using a supported chat model and a current SDK integration.
```typescript
// ❌ Risky config
new ChatOpenAI({ modelName: "gpt-3.5-turbo" });

// ✅ Better
new ChatOpenAI({ modelName: "gpt-4o-mini", temperature: 0 });
```
### 4) Your prompt encourages free-form answers instead of structured calls

If you tell the model “answer naturally” and also expect a tool call, it may skip the tool entirely.
```typescript
// ❌ Broken system message
"You are a helpful assistant. Answer directly."

// ✅ Better system message
"You are a helpful assistant. Use tools whenever external data is required."
```
## How to Debug It

- Inspect the raw model output.
  - Log the final AI message before LangChain parses it.
  - Look for plain text where you expected `tool_calls`.
- Verify the exact error class.
  - Common ones include `OutputParserException`, `BadRequestError`, and `Error: Tool calling failure`.
  - The class tells you whether this is a schema issue, a provider issue, or an orchestration issue.
- Check your tool registration. Confirm:
  - the tool name matches exactly
  - the schema fields match exactly
  - the tool is included in the executor/agent setup
- Reduce to one known-good path.
  - Start with one simple tool.
  - Use `temperature: 0`.
  - Remove extra prompts and middleware.
  - If it works there, add complexity back one piece at a time.
A useful debug pattern is to print both what LangChain received and what your tool expects:
```typescript
console.log("TOOLS:", tools.map((t) => t.name));
console.log("INPUT:", input);
```
If you’re using OpenAI responses directly through LangChain, also inspect message metadata for malformed output:
```typescript
const result = await executor.invoke({ input });
console.dir(result, { depth: null });
```
## Prevention

- Keep tool schemas small and explicit. Prefer required fields with clear names like `city`, `policy_id`, or `claim_number`.
- Use one agent pattern consistently. If you need tools, use an actual agent executor such as `createToolCallingAgent` plus `AgentExecutor`.
- Pin supported models and test them in CI. A change from one model version to another can break native tool calling without changing your TypeScript code.
If you want this error to stop showing up randomly, treat tools as structured contracts, not prompts. In LangChain TypeScript, most “tool calling failure” bugs come from mismatched expectations between the model output and what your runtime can actually execute.
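One lightweight way to act on "tools as structured contracts" is to keep name, description, and required arguments in a single object per tool and derive everything else from it. The shape below is illustrative, not a LangChain API:

```typescript
// One source of truth per tool: name, description, and argument keys.
interface ToolContract {
  name: string;
  description: string;
  requiredArgs: string[];
}

const contracts: ToolContract[] = [
  { name: "get_weather", description: "Get weather by city", requiredArgs: ["city"] },
  { name: "get_policy", description: "Look up a policy by id", requiredArgs: ["policy_id"] },
];

// Derive the prompt hint from the contract instead of writing it by hand,
// so the prompt can never drift from the registered names.
const promptHint = contracts
  .map((c) => `${c.name}: ${c.description}`)
  .join("\n");
console.log(promptHint);
```

Because the prompt text and the registered names come from the same array, the name-mismatch failure mode described above cannot occur by accident.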
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.