How to Fix 'tool calling failure during development' in LangChain (TypeScript)
When LangChain throws a "tool calling failure" during development, it usually means the model produced a tool call that LangChain could not validate or execute. In TypeScript, this shows up most often while wiring bindTools(), defining tool schemas, or passing messages through an agent loop.
The error is rarely “random.” It usually means one of three things: the model isn’t actually tool-enabled, the tool schema doesn’t match what the model emitted, or your message flow dropped the assistant/tool messages LangChain needs to continue.
The Most Common Cause
The #1 cause is a mismatch between the tool definition and what the model is allowed to call. In LangChain TypeScript, this often happens when you define a tool but forget to bind it to the chat model, or you bind it but then invoke a model that doesn’t support tool calling.
Here’s the broken pattern:
```ts
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  async ({ city }) => `Weather in ${city} is 21C`,
  {
    name: "get_weather",
    description: "Get weather by city",
    schema: z.object({
      city: z.string(),
    }),
  }
);

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});

// Broken: the tool exists, but the model instance was never bound to it
const response = await llm.invoke("What's the weather in London?");
console.log(response);
```
And here’s the fixed pattern:
```ts
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  async ({ city }) => `Weather in ${city} is 21C`,
  {
    name: "get_weather",
    description: "Get weather by city",
    schema: z.object({
      city: z.string(),
    }),
  }
);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Fixed: bind the tool, then invoke the bound instance
const llmWithTools = llm.bindTools([getWeather]);
const response = await llmWithTools.invoke("What's the weather in London?");
console.log(response);
```
A few important details:
- Use a model that supports tool calling.
- Call `bindTools([...])` before `invoke()`.
- Make sure your tool schema matches the shape you expect.
If you skip this, you’ll often see errors like:
- `Error: Tool calling failed`
- `BadRequestError: Invalid tool call`
- `AIMessage expected an array of ToolCall objects`
- `No tools provided for requested function call`
Other Possible Causes
| Cause | What it looks like | Fix |
|---|---|---|
| Tool schema mismatch | Model emits args that fail Zod validation | Align schema with actual input |
| Wrong message type handling | Agent loop loses AIMessage.tool_calls or ToolMessage | Preserve message history exactly |
| Unsupported model | Model returns plain text instead of structured tool calls | Switch to a tool-capable chat model |
| Bad tool name/duplicate names | Two tools share the same name or name doesn’t match prompt expectations | Use unique, stable names |
1) Schema mismatch
If your schema says `city` is required but the model emits `{ location: "London" }`, LangChain will reject it.
```ts
// Broken: model emits { location: "London" }, schema only accepts city
schema: z.object({
  city: z.string(),
})

// Fixed: accept either field while you align the schema
schema: z.object({
  city: z.string().optional(),
  location: z.string().optional(),
})
```
Better yet, make the schema reflect exactly what your app needs. Don’t overconstrain unless you must.
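To see what the validation step is rejecting, here is a dependency-free sketch of what Zod's `safeParse` does conceptually with the model's emitted arguments. The helper name and shapes are hypothetical, not LangChain APIs:

```typescript
// Hypothetical sketch of the validation step: LangChain runs the model's
// emitted arguments through your schema before executing the tool.
type ValidationResult =
  | { ok: true; args: { city: string } }
  | { ok: false; error: string };

// Mimics z.object({ city: z.string() }).safeParse(args)
function validateWeatherArgs(args: Record<string, unknown>): ValidationResult {
  if (typeof args.city === "string") {
    return { ok: true, args: { city: args.city } };
  }
  return {
    ok: false,
    error: `expected "city" to be a string, got keys: ${Object.keys(args).join(", ")}`,
  };
}

// The shape your schema expects: passes
console.log(validateWeatherArgs({ city: "London" }));

// The shape the model actually emitted: fails, and surfaces as a tool calling error
console.log(validateWeatherArgs({ location: "London" }));
```

Logging the failing keys like this usually tells you immediately whether to rename the schema field or reword the tool description.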
2) Not passing assistant/tool messages back into the loop
This breaks agent-style flows. The model calls a tool, but your code only keeps user messages.
```ts
// Broken: only the user message is kept, so the tool-call context is lost
messages = [new HumanMessage("Check order status")];
const result = await agent.invoke({ messages });

// Fixed: keep the full conversation, including tool-call messages
messages.push(result);     // AIMessage carrying tool_calls
messages.push(toolResult); // ToolMessage with the matching tool_call_id
const next = await agent.invoke({ messages });
```
If you’re building a custom loop, preserve:
- `HumanMessage`
- `AIMessage` with `tool_calls`
- `ToolMessage` with a matching `tool_call_id`
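Here is a dependency-free sketch of that loop, using plain objects in place of LangChain's message classes. The shapes mirror `HumanMessage`, `AIMessage`, and `ToolMessage`, but everything here is illustrative (including the fake model turn):

```typescript
// Illustrative message shapes mirroring LangChain's message classes
type Msg =
  | { role: "human"; content: string }
  | { role: "ai"; content: string; tool_calls: { id: string; name: string; args: any }[] }
  | { role: "tool"; content: string; tool_call_id: string };

// Stand-in for a model turn that decides to call a tool
function fakeModelTurn(_history: Msg[]): Msg {
  return {
    role: "ai",
    content: "",
    tool_calls: [{ id: "call_1", name: "get_order_status", args: { orderId: "A42" } }],
  };
}

const history: Msg[] = [{ role: "human", content: "Check order status" }];

const aiMsg = fakeModelTurn(history);
history.push(aiMsg); // keep the AIMessage with its tool_calls intact

if (aiMsg.role === "ai") {
  for (const call of aiMsg.tool_calls) {
    const result = `Order ${call.args.orderId} shipped`; // run the real tool here
    // keep the ToolMessage, echoing the id the model assigned
    history.push({ role: "tool", content: result, tool_call_id: call.id });
  }
}

console.log(history.map((m) => m.role)); // roles: human, ai, tool
```

The point is that nothing gets dropped or reconstructed: the `tool_call_id` the model invented must come back unchanged on the tool result, or the next turn fails.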
3) Using a non-tool-capable model
Some models can chat but not reliably emit structured calls. In practice, this means you’ll see plain text where LangChain expects a function call.
```ts
// Risky for tools
new ChatOpenAI({ model: "gpt-3.5-turbo" })

// Safer for tools
new ChatOpenAI({ model: "gpt-4o-mini" })
```
Check your provider docs too. Not every deployment variant supports function/tool calling equally.
4) Duplicate or unstable tool names
If two tools are named `search`, or you generate names dynamically per request, LangChain can route incorrectly.
```ts
// Broken: both tools share the name "search"
const t1 = tool(fn1, { name: "search", description: "...", schema });
const t2 = tool(fn2, { name: "search", description: "...", schema });

// Fixed: unique, stable names
const t1 = tool(fn1, { name: "customer_search", description: "...", schema });
const t2 = tool(fn2, { name: "policy_search", description: "...", schema });
```
Tool names should be deterministic and unique across a run.
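A cheap guard before binding catches this early. This is a hypothetical helper, not a LangChain API:

```typescript
// Hypothetical pre-bind check: fail fast on duplicate tool names
function assertUniqueToolNames(tools: { name: string }[]): void {
  const seen = new Set<string>();
  for (const t of tools) {
    if (seen.has(t.name)) {
      throw new Error(`Duplicate tool name: "${t.name}" (routing will be ambiguous)`);
    }
    seen.add(t.name);
  }
}

assertUniqueToolNames([{ name: "customer_search" }, { name: "policy_search" }]); // ok
// assertUniqueToolNames([{ name: "search" }, { name: "search" }]); // throws
```

Run it over the exact array you pass to `bindTools([...])`, especially if any names are generated.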
How to Debug It
1) Inspect the raw AI message
- Log the full response before any parsing.
- Look for `tool_calls`, invalid args, or plain text where JSON-like output was expected.

2) Verify the exact error class
- Common ones include `BadRequestError`, `ToolInvocationError`, and Zod validation errors from your schema.
- If it fails before execution, it's usually binding/model/schema related.
- If it fails after execution starts, it's usually message-loop related.

3) Print your bound tools
- Confirm you called `.bindTools([...])` on the same instance you invoke.
- Make sure there are no empty arrays or stale references.

```ts
console.log(llm);
console.log(llmWithTools);
```

4) Reduce to one tool and one turn
- Remove all middleware, memory layers, and extra prompts.
- Test with one simple tool and one direct user prompt.
- If that works, add complexity back one layer at a time.
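The first step, inspecting the raw message, can be sketched as a small classifier over the response shape. Plain objects stand in for the real message here; a real `AIMessage` exposes `content` and `tool_calls` similarly, but this inspector is illustrative:

```typescript
// Minimal classifier for a model response: did we get tool calls or plain text?
interface RawAIMessage {
  content: string;
  tool_calls?: { id: string; name: string; args: Record<string, unknown> }[];
}

function describeResponse(msg: RawAIMessage): string {
  if (msg.tool_calls && msg.tool_calls.length > 0) {
    return `tool call(s): ${msg.tool_calls.map((c) => c.name).join(", ")}`;
  }
  return `plain text (no tool_calls): ${msg.content.slice(0, 40)}`;
}

// What a bound, tool-capable model returns
console.log(
  describeResponse({
    content: "",
    tool_calls: [{ id: "1", name: "get_weather", args: { city: "London" } }],
  })
);

// Symptom of an unbound or non-tool-capable model
console.log(describeResponse({ content: "The weather in London is mild today." }));
```

If the second case is what your logs show, the fix lives in binding or model choice, not in your agent loop.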
Prevention
- Bind tools immediately after creating the chat model, and invoke only the bound instance.
- Keep schemas strict but realistic; validate against actual payloads from logs.
- Use stable message handling in agent loops:
  - keep assistant messages with `tool_calls`
  - return matching `ToolMessage` objects
  - don't reconstruct conversation state loosely
If you’re seeing this in production-like development code, assume it’s not LangChain “being flaky.” It’s almost always a contract mismatch between model output, tool schema, and message flow. Fix those three layers first and the error usually disappears fast.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.