How to Fix 'tool calling failure during development' in LangGraph (TypeScript)
If you’re seeing a tool calling failure during development in LangGraph, it usually means the model tried to emit a tool call but your graph could not execute it. In practice, this shows up when you wire an LLM node to tools but the tool schema, tool binding, or graph state is wrong.
This is common during local development with TypeScript because the failure is often not in the model itself. It’s usually a mismatch between what `AIMessage.tool_calls` contains and what your `ToolNode`, reducer, or state shape expects.
The Most Common Cause
The #1 cause is a mismatch between the assistant message that contains tool calls and the ToolNode that executes them.
In LangGraph, the model must return an AIMessage with valid tool_calls, and the next node must be a ToolNode configured with the same tools. If you manually craft messages or forget to bind tools to the model, you’ll hit errors like:
- `Error: Tool call failed`
- `TypeError: Cannot read properties of undefined (reading 'name')`
- `Invalid tool call: missing id`
- `No tool found for name: getWeather`
Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Model not bound to tools | Model bound with .bindTools(tools) |
| Tool node not registered | new ToolNode(tools) added to graph |
| Returning plain text instead of tool-call message | Let the LLM emit structured tool calls |
```typescript
// BROKEN
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  async ({ city }) => `Weather in ${city}: sunny`,
  {
    name: "getWeather",
    description: "Get weather by city",
    schema: z.object({ city: z.string() }),
  }
);

const llm = new ChatOpenAI({ model: "gpt-4o-mini" }); // ❌ not bound to tools
const tools = [getWeather];

const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", async (state) => {
    const response = await llm.invoke(state.messages);
    return { messages: [response] };
  })
  .addNode("tools", new ToolNode(tools))
  .addEdge("__start__", "agent")
  .addEdge("agent", "tools"); // ❌ will fail when no valid tool_calls exist
```
```typescript
// FIXED
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { AIMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  async ({ city }) => `Weather in ${city}: sunny`,
  {
    name: "getWeather",
    description: "Get weather by city",
    schema: z.object({ city: z.string() }),
  }
);

const tools = [getWeather];
const llm = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools(tools);

const agent = async (state: typeof MessagesAnnotation.State) => {
  const response = await llm.invoke(state.messages);
  return { messages: [response] };
};

const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", agent)
  .addNode("tools", new ToolNode(tools))
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", (state) => {
    const last = state.messages[state.messages.length - 1];
    return last instanceof AIMessage && last.tool_calls?.length ? "tools" : "__end__";
  })
  .addEdge("tools", "agent")
  .compile(); // compile the graph so it can actually be invoked
```
The key differences are simple:

- The model is explicitly bound to tools
- The graph only routes to the `ToolNode` when there are actual tool calls
- The assistant message remains an `AIMessage`, not a hand-built object
Other Possible Causes
1. Invalid tool schema
If your Zod schema doesn’t match what the model emits, LangGraph can’t parse the arguments.
```typescript
// BROKEN
schema: z.object({
  cityName: z.string(), // model sends `{ city: "Paris" }`
});

// FIXED
schema: z.object({
  city: z.string(),
});
```
This often surfaces as malformed arguments or empty tool inputs.
2. Wrong message type in state
LangGraph expects proper message classes like HumanMessage, AIMessage, and ToolMessage. If you push raw JSON objects into state, downstream nodes may fail.
```typescript
// BROKEN
return {
  messages: [{ role: "assistant", content: "calling tool", tool_calls: [] }],
};

// FIXED
import { AIMessage } from "@langchain/core/messages";

// tool_calls here is a valid, non-empty array of tool calls built by your code
return {
  messages: [new AIMessage({ content: "", tool_calls })],
};
```
3. Missing tool result routing
If your graph sends control back to the agent without adding a ToolMessage, the conversation state becomes inconsistent.
```typescript
// BROKEN flow:
// agent -> tools -> agent (but tools node never ran or returned nothing)
```
Make sure your ToolNode is actually connected and returns messages into state.
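One way to catch this is to diff the tool calls in state against the tool results. Below is a minimal sketch using plain structural types as stand-ins for LangChain’s message classes (a real check would inspect `AIMessage`/`ToolMessage` instances); the helper name is illustrative, not a LangGraph API:

```typescript
// Structural stand-ins for LangChain's AIMessage / ToolMessage / HumanMessage.
type ToolCall = { id: string; name: string; args: Record<string, unknown> };
type Msg =
  | { role: "ai"; content: string; tool_calls?: ToolCall[] }
  | { role: "tool"; content: string; tool_call_id: string }
  | { role: "human"; content: string };

// Returns the ids of tool calls the agent emitted that never received a result.
function unansweredToolCalls(messages: Msg[]): string[] {
  // Collect ids of all tool results already in state.
  const answered = new Set(
    messages
      .filter((m): m is Extract<Msg, { role: "tool" }> => m.role === "tool")
      .map((m) => m.tool_call_id)
  );
  // Every tool call the agent emitted must have a matching result.
  return messages
    .filter((m): m is Extract<Msg, { role: "ai" }> => m.role === "ai")
    .flatMap((m) => m.tool_calls ?? [])
    .map((c) => c.id)
    .filter((id) => !answered.has(id));
}
```

If this returns a non-empty array right before control re-enters the agent, your tools node is not running or not writing its `ToolMessage` results back into state.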
4. Tool name mismatch
The tool name the model emits must exactly match the name of a tool registered in both `.bindTools()` and the `ToolNode`.
```typescript
// BROKEN
const weatherTool = tool(fn, { name: "weather_lookup", ... });
const llm = new ChatOpenAI(...).bindTools([{ name: "getWeather", ... }]); // mismatch

// FIXED
const weatherTool = tool(fn, { name: "getWeather", ... });
const llm = new ChatOpenAI(...).bindTools([weatherTool]);
```
How to Debug It
- **Inspect the last assistant message**
  - Log `state.messages[state.messages.length - 1]`.
  - Confirm it is an `AIMessage`.
  - Check whether `tool_calls` exists and has at least one entry.
- **Print the exact tool call payload**
  - Look for a missing `id`, a wrong `name`, or invalid JSON arguments.
  - Example: `console.log(JSON.stringify(last.tool_calls, null, 2));`
- **Verify the model is bound to tools**
  - If you use an OpenAI-compatible chat model, make sure you called `.bindTools(tools)`.
  - Without this, the LLM may answer in plain text instead of emitting structured calls.
- **Check graph routing**
  - Ensure you only route to `"tools"` when there are valid tool calls.
  - If you always go to `"tools"`, you’ll get failures on normal assistant replies.
  - Add conditional edges based on message inspection.
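The first two checks can be bundled into a small helper you call while debugging. This is an illustrative sketch over plain message shapes, not a LangGraph API; the function name and types are assumptions:

```typescript
// Structural stand-ins for LangChain message classes, for illustration only.
type ToolCall = { id?: string; name?: string; args?: unknown };
type Msg = { role: string; content: string; tool_calls?: ToolCall[] };

// Summarizes the last message in state so you can see at a glance
// why routing went to (or should skip) the tools node.
function describeLastMessage(messages: Msg[]): string {
  const last = messages[messages.length - 1];
  if (!last) return "state.messages is empty";
  const calls = last.tool_calls ?? [];
  if (calls.length === 0) return `${last.role}: plain reply, route to __end__`;
  // Flag structurally invalid tool calls (missing id or name).
  const problems = calls
    .filter((c) => !c.id || !c.name)
    .map((c) => `missing ${!c.id ? "id" : "name"}`);
  return problems.length
    ? `${last.role}: ${calls.length} tool call(s), INVALID (${problems.join(", ")})`
    : `${last.role}: ${calls.length} tool call(s), route to tools`;
}
```

Dropping a `console.log(describeLastMessage(state.messages))` into your agent node makes bad routing decisions obvious immediately.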
Prevention
- Always use real LangChain message classes: `HumanMessage`, `AIMessage`, `ToolMessage`.
- Keep tool names and schemas stable across model binding, graph registration, and tests.
- Add a small integration test that asserts:
  - the agent emits at least one valid `tool_call`
  - the `ToolNode` returns a `ToolMessage`
  - the graph completes without throwing
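Here is a minimal sketch of the assertions such a test can make, written against plain message shapes so it stays self-contained. In a real test you would invoke your compiled graph and pass its final `messages` in; the `checkAgentRun` helper and types are illustrative assumptions, not LangGraph APIs:

```typescript
// Structural stand-ins for LangChain message classes, for illustration only.
type ToolCall = { id: string; name: string };
type Msg =
  | { role: "ai"; content: string; tool_calls?: ToolCall[] }
  | { role: "tool"; content: string; tool_call_id: string }
  | { role: "human"; content: string };

// Asserts the contract: at least one tool call was emitted,
// and every tool call received a matching tool result.
function checkAgentRun(messages: Msg[]): void {
  const calls = messages
    .filter((m): m is Extract<Msg, { role: "ai" }> => m.role === "ai")
    .flatMap((m) => m.tool_calls ?? []);
  if (calls.length === 0) throw new Error("agent never emitted a tool call");
  const results = new Set(
    messages
      .filter((m): m is Extract<Msg, { role: "tool" }> => m.role === "tool")
      .map((m) => m.tool_call_id)
  );
  for (const c of calls) {
    if (!results.has(c.id)) throw new Error(`no ToolMessage for call ${c.id}`);
  }
}
```

Running this over the final state of a real graph invocation catches broken tool routing before it reaches production.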
If you’re building agents for production systems like banking or insurance workflows, treat this as a contract problem, not an LLM problem. Most “tool calling failure” issues come from broken state shape, bad routing, or mismatched schemas — and those are fixable once you inspect the actual messages flowing through the graph.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.