How to Fix 'tool calling failure' in LangGraph (Python)
What the error means
A "tool calling failure" in LangGraph usually means the model produced a tool call that your graph could not execute or route correctly. In practice, it shows up when the assistant node emits tool_calls but something downstream is wrong: the next node, the tool binding, the message format, or the state wiring.
You’ll typically hit this right after adding tools to an agent graph, especially when moving from a plain LangChain chat model to a StateGraph with ToolNode.
The Most Common Cause
The #1 cause is simple: your LLM is not actually bound to the tools, or your graph is missing the ToolNode that handles tool execution.
When this happens, you often see errors like:
- `ValueError: No tool calls found in AIMessage`
- `KeyError: 'tool_calls'`
- `langgraph.errors.InvalidUpdateError`
- A generic "tool calling failure" message
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Model generates a tool call but is not bound with `.bind_tools(...)` | Model is bound to tools before being used in the graph |
| Graph routes directly from assistant back to assistant | Graph routes tool calls into `ToolNode` first |
| Tool messages are never appended back into state | Tool outputs are returned as `ToolMessage` objects |
```python
# BROKEN
from langgraph.graph import StateGraph, MessagesState
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

def assistant(state: MessagesState):
    # llm is NOT bound to tools
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("assistant", assistant)
# No ToolNode registered
graph.set_entry_point("assistant")
graph.set_finish_point("assistant")
app = graph.compile()
```
```python
# FIXED
from langgraph.graph import StateGraph, MessagesState, END
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def get_balance(account_id: str) -> str:
    """Look up the balance for an account."""  # @tool needs a description; the docstring supplies it
    return f"Balance for {account_id}: $1250"

tools = [get_balance]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)

def assistant(state: MessagesState):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: MessagesState):
    last_msg = state["messages"][-1]
    return "tools" if getattr(last_msg, "tool_calls", None) else END

graph = StateGraph(MessagesState)
graph.add_node("assistant", assistant)
graph.add_node("tools", ToolNode(tools))
graph.set_entry_point("assistant")
graph.add_conditional_edges("assistant", should_continue, {"tools": "tools", END: END})
graph.add_edge("tools", "assistant")
app = graph.compile()
```
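The control flow the fixed graph implements is essentially a loop: assistant emits tool calls, the tools node executes them, and the results flow back to the assistant until no tool calls remain. Here is a pure-Python sketch of that loop with a stubbed model, just to make the routing explicit. `StubModel` and the minimal message classes are illustrative stand-ins, not LangChain APIs:

```python
# Pure-Python sketch of the assistant -> tools -> assistant loop the fixed
# graph implements. StubModel and these message classes are stand-ins.
class AIMessage:
    def __init__(self, content, tool_calls=None):
        self.content = content
        self.tool_calls = tool_calls or []

class ToolMessage:
    def __init__(self, content, tool_call_id):
        self.content = content
        self.tool_call_id = tool_call_id

class StubModel:
    """First call emits a tool call; second call answers using the result."""
    def invoke(self, messages):
        if any(isinstance(m, ToolMessage) for m in messages):
            return AIMessage(content=f"Your balance is: {messages[-1].content}")
        return AIMessage(content="", tool_calls=[
            {"id": "call_1", "name": "get_balance", "args": {"account_id": "acct-42"}}
        ])

def get_balance(account_id: str) -> str:
    return f"Balance for {account_id}: $1250"

def run(model, tools_by_name, messages):
    while True:
        ai = model.invoke(messages)           # assistant node
        messages.append(ai)
        if not ai.tool_calls:                 # conditional edge: no tool calls -> END
            return messages
        for call in ai.tool_calls:            # tools node (what ToolNode does for you)
            result = tools_by_name[call["name"]](**call["args"])
            messages.append(ToolMessage(result, call["id"]))
        # implicit edge back to assistant

history = run(StubModel(), {"get_balance": get_balance},
              [("user", "What is my balance?")])
```

If any of the three pieces is missing (binding, the tools step, or the edge back), you get exactly the failure modes in the table above.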
If you only fix one thing, fix this first. Most “tool calling failure” issues are just broken tool routing.
Other Possible Causes
1. Your tool schema does not match what the model emitted
If your function signature is ambiguous or invalid for Pydantic conversion, the model can emit arguments that fail validation.
```python
# BAD: untyped, unclear schema
@tool
def lookup_policy(data):
    """Look up a policy."""
    return "ok"

# GOOD: explicit types and names
@tool
def lookup_policy(policy_id: str) -> str:
    """Look up a policy by its ID."""
    return f"policy={policy_id}"
```
If the model sends {} or malformed args, LangGraph may fail during tool execution with validation errors.
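One way to see why `{}` fails: the arguments the model emits must at least bind to the tool's signature. The `validate_args` helper below is hypothetical and uses only the standard library's `inspect` module; LangChain does the real check via Pydantic, which also validates types, not just names:

```python
import inspect

def validate_args(func, args: dict) -> bool:
    """Return True if model-emitted args bind cleanly to func's signature."""
    try:
        inspect.signature(func).bind(**args)
        return True
    except TypeError:
        return False

def lookup_policy(policy_id: str) -> str:
    return f"policy={policy_id}"

validate_args(lookup_policy, {"policy_id": "p-1"})  # True
validate_args(lookup_policy, {})                    # False: required arg missing
validate_args(lookup_policy, {"policy": "p-1"})     # False: wrong parameter name
```

Clear, typed parameter names make it far more likely the model emits arguments that pass this kind of check.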
2. You are returning raw strings instead of message objects
LangGraph expects message state to contain proper BaseMessage instances like AIMessage and ToolMessage. Returning plain strings breaks downstream routing.
```python
# BAD
def assistant(state):
    return {"messages": ["calling tool now"]}

# GOOD
def assistant(state):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}
```
If you use custom reducers or custom state keys, make sure they preserve message objects.
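If you do write a custom reducer, a defensive version can catch raw strings at update time instead of failing later in routing. This is an illustrative sketch, not LangGraph's built-in `add_messages` reducer:

```python
def safe_add_messages(existing: list, update: list) -> list:
    """Append new messages to state, rejecting raw strings early."""
    for msg in update:
        if isinstance(msg, str):
            raise TypeError(
                "state['messages'] must hold message objects "
                f"(AIMessage, ToolMessage, ...), got str: {msg!r}"
            )
    return list(existing) + list(update)
```

Failing loudly here turns a confusing downstream "tool calling failure" into an obvious `TypeError` at the node that caused it.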
3. Your conditional edge checks the wrong field
A common bug is checking .content instead of .tool_calls.
```python
# BAD
def route(state):
    last_msg = state["messages"][-1]
    if "search" in last_msg.content:
        return "tools"
    return END

# GOOD
def route(state):
    last_msg = state["messages"][-1]
    if getattr(last_msg, "tool_calls", None):
        return "tools"
    return END
```
In LangGraph agents, tool routing should be based on actual structured tool calls, not string matching. LangGraph also ships a prebuilt router, `tools_condition` (in `langgraph.prebuilt`), that performs exactly this check for you.
4. Your tool name does not match between binding and execution
This happens when you rename a function after binding it, or manually construct messages with a mismatched tool name.
```python
# BAD: the decorator renames the tool, so the name the model emits
# ("lookup_account") no longer matches the function name used elsewhere
@tool("lookup_account")
def get_account(account_id: str) -> str:
    """Look up an account."""
    return account_id

# GOOD: keep the tool name and function name consistent
@tool
def lookup_account(account_id: str) -> str:
    """Look up an account."""
    return account_id
```
If the model emits a tool call for "lookup_account" but your graph only knows "get_account", execution fails.
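A cheap guard against this class of bug is to compare the names the model emits against the names your graph actually registered. The helper below is hypothetical, for illustration:

```python
def unregistered_tool_calls(tool_calls, registered_names):
    """Return the emitted tool names that no registered tool matches."""
    known = set(registered_names)
    return [call["name"] for call in tool_calls if call["name"] not in known]

calls = [{"name": "lookup_account", "args": {"account_id": "a1"}}]
unregistered_tool_calls(calls, {"get_account"})     # ['lookup_account'] -> will fail
unregistered_tool_calls(calls, {"lookup_account"})  # [] -> names are consistent
```

Running a check like this in your conditional edge (or in a test) surfaces the mismatch before `ToolNode` has to fail on it.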
How to Debug It
1. Print the last AI message

Check whether it contains real `tool_calls`:

```python
last_msg = state["messages"][-1]
print(type(last_msg), last_msg)
print(getattr(last_msg, "tool_calls", None))
```

2. Verify tools are bound

Confirm you called `.bind_tools(tools)` on the model instance actually used inside the node. Don't bind one instance and invoke another.

3. Inspect graph routing

Make sure the assistant → tools → assistant path exists. If you skip `ToolNode`, LangGraph will not execute tools for you.

4. Run one step at a time

Start with a single prompt that should trigger one obvious tool call. If needed, stream events:

```python
for event in app.stream({"messages": [("user", "What is my balance?")]}):
    print(event)
```
If you see an AI message with a tool call but no subsequent ToolMessage, your problem is routing or execution. If you never see tool_calls, your problem is binding or prompting.
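A tiny helper can condense the first check into one call. This is a hypothetical convenience function, not part of LangGraph:

```python
def describe_last_message(state: dict) -> dict:
    """Summarize the last message in state: its type and structured tool calls."""
    msg = state["messages"][-1]
    return {
        "type": type(msg).__name__,
        "tool_calls": list(getattr(msg, "tool_calls", None) or []),
    }
```

If `tool_calls` is empty, look at binding and prompting; if it is populated but no `ToolMessage` ever follows, look at routing and execution.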
Prevention
- Bind tools on the exact model instance used in the node.
- Use typed tool signatures with clear parameter names.
- Route based on `message.tool_calls`, not string content.
- Always include a `ToolNode` when using LangGraph prebuilt tool workflows.
- Add a small integration test that asserts an AI message with `tool_calls` leads to a `ToolMessage`.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.