How to Fix 'tool calling failure during development' in LangChain (Python)
Tool calling failures in LangChain usually mean the model tried to emit a structured tool call, but the agent/runtime could not validate, route, or execute it. You’ll hit this during development when wiring ChatOpenAI, tools, and an agent executor together, especially after upgrading LangChain or switching models.
The error often shows up as one of these:
- `tool calling failure during development`
- `InvalidToolCall`
- `Failed to parse tool call`
- `AIMessage` contains malformed `tool_calls`
- `ValueError: This model does not support tool calling`
The Most Common Cause
The #1 cause is using a model that does not actually support native tool calling, or binding tools incorrectly so LangChain expects tool calls that never arrive in the right format.
A common broken pattern is building a tool-calling agent around a model that has no native tool-calling support and assuming the agent wrapper will compensate.
| Broken | Fixed |
|---|---|
| Model without native tool-calling support | Tool-capable chat model with tools properly bound |
| Agent expects structured tool calls that never arrive | Model emits valid `tool_calls` |
| Fails with parse/validation errors | Tool execution works end-to-end |
```python
# BROKEN
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import OpenAI  # completion-model class, not ChatOpenAI

@tool
def get_balance(account_id: str) -> str:
    return f"Balance for {account_id}: $1,250"

llm = OpenAI(model="gpt-3.5-turbo-instruct")  # no .bind_tools(): not tool-capable
tools = [get_balance]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a banking assistant."),
    ("human", "{input}"),
])

# Raises ValueError: this model has no .bind_tools() method
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
print(executor.invoke({"input": "Check balance for account 123"}))
```
```python
# FIXED
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_balance(account_id: str) -> str:
    return f"Balance for {account_id}: $1,250"

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # supports native tool calling
tools = [get_balance]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a banking assistant."),
    ("human", "{input}"),
])

# create_tool_calling_agent calls llm.bind_tools(tools) internally,
# so the model must expose a working .bind_tools() method
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
print(executor.invoke({"input": "Check balance for account 123"}))
```
If you see `ValueError: This model does not support tool calling`, that's your clue: swap to a chat model/provider with native tool support. `create_tool_calling_agent` binds the tools to the model for you; when you call the chat model directly instead of through an agent, bind them yourself with `llm.bind_tools(tools)`.
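Under the hood, `create_tool_calling_agent` refuses any model object that lacks a `.bind_tools()` method, which is where that `ValueError` comes from. Here is a stdlib-only sketch of that capability check; the `FakeChatModel` and `FakeCompletionModel` classes are illustrative stand-ins, not LangChain classes:

```python
# Stand-in classes illustrating the capability check that
# create_tool_calling_agent performs. These are NOT LangChain classes.
class FakeCompletionModel:
    """A completion-style model: no native tool calling."""
    def invoke(self, prompt: str) -> str:
        return "plain text"

class FakeChatModel:
    """A chat model that supports native tool calling."""
    def bind_tools(self, tools):
        return self  # real implementations return a bound runnable

def assert_supports_tools(llm) -> None:
    """Mirror of the guard create_tool_calling_agent runs before binding."""
    if not hasattr(llm, "bind_tools"):
        raise ValueError("This model does not support tool calling")

assert_supports_tools(FakeChatModel())  # passes silently

try:
    assert_supports_tools(FakeCompletionModel())
except ValueError as exc:
    print(exc)  # -> This model does not support tool calling
```

If the check fails in your own code, no amount of prompt engineering will help; the fix is always a different model class.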
Other Possible Causes
1. Tool schema is invalid or too loose
If your tool signature cannot be serialized cleanly, LangChain may fail when converting it to JSON schema.
```python
# BAD: optional/union parameter types make the generated schema ambiguous
@tool
def lookup_policy(policy_id: int | None) -> dict:
    return {"policy_id": policy_id}
```
Fix it by making the schema explicit and simple.
```python
from pydantic import BaseModel, Field
from langchain_core.tools import tool

class PolicyInput(BaseModel):
    policy_id: str = Field(..., description="Policy number")

@tool(args_schema=PolicyInput)
def lookup_policy(policy_id: str) -> dict:
    return {"policy_id": policy_id}
```
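For context, tool binding ultimately serializes each tool into a JSON-schema function definition that the provider receives. Below is a hand-written sketch of roughly what `lookup_policy` becomes in the OpenAI-style shape; the exact structure LangChain's converter emits may differ slightly:

```python
import json

# Roughly the JSON-schema payload a provider receives for lookup_policy.
# Hand-written for illustration; LangChain's converter output may differ.
tool_spec = {
    "type": "function",
    "function": {
        "name": "lookup_policy",
        "description": "Look up a policy by its policy number.",
        "parameters": {
            "type": "object",
            "properties": {
                "policy_id": {"type": "string", "description": "Policy number"},
            },
            "required": ["policy_id"],
        },
    },
}

# Every leaf is a plain string type, so serialization is unambiguous
print(json.dumps(tool_spec, indent=2))
```

When your signature cannot be flattened into a schema this simple, that is usually the moment the conversion (or the model's attempt to fill it in) breaks.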
2. You passed the wrong message format into the agent
Tool-calling agents expect messages in the format they were built for. Mixing raw strings, legacy memory objects, or malformed message lists can trigger parse failures.
```python
# BAD: raw string input
executor.invoke("Check claim status for CLM-1001")
```
Use the expected input key instead.
```python
# GOOD: dict with the agent's expected input key
executor.invoke({"input": "Check claim status for CLM-1001"})
```
If you are using lower-level chat model calls, make sure messages are proper LangChain message objects like `HumanMessage`, `SystemMessage`, and `AIMessage`.
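A tiny guard in development code can catch the raw-string mistake before it reaches the executor. This `normalize_input` helper is a hypothetical convenience function, not a LangChain API:

```python
def normalize_input(user_input) -> dict:
    """Wrap a raw string into the {"input": ...} dict that tool-calling
    agents expect; pass well-formed dicts through unchanged.
    Hypothetical helper, not part of LangChain."""
    if isinstance(user_input, str):
        return {"input": user_input}
    if isinstance(user_input, dict) and "input" in user_input:
        return user_input
    raise TypeError("Expected a string or a dict with an 'input' key")

print(normalize_input("Check claim status for CLM-1001"))
# -> {'input': 'Check claim status for CLM-1001'}
```

Then call `executor.invoke(normalize_input(raw))` everywhere, and malformed inputs fail loudly at your boundary instead of deep inside the agent.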
3. Your prompt does not tell the model when to use tools
Some models need clearer instruction to use tools instead of answering directly. Without this, you may get plain text where LangChain expects a structured call.
```python
# Too vague: gives the model no reason to reach for tools
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer questions."),
    ("human", "{input}"),
])
```
Tighten it up:
```python
prompt = ChatPromptTemplate.from_messages([
    ("system", "Use available tools when needed. Never invent account data."),
    ("human", "{input}"),
])
```
This matters more in regulated workflows where hallucinated answers are unacceptable.
4. Version mismatch between LangChain packages
A common dev-time failure is mixing incompatible versions of:

- `langchain`
- `langchain-core`
- `langchain-openai`
- provider SDKs (e.g. `openai`)

That can surface as odd runtime errors around `tool_calls`, especially after a partial upgrade.

Check your installed versions:

```shell
pip show langchain langchain-core langchain-openai openai
```

Then align them on compatible releases from the same timeframe.
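You can also check versions from inside the environment your code actually runs in, which catches virtualenv mix-ups that a shell-level `pip show` can miss. This is a stdlib-only sketch; the package list is the one from above:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package: str) -> str:
    """Return the installed version of a package, or 'not installed'."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"

# Print the versions of the packages that must move together
for pkg in ("langchain", "langchain-core", "langchain-openai", "openai"):
    print(f"{pkg}: {installed_version(pkg)}")
```

Run it at the top of a failing script: if any line says "not installed" or shows a version you did not expect, you are debugging the wrong environment.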
How to Debug It

1. Print the raw model response
   - Inspect whether the assistant returned text or structured tool calls.
   - Look for `AIMessage.tool_calls` or malformed content.
   - If you only see plain text like "I will check that now", the model never emitted a valid call.
2. Verify your model supports tools
   - Confirm the provider/model pair supports native function/tool calling.
   - If you're using OpenAI via LangChain, prefer models like `gpt-4o`, `gpt-4o-mini`, or another supported chat model.
   - If you see `ValueError: This model does not support tool calling`, stop here and swap models.
3. Test the tool in isolation
   - Call the function directly before involving LangChain.
   - Confirm inputs/outputs are serializable and deterministic.
   - Example:

   ```python
   print(get_balance.invoke({"account_id": "123"}))
   ```

4. Turn on verbose tracing
   - Use `verbose=True` on the executor.
   - Add LangSmith tracing if available.
   - Watch where it fails: prompt formatting, LLM response parsing, or tool dispatch/execution.
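When you print the raw response, the distinction you are looking for can be sketched on plain dicts shaped like an OpenAI-style assistant message. The payloads below are illustrative, not captured provider output:

```python
# Illustrative assistant-message payloads in the OpenAI-style shape;
# real provider responses carry more fields than shown here.
plain_text_reply = {"role": "assistant", "content": "I will check that now."}

tool_call_reply = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_balance",
                         "arguments": '{"account_id": "123"}'},
        }
    ],
}

def emitted_tool_call(message: dict) -> bool:
    """True if the assistant message carries a structured tool call."""
    return bool(message.get("tool_calls"))

print(emitted_tool_call(plain_text_reply))  # -> False
print(emitted_tool_call(tool_call_reply))   # -> True
```

Only the second shape gives the agent something to dispatch; the first one, however confident it sounds, is exactly the "plain text instead of a tool call" failure described above.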
Prevention

- Use explicit schemas for every production tool.
  - Prefer Pydantic models over loose Python signatures when inputs matter.
  - Keep fields strings/numbers/bools unless you have a strong reason not to.
- Use a tool-capable chat model and make sure tools are actually bound to it.
  - Don't assume an agent wrapper will fix an unsupported model.
  - Treat `.bind_tools()` as required setup for direct model calls, not optional sugar.
- Pin package versions together.
  - Upgrade `langchain`, provider integrations, and SDKs as a set.
  - Add a small smoke test that asserts an agent can call one real tool before merging changes.
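That pre-merge smoke test can start as small as calling one real tool function directly, with no model in the loop. The plain function below is a hypothetical stand-in for your `@tool`-decorated implementation:

```python
# Hypothetical smoke test: exercise the underlying tool function directly,
# so CI fails fast on schema or logic breakage before any LLM is involved.
def get_balance(account_id: str) -> str:
    return f"Balance for {account_id}: $1,250"

def test_get_balance_smoke():
    result = get_balance("123")
    assert isinstance(result, str)   # output must stay serializable
    assert "123" in result           # echoes the requested account

test_get_balance_smoke()
print("smoke test passed")
```

Once that passes, extend it to one end-to-end agent invocation against a cheap model so binding and dispatch are covered too.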
If you’re debugging this in a bank or insurance codebase, start with the model/tool binding first. In practice, that’s where most “tool calling failure during development” incidents come from.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit