# How to Fix 'tool calling failure in production' in LangChain (TypeScript)

## What this error means

A tool calling failure in production usually means your model returned something LangChain could not parse into a valid tool call, or the tool execution path broke after the model chose a tool. In TypeScript, this shows up most often when using `bindTools()`, tool-calling agents, or agent executors with OpenAI-compatible chat models.

The failure is usually not "LangChain is broken." It's one of these: the model didn't support tool calls, the tool schema was invalid, the prompt pushed the model into free-text output, or your runtime rejected the tool arguments.
## The Most Common Cause

The #1 cause is using a model that does not reliably support structured tool calling, or binding tools incorrectly and then expecting `AIMessage.tool_calls` to exist.

Here's the wrong pattern I see most often:
| Broken | Fixed |
|---|---|
| Model returns plain text | Model returns structured tool calls |
| Tool is defined, but never bound correctly | Tool is bound with `bindTools()` |
| Agent expects tools, but model can’t emit them | Use a tool-capable chat model |
```typescript
// ❌ Broken
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getBalance = tool(
  async ({ accountId }) => {
    return `Balance for ${accountId} is $1200`;
  },
  {
    name: "get_balance",
    description: "Get customer balance",
    schema: z.object({
      accountId: z.string(),
    }),
  }
);

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo", // often unreliable for strict tool calling
});

// The tool is defined but never bound, so the model answers in free text.
const result = await llm.invoke([
  {
    role: "user",
    content: "Check balance for account 123",
  },
]);

console.log(result.tool_calls); // undefined or [] (no structured tool calls)
```
```typescript
// ✅ Fixed
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getBalance = tool(
  async ({ accountId }) => {
    return `Balance for ${accountId} is $1200`;
  },
  {
    name: "get_balance",
    description: "Get customer balance",
    schema: z.object({
      accountId: z.string(),
    }),
  }
);

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Bind the tool so the model can emit structured tool calls.
const llmWithTools = llm.bindTools([getBalance]);

const result = await llmWithTools.invoke([
  new HumanMessage("Check balance for account 123"),
]);

console.log(result.tool_calls); // [{ name: 'get_balance', args: ... }]
```
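Once the model emits tool calls, you still have to execute them and handle the case where it answered in free text instead. Below is a dependency-free sketch of that dispatch step; the `ToolCall` shape mirrors LangChain's `AIMessage.tool_calls` entries, and the handler map and names are illustrative, not LangChain APIs:

```typescript
// Minimal dispatch: route parsed tool calls to local handlers.
// The { name, args } shape mirrors AIMessage.tool_calls entries,
// but this sketch has no LangChain dependency.
type ToolCall = { name: string; args: Record<string, unknown> };

const handlers: Record<string, (args: any) => Promise<string>> = {
  get_balance: async ({ accountId }) => `Balance for ${accountId} is $1200`,
};

async function runToolCalls(
  toolCalls: ToolCall[] | undefined
): Promise<string[]> {
  if (!toolCalls || toolCalls.length === 0) {
    // The model answered in free text: surface that instead of crashing
    // later with "Cannot read properties of undefined".
    throw new Error("Model returned no tool calls");
  }
  const results: string[] = [];
  for (const tc of toolCalls) {
    const handler = handlers[tc.name];
    if (!handler) throw new Error(`Unknown tool: ${tc.name}`);
    results.push(await handler(tc.args));
  }
  return results;
}
```

Failing fast on an empty `tool_calls` array turns a vague downstream `TypeError` into an explicit, loggable error at the point where the model misbehaved.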
If you are using an agent, make sure the agent type matches the model capability. Common runtime errors look like:

- `Error: Model does not support function calling`
- `TypeError: Cannot read properties of undefined (reading 'tool_calls')`
- `BadRequestError: Invalid value for 'tools'`
## Other Possible Causes

### 1) Your Zod schema does not match what the model sends

If your schema is too strict, LangChain will reject arguments that are close but not exact.
```typescript
// ❌ Broken
schema: z.object({
  amount: z.number(),
});

// Model sends:
// { amount: "100" }

// ✅ Fixed
schema: z.object({
  amount: z.coerce.number(),
});
```
This matters a lot in production because models frequently emit strings for numeric fields.
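To see why coercion helps, here is a dependency-free sketch of the difference between a strict check and what `z.coerce.number()` roughly does (run `Number()` on the input before validating). The function names are illustrative, not Zod APIs:

```typescript
// Strict: reject anything that is not already a number,
// analogous to z.number() on a string input.
function strictNumber(input: unknown): number {
  if (typeof input !== "number") {
    throw new Error("Expected number");
  }
  return input;
}

// Coerced: convert first, then validate, roughly what
// z.coerce.number() does via Number(input).
function coercedNumber(input: unknown): number {
  const n = Number(input);
  if (Number.isNaN(n)) {
    throw new Error("Not coercible to a number");
  }
  return n;
}
```

The coerced version accepts `"100"` from the model and still rejects genuinely malformed input like `"abc"`, which is usually the right trade-off for model-generated arguments.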
### 2) You forgot to pass tools into the agent/runtime

Some setups define tools but never attach them to the runnable chain or executor.
```typescript
// ❌ Broken
const chain = prompt.pipe(llm);
await chain.invoke({ input: "Refund order #123" });

// ✅ Fixed
const chain = prompt.pipe(llm.bindTools([refundOrder]));
await chain.invoke({ input: "Refund order #123" });
```
If you're using `createOpenAIToolsAgent`, verify both the prompt and tools are wired in:

```typescript
const agent = await createOpenAIToolsAgent({
  llm,
  tools: [refundOrder],
  prompt,
});
```
### 3) Your tool name or description is ambiguous
Models pick tools based on names and descriptions. If two tools overlap heavily, you get wrong routing or no call at all.
```typescript
// ❌ Weak descriptions
name: "handle",
description: "Do stuff",

// ✅ Specific descriptions
name: "lookup_policy_status",
description:
  "Fetch policy status by policy number. Use only when the user asks about policy state.",
```
Keep names stable and descriptions precise. In regulated domains, vague tools cause bad routing fast.
### 4) Your provider adapter does not fully support LangChain's tool format
This happens with some OpenAI-compatible endpoints, local models, or older gateway versions.
Example config issue:

```typescript
const llm = new ChatOpenAI({
  modelName: "llama3.1",
  configuration: {
    baseURL: process.env.OPENAI_COMPAT_URL,
  },
});
```

If that endpoint doesn't implement OpenAI-style `tools` and `tool_choice`, LangChain will fail when it tries to send them.
Typical symptoms:

- `400 Bad Request`
- `unsupported parameter tools`
- a "tool calling failure in production" surfaced by your own error handling
- an empty assistant response with no `tool_calls`
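One way to make these symptoms actionable is a small classifier over the error message, so monitoring can separate endpoints that reject tools outright from schema problems. This is a hypothetical helper; the substrings it matches are examples, not a complete list of what providers return:

```typescript
// Hypothetical helper: bucket a provider/runtime error so alerts can
// distinguish "this endpoint cannot do tools" from "my schema is wrong".
type FailureKind = "unsupported-tools" | "schema" | "unknown";

function classifyToolFailure(err: Error): FailureKind {
  const msg = err.message.toLowerCase();
  // Endpoint rejected the tools/tool_choice parameters entirely.
  if (msg.includes("unsupported parameter") || msg.includes("does not support")) {
    return "unsupported-tools";
  }
  // Arguments came back but failed parsing or validation.
  if (msg.includes("parsing") || msg.includes("invalid value") || msg.includes("schema")) {
    return "schema";
  }
  return "unknown";
}
```

Routing these buckets to separate alerts means a gateway upgrade that silently drops `tools` support looks different in your dashboards from a drifting argument shape.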
## How to Debug It

- **Log the raw AI message before execution.** Check whether you got a normal text response or a structured tool call.

```typescript
const msg = await llmWithTools.invoke([new HumanMessage("...")]);
console.log(JSON.stringify(msg, null, 2));
```

- **Verify the exact error class.** Look for:
  - `ToolInputParsingException`
  - `BadRequestError`
  - `OutputParserException`
  - provider-specific HTTP errors

  The class tells you whether this is schema validation, provider rejection, or agent wiring.

- **Reduce to one tool and one prompt.** Remove extra tools and long system prompts. If it works with one simple tool, your issue is likely routing ambiguity or prompt interference.

- **Test schema coercion.** Temporarily relax strict fields:

```typescript
schema: z.object({
  accountId: z.coerce.string(),
  amount: z.coerce.number().optional(),
});
```

If that fixes it, your production inputs are drifting from your expected shape.
## Prevention

- Use models that explicitly support tool calling in your provider and version.
- Keep Zod schemas tolerant where appropriate:
  - use `z.coerce.number()`
  - use `.optional()` for fields models may omit
- Add an integration test that asserts:
  - `tool_calls.length > 0`
  - the parsed args validate against your schema
- Log provider responses in staging so you can catch malformed arguments, unsupported parameters, and missing `tool_calls` before production traffic sees them.
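The integration check above can be reduced to a small, framework-agnostic assertion. The `AIMessageLike` shape mirrors LangChain's `AIMessage`, and `validate` stands in for your Zod schema's `safeParse`; both names are illustrative:

```typescript
// Sketch of the integration assertion: given a model response shaped
// like an AIMessage, require at least one tool call whose args pass
// your validator.
type AIMessageLike = { tool_calls?: { name: string; args: unknown }[] };

function assertToolCall(
  msg: AIMessageLike,
  validate: (args: unknown) => boolean
): void {
  const calls = msg.tool_calls ?? [];
  if (calls.length === 0) {
    throw new Error("Expected at least one tool call");
  }
  if (!calls.every((c) => validate(c.args))) {
    throw new Error("Tool arguments failed schema validation");
  }
}
```

Run this against a recorded or live model response in CI so a provider or model change that stops emitting tool calls fails a test before it fails in production.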
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.