How to Fix 'tool calling failure' in LangGraph (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
tool-calling-failure · langgraph · typescript

What “tool calling failure” means

In LangGraph, tool calling failure usually means the model produced an assistant message that looked like a tool call, but LangGraph could not execute it. This typically happens when the message format is wrong, the tool schema does not match what the model sent, or your graph routing is sending non-tool messages into a tool node.

The error often shows up in agent loops built with StateGraph, ToolNode, and ChatOpenAI.bindTools() when the assistant response contains malformed tool_calls or the graph expects a tool call but gets plain text instead.

The Most Common Cause

The #1 cause is this: you manually append messages or return the wrong message shape, so LangGraph never receives a valid AIMessage with proper tool_calls.

Here’s the broken pattern:

Broken | Fixed
Manually constructing messages with plain objects | Returning real LangChain message classes
Passing { role: "assistant", content: ... } into state | Using AIMessage, HumanMessage, ToolMessage
Forgetting to forward tool_calls | Letting the model produce structured tool calls
// BROKEN
import { StateGraph, START, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const tools = [/* your tools */];
const toolNode = new ToolNode(tools);

async function agentNode(state: any) {
  const response = await llm.invoke(state.messages);

  // BAD: converting to a plain object drops tool_calls metadata
  return {
    messages: [
      ...state.messages,
      { role: "assistant", content: response.content }, // no AIMessage
    ],
  };
}
// FIXED
import { AIMessage, HumanMessage } from "@langchain/core/messages";
import { StateGraph, START, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

const tools = [/* your tools */];
const llm = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools(tools);
const toolNode = new ToolNode(tools);

async function agentNode(state: any) {
  const response = await llm.invoke(state.messages);

  // GOOD: preserve the full AIMessage including tool_calls
  return {
    messages: [...state.messages, response],
  };
}

If you are using a router like:

function shouldContinue(state: any) {
  const last = state.messages[state.messages.length - 1];
  return last.tool_calls?.length ? "tools" : END;
}

then last must be an actual AIMessage. A plain object with only content has no tool_calls, so the router falls through to END and the tool node is never reached, even when the model intended a tool call.
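A more defensive version of that router makes the check explicit. The sketch below is self-contained plain TypeScript: the Msg type is a stand-in for LangChain's message classes, and in real code you would check with isAIMessage from "@langchain/core/messages" instead of a plain type field.

```typescript
// Minimal stand-ins for LangChain's message classes, so the routing
// logic is visible on its own; real code would use isAIMessage from
// "@langchain/core/messages" instead of a plain "type" field.
type ToolCall = { name: string; args: Record<string, unknown>; id?: string };
type Msg = {
  type: "human" | "ai" | "tool";
  content: string;
  tool_calls?: ToolCall[];
};

const END = "__end__"; // LangGraph exports END as a sentinel constant

// Route to the tool node only when the last message is an AI message
// that actually carries at least one structured tool call.
function shouldContinue(state: { messages: Msg[] }): string {
  const last = state.messages[state.messages.length - 1];
  if (!last || last.type !== "ai") return END; // plain text or empty state
  return last.tool_calls?.length ? "tools" : END;
}
```

With this guard, a plain { role, content } object that lost its tool_calls routes to END instead of crashing inside the tool node.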

Other Possible Causes

1. Tool name mismatch

If the model emits get_balance but your tool is registered as checkBalance, LangGraph cannot resolve it.

// BROKEN
const tools = [
  tool(async () => "100", {
    name: "checkBalance",
    description: "Get balance",
  }),
];

// Model may call "get_balance" if prompted badly or schema differs.

Fix by keeping names exact and stable.

// FIXED
const tools = [
  tool(async () => "100", {
    name: "get_balance",
    description: "Get account balance",
  }),
];
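One way to catch a name mismatch early is a startup check that every tool-call name the model emits resolves to a registered tool. The sketch below is plain TypeScript; registeredTools and findUnresolvedCalls are illustrative helpers, not LangGraph APIs.

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };

// Names actually registered with ToolNode / bindTools().
const registeredTools = new Set(["get_balance", "transfer_funds"]);

// Return every emitted tool-call name that has no registered handler.
function findUnresolvedCalls(calls: ToolCall[]): string[] {
  return calls
    .map((c) => c.name)
    .filter((name) => !registeredTools.has(name));
}

const calls: ToolCall[] = [
  { name: "get_balance", args: { account_id: "123" } },
  { name: "checkBalance", args: {} }, // stale camelCase name
];
console.log(findUnresolvedCalls(calls)); // logs the one unresolved name: checkBalance
```

Running this against the model's tool_calls before execution turns a confusing runtime failure into a clear "unknown tool: checkBalance" message.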

2. Missing .bindTools() on the chat model

If you forget to bind tools, the model may answer in natural language instead of producing structured calls.

// BROKEN
const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
// no bindTools(tools)
// FIXED
const llm = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools(tools);

Without binding, you can end up with errors like:

  • No tool calls found in AIMessage
  • Tool calling failure
  • Cannot read properties of undefined (reading 'tool_calls')

3. Invalid tool argument schema

If your Zod schema says a field is required but the model sends something else, execution fails inside the tool node.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

const tools = [
  tool(
    async ({ accountId }: { accountId: string }) => {
      return `Balance for ${accountId}`;
    },
    {
      name: "get_balance",
      schema: z.object({
        accountId: z.string(),
      }),
    }
  ),
];

If the model sends:

{ "account_id": "123" }

Zod rejects the arguments because accountId is missing, and the tool node throws before your function ever runs. Keep field names consistent between the schema and your prompts, and prefer flat, simple schemas.
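The failure mode is easy to see with a hand-rolled key check. This sketch skips Zod entirely to keep it self-contained; validateArgs and expectedKeys are hypothetical names, not part of any LangChain API.

```typescript
// Expected argument keys, as declared in the tool's schema.
const expectedKeys = ["accountId"];

// Report which required keys are missing and which sent keys are unknown.
function validateArgs(args: Record<string, unknown>) {
  const sent = Object.keys(args);
  return {
    missing: expectedKeys.filter((k) => !(k in args)),
    unknown: sent.filter((k) => !expectedKeys.includes(k)),
  };
}

// The model sent snake_case but the schema wants camelCase:
const report = validateArgs({ account_id: "123" });
console.log(report); // missing: accountId, unknown: account_id
```

The value "123" is fine; the mismatch is purely in the key name, which is exactly what a strict Zod object schema rejects.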

4. Wrong state shape in your graph

LangGraph expects your state to carry messages in a predictable structure. If you store them under another key or mutate them incorrectly, routing breaks.

// BROKEN
type State = {
  chatHistory: any[];
};
// FIXED
type State = {
  messages: any[];
};

If your node returns { message: ... } instead of { messages: ... }, your graph may silently lose history and then fail when it reaches the tool node.
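To keep every node on the same shape, it can help to funnel all updates through one typed helper, so a stray { message: ... } fails to compile. A minimal sketch with a local message type (real code would use BaseMessage[] and LangGraph's built-in MessagesAnnotation reducer instead of appendMessages, which is a hypothetical helper):

```typescript
type Msg = { type: "human" | "ai" | "tool"; content: string };
type State = { messages: Msg[] };

// All nodes return updates through this helper, so the key is always
// "messages" and history is appended rather than replaced.
function appendMessages(state: State, ...updates: Msg[]): State {
  return { messages: [...state.messages, ...updates] };
}

const s0: State = { messages: [{ type: "human", content: "hi" }] };
const s1 = appendMessages(s0, { type: "ai", content: "hello" });
console.log(s1.messages.length); // → 2
```

Because the helper's return type is State, TypeScript rejects any node that tries to write history under a different key.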

How to Debug It

  1. Print the last message before routing

    • Check whether it is an actual AIMessage.
    • Verify it has a non-empty tool_calls array.
    console.log(JSON.stringify(state.messages.at(-1), null, 2));
    
  2. Inspect the raw LLM output

    • Log what comes back from .invoke().
    • If you only see text content and no structured tool calls, your model is not bound correctly.
    const response = await llm.invoke(state.messages);
    console.log(response);
    
  3. Verify tool registration

    • Make sure every tool passed to ToolNode has the same name used by .bindTools().
    • Check for duplicate names.
  4. Validate your graph edges

    • Ensure your conditional edge routes to "tools" only when there are valid calls.
    • If you route every assistant message into tools, you’ll hit this error immediately.
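The debugging steps above can be condensed into one logging helper you call just before routing. Again a plain TypeScript sketch with a minimal message shape; describeLastMessage is illustrative, not a LangGraph API.

```typescript
type ToolCall = { name: string };
type Msg = { type: string; content: string; tool_calls?: ToolCall[] };

// Summarize the message the router is about to inspect: its type,
// how many tool calls it carries, and which tool names it references.
function describeLastMessage(messages: Msg[]): string {
  const last = messages[messages.length - 1];
  if (!last) return "no messages in state";
  const names = last.tool_calls?.map((c) => c.name) ?? [];
  return `type=${last.type} tool_calls=${names.length} [${names.join(", ")}]`;
}

console.log(
  describeLastMessage([
    { type: "ai", content: "", tool_calls: [{ name: "get_balance" }] },
  ])
); // → type=ai tool_calls=1 [get_balance]
```

Seeing "type=ai tool_calls=0" in the log immediately tells you the model answered in plain text and the tools were never bound, before ToolNode gets a chance to fail.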

Prevention

  • Always use LangChain message classes:

    • HumanMessage
    • AIMessage
    • ToolMessage
  • Bind tools on the model before invoking:

    const llm = new ChatOpenAI({ model }).bindTools(tools);
    
  • Keep one canonical state shape for all nodes:

    type State = { messages: BaseMessage[] };
    
  • Add a small assertion before executing tools:

    const last = state.messages.at(-1);
    if (!last || last.getType() !== "ai" || !last.tool_calls?.length) {
      throw new Error("Expected AIMessage with tool_calls");
    }
    

If you see tool calling failure in LangGraph TypeScript, start by checking message types and .bindTools(). In most real projects, one of those two is broken before anything else.


By Cyprian Aarons, AI Consultant at Topiax.