# How to Fix 'duplicate tool calls during development' in LangGraph (Python)
When LangGraph throws duplicate tool calls during development, it usually means the same assistant/tool request got executed more than once in a single conversation turn. In practice, this shows up when you re-run a graph node, replay state incorrectly, or let both your app code and LangGraph try to execute the same tool call.
This is common during local development because hot reload, manual retries, and state mutation make duplicate execution easy to trigger. The fix is usually not in the tool itself; it’s in how you manage messages, ToolNode, and graph state transitions.
## The Most Common Cause
The #1 cause is appending the model response twice or reusing the same AIMessage with tool_calls across multiple graph invocations.
LangGraph expects tool calls to be consumed once by a ToolNode. If you keep passing the same assistant message back into the graph, you can get errors like:
- `ValueError: Duplicate tool calls detected`
- `InvalidUpdateError: Expected one tool call execution per AIMessage`
- repeated tool execution from the same `AIMessage.tool_calls`
### Broken pattern vs fixed pattern
| Broken | Fixed |
|---|---|
| Reuses the same messages list and appends the AI response manually | Lets LangGraph own message updates |
| Calls tools outside the graph and again inside ToolNode | Uses one execution path only |
| Replays stale state on every request | Passes fresh input per turn |
```python
# WRONG
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langchain_core.messages import HumanMessage

def call_model(state):
    response = llm.invoke(state["messages"])
    # BAD: manually mutating messages can cause duplicates on rerun
    state["messages"].append(response)
    return state

def run_tool(state):
    # ToolNode will also execute tool_calls from the same AIMessage
    return {"messages": tool_node.invoke(state["messages"])}

builder = StateGraph(dict)
builder.add_node("model", call_model)
builder.add_node("tools", run_tool)
builder.set_entry_point("model")
builder.add_edge("model", "tools")
builder.add_edge("tools", END)

graph = builder.compile()
state = {"messages": [HumanMessage(content="Get account balance")]}
graph.invoke(state)
```
```python
# RIGHT
from typing import TypedDict, Annotated

from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]

def call_model(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

tool_node = ToolNode(tools)

builder = StateGraph(State)
builder.add_node("model", call_model)
builder.add_node("tools", tool_node)
builder.set_entry_point("model")
builder.add_conditional_edges("model", should_call_tools)  # route only when needed
builder.add_edge("tools", "model")

graph = builder.compile()
```
The important part is this: return only new messages from each node, and let add_messages merge them. Don’t mutate the existing list in place.
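To see why returning only new messages is safe, here is a simplified stand-in for the `add_messages` reducer. This is a sketch, not LangGraph's implementation: the real reducer also matches messages by their `id` field and can replace them, while this version only shows the append-with-dedupe idea.

```python
# Simplified stand-in for LangGraph's add_messages reducer (assumption:
# the real reducer matches on the message's id field; this sketch
# dedupes by object identity just to illustrate the merge).
def merge_messages(existing: list, new: list) -> list:
    seen = {id(m) for m in existing}
    return existing + [m for m in new if id(m) not in seen]

history = [{"role": "human", "content": "Get account balance"}]
ai_reply = {"role": "ai", "tool_calls": [{"name": "get_balance"}]}

# A node returns only its new message...
state = merge_messages(history, [ai_reply])
# ...so accidentally merging the same message again is a no-op.
state = merge_messages(state, [ai_reply])
print(len(state))  # 2, not 3
```

Because the reducer owns the merge, a node that runs twice with the same output cannot double-append; the broken pattern's `state["messages"].append(response)` has no such guard.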
## Other Possible Causes
### 1) Your UI or API retried the request
If your frontend sends the same payload twice, LangGraph will happily process it twice. If that payload contains an unresolved assistant message with tool_calls, you’ll see duplicate execution.
```python
# Example: accidental double submit
if request.method == "POST":
    result1 = graph.invoke(payload)
    result2 = graph.invoke(payload)  # duplicate run
```
Fix it by making requests idempotent:
```python
request_id = payload["request_id"]
if already_processed(request_id):
    return cached_result(request_id)

result = graph.invoke(payload)
store_result(request_id, result)
```
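`already_processed`, `cached_result`, and `store_result` are placeholders for your own storage layer. A minimal in-memory version of the same guard might look like this (use Redis or a database in production so the cache survives restarts):

```python
# In-memory idempotency cache; swap for Redis/DB in production.
_results: dict[str, dict] = {}

def invoke_once(request_id: str, payload: dict) -> dict:
    if request_id in _results:
        return _results[request_id]   # duplicate submit: reuse the result
    result = {"echo": payload}        # stand-in for graph.invoke(payload)
    _results[request_id] = result
    return result

first = invoke_once("req-1", {"messages": ["Get account balance"]})
second = invoke_once("req-1", {"messages": ["Get account balance"]})  # cached
print(first is second)  # True: the graph ran only once
```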
### 2) You are executing tools both manually and with ToolNode
This happens when people call a Python function directly after seeing tool_calls, then also route through LangGraph’s built-in tool executor.
```python
# WRONG
ai_msg = llm.invoke(messages)

for tc in ai_msg.tool_calls:
    result = my_tool(**tc["args"])  # manual execution

# later...
tool_results = tool_node.invoke([ai_msg])  # executes again
```
Use one path only. If you use LangGraph, prefer ToolNode for consistency and traceability.
### 3) Your conditional routing sends the graph back to tools twice
A bad router can loop into the tools node even after all tool calls are already resolved.
```python
def should_call_tools(state):
    last_msg = state["messages"][-1]
    if getattr(last_msg, "tool_calls", None):
        return "tools"
    return "model"
```
That looks fine until your model keeps emitting stale tool calls because you’re not clearing old messages or your state reducer is wrong. Make sure routing depends on the latest assistant message only, and that completed tool results move the conversation forward.
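A safer router decides from the latest assistant message only and terminates when there is nothing left to do, instead of bouncing back into the graph. Here is a sketch with stand-in message objects; in real LangGraph code you would return the `END` constant from `langgraph.graph`:

```python
from dataclasses import dataclass, field

END = "__end__"  # stand-in for langgraph.graph.END

@dataclass
class Msg:  # stand-in for a LangChain message
    role: str
    tool_calls: list = field(default_factory=list)

def should_call_tools(state: dict) -> str:
    last = state["messages"][-1]
    # Only an assistant message with unresolved tool calls goes to tools.
    if last.role == "ai" and last.tool_calls:
        return "tools"
    return END  # nothing left to do: stop instead of looping

print(should_call_tools({"messages": [Msg("ai", [{"name": "get_balance"}])]}))  # tools
print(should_call_tools({"messages": [Msg("tool")]}))  # __end__
```

The key difference from the router above: a resolved turn ends the graph rather than re-entering a node that might re-emit stale tool calls.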
### 4) Hot reload is reusing old in-memory state
During development with Uvicorn reload or notebook cells, your app may preserve a stale messages object across runs. That gives you repeated AIMessage.tool_calls without realizing it.
```python
# BAD: module-level mutable state
conversation_state = {"messages": []}
```
Move per-request conversation state into storage keyed by session/user/request ID. Don’t keep mutable LangGraph state at module scope unless you really mean to share it.
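A minimal sketch of state keyed by session ID (a plain dict here for illustration; in production use external storage so state also survives process restarts and reloads):

```python
# Conversation state keyed by session instead of one shared global.
_sessions: dict[str, dict] = {}

def get_state(session_id: str) -> dict:
    # Each session gets its own message list; nothing leaks between users
    # or across requests that happen to hit the same worker.
    return _sessions.setdefault(session_id, {"messages": []})

get_state("alice")["messages"].append("Get account balance")
print(get_state("alice")["messages"])  # ['Get account balance']
print(get_state("bob")["messages"])    # []
```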
## How to Debug It

- Print the last message before every node
  - Check whether the same `AIMessage` appears twice.
  - Look specifically for repeated `tool_calls`.
- Log node entry/exit
  - Add logs around your model node and `ToolNode`.
  - If you see model -> tools -> tools again without a new model output, your router is wrong.
- Inspect message identity
  - Compare object IDs or timestamps.
  - If the exact same message object is being reused across invocations, that's your bug.
- Temporarily disable hot reload and retries
  - Run once without auto-reload.
  - If the error disappears, you're dealing with duplicate submission or stale in-memory state.
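The message-identity check can be as simple as scanning the history for the same object appearing twice (a sketch; the dict messages here stand in for real `AIMessage` objects):

```python
def find_reused_objects(messages: list) -> list:
    """Return messages that occur more than once as the *same* object."""
    seen, dupes = set(), []
    for m in messages:
        if id(m) in seen:
            dupes.append(m)
        seen.add(id(m))
    return dupes

ai_msg = {"role": "ai", "tool_calls": [{"name": "get_balance"}]}
history = [{"role": "human"}, ai_msg, {"role": "tool"}, ai_msg]  # reused object
print(len(find_reused_objects(history)))  # 1: ai_msg was appended twice
```

Two distinct messages with identical content are fine; the same object appearing twice is the signature of a replayed or double-appended message.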
## Prevention

- Use `Annotated[list[BaseMessage], add_messages]` for message merging.
- Return only new messages from nodes; never mutate shared lists in place.
- Make request handling idempotent if your app can retry submissions.
- Keep one source of truth for tool execution: either LangGraph's `ToolNode` or your own executor, not both.
If you want a quick rule of thumb: whenever you see duplicate tool calls, look for duplicated state updates first. In LangGraph, this error is almost always about message lifecycle, not about the actual tool implementation.
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.