# How to Fix 'duplicate tool calls' in LangGraph (Python)
## What the error means

"duplicate tool calls" in LangGraph usually means the model produced the same tool invocation more than once in a single agent turn, and your graph tried to execute it twice. In practice, this shows up when an `AIMessage` carries repeated `tool_calls`, or when your graph state gets replayed and the same assistant message is processed again.

You'll usually hit this after adding tools to a `StateGraph`, especially when using `ChatOpenAI.bind_tools(...)`, custom reducers, retries, or manual message handling.
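To make the failure concrete, here is a dependency-free sketch of what a duplicated turn looks like. Tool calls are shown as plain dicts mirroring LangChain's tool-call shape (`name`, `args`, `id`), and `find_duplicate_tool_calls` is a hypothetical helper for illustration, not a LangGraph API:

```python
def find_duplicate_tool_calls(tool_calls: list[dict]) -> list[tuple]:
    """Return (name, args) pairs that appear more than once in one AI turn."""
    seen: dict[tuple, int] = {}
    for call in tool_calls:
        # Use (name, sorted args) as the identity of a tool invocation
        key = (call["name"], tuple(sorted(call["args"].items())))
        seen[key] = seen.get(key, 0) + 1
    return [key for key, count in seen.items() if count > 1]

# One assistant turn that asks for the same lookup twice:
calls = [
    {"name": "get_balance", "args": {"account_id": "123"}, "id": "call_1"},
    {"name": "get_balance", "args": {"account_id": "123"}, "id": "call_2"},
]
print(find_duplicate_tool_calls(calls))  # [('get_balance', (('account_id', '123'),))]
```

If a list like `calls` above reaches your tool-executing node twice, or contains repeats like this, you get exactly the double execution described here.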
## The Most Common Cause
The #1 cause is re-processing the same assistant message without appending tool results correctly.
A common broken pattern is: call the model, inspect `tool_calls`, execute tools manually, then invoke the model again with the same `messages` list. That can cause LangGraph to see the same `AIMessage.tool_calls` twice.
### Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Reuses the same `messages` list and replays an `AIMessage` with tool calls | Appends `ToolMessage` results back into state before the next model step |
| Manually loops without clearing/advancing state | Lets LangGraph route from assistant → tools → assistant |
| Can trigger errors like `ValueError: duplicate tool calls detected` or repeated tool execution | Each tool call is consumed exactly once |
```python
# BROKEN: replays the same AIMessage with tool_calls
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import MessagesState

@tool
def get_balance(account_id: str) -> str:
    """Return the balance for an account."""
    return f"Balance for {account_id}: $100"

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_balance])

def assistant_node(state: MessagesState):
    # This returns an AIMessage that may contain tool_calls
    return {"messages": [llm.invoke(state["messages"])]}

# Somewhere else:
messages = [{"role": "user", "content": "Check account 123"}]
ai_msg = llm.invoke(messages)

# BAD: invoking again with the assistant's tool calls still unanswered
messages.append(ai_msg)
ai_msg_2 = llm.invoke(messages)  # can lead to duplicate tool call handling
```
```python
# FIXED: let LangGraph manage assistant -> tools -> assistant flow
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph import START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

def assistant_node(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

graph = StateGraph(State)
graph.add_node("assistant", assistant_node)
graph.add_node("tools", ToolNode([get_balance]))
graph.add_edge(START, "assistant")
graph.add_conditional_edges("assistant", tools_condition)
graph.add_edge("tools", "assistant")

app = graph.compile()
result = app.invoke({"messages": [{"role": "user", "content": "Check account 123"}]})
```
The important part is that `ToolNode` consumes the model's tool calls and writes `ToolMessage` results back into state. That advances state cleanly instead of replaying the same assistant output.
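The invariant behind this is that every tool call id must be answered by exactly one tool result before the model is invoked again. Here is a framework-free sketch of that check, using plain role dicts instead of real message objects (`unanswered_tool_calls` is a hypothetical helper, not a LangGraph API):

```python
def unanswered_tool_calls(messages: list[dict]) -> list[str]:
    """Return tool call ids that do not have exactly one tool result."""
    requested = [c["id"] for m in messages if m.get("tool_calls") for c in m["tool_calls"]]
    answered = [m["tool_call_id"] for m in messages if m.get("role") == "tool"]
    return [cid for cid in requested if answered.count(cid) != 1]

history = [
    {"role": "user", "content": "Check account 123"},
    {"role": "assistant", "tool_calls": [{"id": "call_1", "name": "get_balance",
                                          "args": {"account_id": "123"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "Balance for 123: $100"},
]
print(unanswered_tool_calls(history))  # [] -> safe to call the model again
```

If you dropped the tool result from `history` and invoked the model again, `call_1` would still be pending, which is precisely the state that triggers duplicate handling.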
## Other Possible Causes
### 1) You are appending messages incorrectly
If you overwrite state instead of using LangGraph's message reducer, you can accidentally keep old `AIMessage.tool_calls` around.
```python
# BAD: replacing messages loses history semantics
state["messages"] = [llm.invoke(state["messages"])]

# GOOD: return an update and let the reducer append
return {"messages": [llm.invoke(state["messages"])]}
```
If you use a custom state schema, make sure your messages field uses:

```python
messages: Annotated[list[AnyMessage], add_messages]
```
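The reason `add_messages` matters is its merge semantics: it appends new messages but replaces a message whose id it has already seen, instead of duplicating it. Below is a simplified, dependency-free mimic of that behavior (the real reducer also handles format coercion and deletions, so treat this as an illustration only):

```python
def add_messages_sketch(existing: list[dict], updates: list[dict]) -> list[dict]:
    """Simplified mimic of LangGraph's add_messages reducer semantics:
    append new messages, replace (never duplicate) ones with a matching id."""
    merged = list(existing)
    index_by_id = {m["id"]: i for i, m in enumerate(merged) if "id" in m}
    for msg in updates:
        if msg.get("id") in index_by_id:
            merged[index_by_id[msg["id"]]] = msg  # same id -> replace in place
        else:
            merged.append(msg)
    return merged

history = [{"id": "a1", "role": "assistant", "content": "draft"}]
history = add_messages_sketch(history, [{"id": "a1", "role": "assistant", "content": "final"}])
print(len(history), history[0]["content"])  # 1 final
```

A naive `list` field without this reducer would hold both copies, and downstream nodes would see the older message's tool calls twice.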
### 2) Your retry logic re-invokes a node that already emitted tool calls
This happens a lot with manual retries around network failures or rate limits.
```python
# BAD: retrying assistant node without advancing state
for _ in range(2):
    ai_msg = llm.invoke(state["messages"])
    if ai_msg.tool_calls:
        break
```
If the first attempt already produced tool calls and you retry with the same input, you may get duplicates. Retry at the request boundary, not by replaying partially processed graph state.
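Retrying at the request boundary means wrapping only the raw model call, so graph state is never replayed. Here is a minimal sketch with a stand-in for `llm.invoke` (both the flaky callable and `invoke_with_retry` are illustrative helpers, not LangGraph APIs):

```python
import time

def invoke_with_retry(call, attempts: int = 3, backoff: float = 0.0):
    """Retry the raw model request itself; never replay partially processed state."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts:
                raise  # out of attempts: surface the error
            time.sleep(backoff * attempt)

# Stand-in for llm.invoke(...): fails once, then succeeds.
state = {"failures_left": 1}
def flaky_model_call():
    if state["failures_left"] > 0:
        state["failures_left"] -= 1
        raise ConnectionError("transient network error")
    return "assistant response"

print(invoke_with_retry(flaky_model_call))  # assistant response
```

Because the retry never touches the message list, a successful attempt produces exactly one assistant message with one set of tool calls.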
### 3) Your conditional routing sends control back to the assistant too early
If you skip the tools node and route straight back to the LLM, it may emit the same call again because nothing has been executed yet.
```python
# BAD routing idea: always loops straight back to the assistant
graph.add_conditional_edges("assistant", lambda s: "assistant")
```
Use `tools_condition` or equivalent logic that sends execution to `ToolNode` first:

```python
graph.add_conditional_edges("assistant", tools_condition)
graph.add_edge("tools", "assistant")
```
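Conceptually, `tools_condition` routes to the tools node when the last message carries tool calls and otherwise ends the turn. A simplified, dependency-free version of that logic (the real function also normalizes several input formats; LangGraph's `END` constant is the string `"__end__"`, used literally here to keep the sketch self-contained):

```python
def route_after_assistant(state: dict) -> str:
    """Route to 'tools' if the last message has pending tool calls, else end."""
    last = state["messages"][-1]
    # Support both message objects (attribute) and plain dicts (key)
    tool_calls = getattr(last, "tool_calls", None) or (
        last.get("tool_calls") if isinstance(last, dict) else None
    )
    return "tools" if tool_calls else "__end__"

state = {"messages": [{"role": "assistant",
                       "tool_calls": [{"id": "call_1", "name": "get_balance",
                                       "args": {"account_id": "123"}}]}]}
print(route_after_assistant(state))  # tools
```

The key property is that control can only return to the assistant after the tools branch has consumed the pending calls, so the same `tool_calls` are never seen twice.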
### 4) The model is generating repeated tool calls in one response
Some models will emit multiple identical calls if your prompt is ambiguous or if tool descriptions overlap.
```python
@tool(description="Get customer profile by id")
def get_profile(customer_id: str) -> str:
    ...

@tool(description="Lookup customer info by id")
def lookup_customer(customer_id: str) -> str:
    ...
```
If both tools do nearly the same thing, the model may call both. Consolidate tools or tighten descriptions so each one has a clear job.
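Exact duplicates within one turn can also be filtered defensively before tools run. This dedupe-by-`(name, args)` helper is an assumption of mine, not an official LangGraph feature, and note its limit: it cannot catch two differently named tools that overlap semantically, which is why consolidating tools remains the real fix:

```python
def dedupe_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Keep only the first occurrence of each (name, args) pair in one AI turn.
    Note: cannot detect semantic overlap between differently named tools."""
    seen: set[tuple] = set()
    unique = []
    for call in tool_calls:
        key = (call["name"], tuple(sorted(call["args"].items())))
        if key not in seen:
            seen.add(key)
            unique.append(call)
    return unique

calls = [
    {"name": "get_profile", "args": {"customer_id": "42"}, "id": "call_1"},
    {"name": "lookup_customer", "args": {"customer_id": "42"}, "id": "call_2"},
    {"name": "get_profile", "args": {"customer_id": "42"}, "id": "call_3"},
]
print([c["id"] for c in dedupe_tool_calls(calls)])  # ['call_1', 'call_2']
```

Notice that `lookup_customer` survives even though it duplicates `get_profile` in spirit; only tightening the tool set fixes that.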
## How to Debug It

- Print every message entering and leaving each node.
  - Look for repeated `AIMessage` objects with identical `tool_calls`.
  - Inspect `message.id`, `message.content`, and `message.tool_calls`.
- Log raw tool call payloads.
  - You want to see whether duplication happens before LangGraph execution or inside your graph.
  - Example:

    ```python
    for m in state["messages"]:
        print(type(m).__name__, getattr(m, "tool_calls", None))
    ```

- Check your state reducer.
  - If your `messages` field does not use `add_messages`, you are probably overwriting history.
  - Confirm your schema matches LangGraph examples exactly.
- Temporarily remove retries and custom routing.
  - Run only: user input → assistant → tools → assistant.
  - If the error disappears, your retry loop or conditional edge is duplicating execution.
## Prevention

- Use LangGraph's standard pattern:
  - the `assistant` node returns an AI message
  - `ToolNode` executes tools
  - a conditional edge routes back to `assistant`
- Keep one source of truth for conversation state.
- Don't manually rebuild message lists unless you really need to.
- Make tools distinct and narrowly scoped; ambiguous tools increase repeated or overlapping calls from the model.
If you want a quick checklist: confirm `add_messages` is your reducer, use `ToolNode`, avoid replaying old `AIMessage.tool_calls`, and don't retry inside a partially completed graph turn. That fixes most cases of duplicate tool calls in LangGraph Python.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.