How to Fix 'duplicate tool calls' in LangChain (TypeScript)
If you’re seeing duplicate tool calls in LangChain TypeScript, the model is emitting the same tool invocation more than once in a single run. In practice, this usually happens when you wire the agent loop incorrectly, mishandle message history, or let the model retry without clearing prior tool-call state.
The error often appears with OpenAI-compatible chat models and LangChain agent executors, especially when you’re using `AIMessage`, `ToolMessage`, `bindTools()`, or a custom loop around `Runnable`/`AgentExecutor`.
The Most Common Cause
The #1 cause is feeding the model’s previous assistant message back into the next turn without also sending the matching `ToolMessage`, or manually appending tool-call messages twice.
LangChain and OpenAI tool calling expect a strict sequence:
- assistant emits a tool call
- your app runs the tool
- you send back exactly one `ToolMessage` for that call
- then the model continues
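That sequence can be sketched as a minimal loop. This is a simplified illustration, not LangChain’s actual types: the message shape and the `callModel` and `runTool` parameters are stand-ins you would wire to your own model and tools.

```typescript
type ToolCall = { id: string; name: string; args: unknown };
type Message =
  | { role: "user"; content: string }
  | { role: "assistant"; content: string; tool_calls?: ToolCall[] }
  | { role: "tool"; tool_call_id: string; content: string };

// Drive the model until it stops asking for tools. Each tool call gets
// exactly one tool message appended before the model is called again.
async function runToolLoop(
  callModel: (history: Message[]) => Promise<Extract<Message, { role: "assistant" }>>,
  runTool: (call: ToolCall) => Promise<string>,
  history: Message[],
): Promise<string> {
  for (;;) {
    const ai = await callModel(history);
    history.push(ai);
    if (!ai.tool_calls?.length) return ai.content; // no tool calls: done
    for (const call of ai.tool_calls) {
      history.push({ role: "tool", tool_call_id: call.id, content: await runTool(call) });
    }
  }
}
```

The key invariant is that every `tool_calls` entry is answered exactly once before the next model call, and the assistant message is never re-appended.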
If you resend the original `AIMessage` with its `tool_calls` field intact, many providers will reject it with something like:
- `400 Bad Request: duplicate tool calls`
- `Invalid request: duplicate tool call IDs`
- LangChain wrapping the provider error in a `BadRequestError`
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Reuses assistant message with tool calls | Replaces it with proper tool result flow |
| Appends AI message twice | Appends exactly one ToolMessage per call |
| Skips matching tool response | Sends ToolMessage before next model call |
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AIMessage } from "@langchain/core/messages";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const getWeather = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get weather for a city",
  schema: z.object({ city: z.string() }),
  func: async ({ city }) => `Weather in ${city}: sunny`,
});

// ❌ Broken: reusing AIMessage with tool_calls in history
const badHistory = [
  new AIMessage({
    content: "",
    tool_calls: [{ id: "call_1", name: "get_weather", args: { city: "London" } }],
  }),
  // Tool result missing here
  new AIMessage({
    content: "",
    tool_calls: [{ id: "call_1", name: "get_weather", args: { city: "London" } }],
  }),
];

await llm.bindTools([getWeather]).invoke(badHistory);
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ToolMessage } from "@langchain/core/messages";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const getWeather = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get weather for a city",
  schema: z.object({ city: z.string() }),
  func: async ({ city }) => `Weather in ${city}: sunny`,
});

const first = await llm.bindTools([getWeather]).invoke("What's the weather in London?");

// ✅ Correct: respond to each tool call once
const messages = [
  first,
  new ToolMessage({
    tool_call_id: first.tool_calls?.[0].id ?? "",
    content: await getWeather.invoke({ city: "London" }),
  }),
];

await llm.bindTools([getWeather]).invoke(messages);
```
The important part is this:
- do not duplicate the same `AIMessage.tool_calls`
- do not skip the `ToolMessage`
- do not invent a second assistant message for the same call
Other Possible Causes
1. You are calling the agent twice in parallel
If two requests share the same conversation state, both can emit identical tool calls.
```typescript
// ❌ Two concurrent invocations on shared state
await Promise.all([
  agentExecutor.invoke({ input, chat_history }),
  agentExecutor.invoke({ input, chat_history }),
]);
```
Fix by serializing per session:
```typescript
// ✅ One request at a time per conversation/session
await agentExecutor.invoke({ input, chat_history });
```
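A lightweight way to enforce this is a per-session promise chain. `runSerialized` below is a hypothetical helper, not a LangChain API; it works with any async task, including an `agentExecutor.invoke` call.

```typescript
// Per-session promise chains: each new task waits for the previous
// task on the same session to finish before it runs.
const sessionQueues = new Map<string, Promise<unknown>>();

function runSerialized<T>(sessionId: string, task: () => Promise<T>): Promise<T> {
  const prev = sessionQueues.get(sessionId) ?? Promise.resolve();
  // Chain after the previous task, swallowing its error so one
  // failed run doesn't block the whole session queue.
  const next = prev.catch(() => {}).then(task);
  sessionQueues.set(sessionId, next);
  return next;
}

// Usage (illustrative): concurrent requests for the same session
// now run one at a time.
// await runSerialized(sessionId, () =>
//   agentExecutor.invoke({ input, chat_history }),
// );
```

Different sessions still run in parallel; only calls sharing a `sessionId` are serialized.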
2. You are mixing manual tool handling with AgentExecutor
If you use AgentExecutor, don’t also manually process intermediate steps unless you know exactly what you’re doing.
```typescript
// ❌ Manual + automatic orchestration together
const result = await agentExecutor.invoke({ input });
for (const step of result.intermediateSteps) {
  // also calling tools again here causes duplicates
}
```
Use one orchestration path only:
```typescript
// ✅ Let AgentExecutor handle the loop
const result = await agentExecutor.invoke({ input });
```
3. Your memory stores raw AI messages with tool calls
Some memory implementations persist full messages including internal tool metadata. On replay, that metadata gets sent again.
```typescript
// ❌ Persisting raw AI messages can replay old tool_calls
memory.chatHistory.push(aiMessage);
```
Store only clean conversational turns if possible:
```typescript
// ✅ Keep normalized history or strip internal fields before persisting
memory.chatHistory.push({
  role: "assistant",
  content: aiMessage.content,
});
```
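If your store only hands you raw messages back, you can also sanitize on the way out. `sanitizeHistory` is a sketch over a simplified message shape, not an actual LangChain utility:

```typescript
type Turn = {
  role: "user" | "assistant" | "tool";
  content: string;
  tool_calls?: { id: string; name: string; args: unknown }[];
  tool_call_id?: string;
};

// Drop tool-result turns entirely and strip tool_calls / tool_call_id
// from the rest, so replayed history contains only plain turns.
function sanitizeHistory(history: Turn[]): Turn[] {
  return history
    .filter((turn) => turn.role !== "tool")
    .map(({ role, content }) => ({ role, content }));
}
```

Run this before every replay so stale tool metadata never reaches the model.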
4. You are retrying after a partial failure without resetting state
A failed run can leave your local state thinking a tool was already requested. Retrying then reuses stale IDs.
```typescript
try {
  await chain.invoke(input);
} catch (e) {
  // ❌ retrying with same mutated message array/state
  await chain.invoke(input);
}
```
Reset state before retrying:
```typescript
try {
  await chain.invoke(input);
} catch (e) {
  resetConversationState();
  await chain.invoke(input);
}
```
How to Debug It
- Log every message going into the model
  - Print roles, content, and especially `tool_calls`.
  - If you see the same `tool_call_id` twice, that’s your bug.
- Check whether you’re using one orchestration layer or two
  - If you have both custom loops and `AgentExecutor`, remove one.
  - In LangChain TypeScript, double orchestration is a common source of duplicate calls.
- Inspect your memory/session store
  - Look for persisted raw `AIMessage` objects.
  - If old assistant messages contain `tool_calls`, strip them before replay.
- Verify each tool call has exactly one matching `ToolMessage`
  - One call ID → one response.
  - Missing responses and duplicated responses both break provider validation.
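These checks are easy to automate. The validator below assumes a simplified message shape (hypothetical; adapt it to your actual message type) and returns every violation it finds:

```typescript
type Msg = {
  role: "user" | "assistant" | "tool";
  tool_calls?: { id: string }[];
  tool_call_id?: string;
};

// Flag duplicate tool_call_ids and tool calls with no matching response.
function validateToolCalls(messages: Msg[]): string[] {
  const problems: string[] = [];
  const seen = new Set<string>();
  const answered = new Set(
    messages
      .filter((m) => m.role === "tool" && m.tool_call_id)
      .map((m) => m.tool_call_id as string),
  );
  for (const m of messages) {
    for (const call of m.tool_calls ?? []) {
      if (seen.has(call.id)) problems.push(`duplicate tool_call_id: ${call.id}`);
      seen.add(call.id);
      if (!answered.has(call.id)) problems.push(`orphaned tool call: ${call.id}`);
    }
  }
  return problems;
}
```

Calling this before every model invocation turns a confusing provider 400 into a precise local error.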
Prevention
- Keep a single source of truth for conversation state.
- Use either manual tool execution with explicit message management, or LangChain’s agent loop, but not both.
- Add logging around outbound messages, generated `tool_calls`, and returned `ToolMessage`s.
- In production, validate message arrays before every invoke:
  - no duplicate assistant messages with identical `tool_call_id`
  - no orphaned tool calls without responses
If you’re building agents for regulated environments like banking or insurance, this kind of bug matters because it creates flaky behavior under load. Treat conversation state as an append-only event log, and make sure each tool call is acknowledged once and only once.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.