How to Fix 'duplicate tool calls during development' in LangGraph (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

If you’re seeing duplicate tool calls during development in LangGraph TypeScript, it usually means the same assistant/tool request is being executed more than once. In practice, this shows up when you re-run a graph node, retry a request, or let the model emit tool calls while your app also manually invokes the tool.

The error often appears during local development because hot reload, React re-renders, or retry logic can make a single user action look like multiple executions.

## The Most Common Cause

The #1 cause is double execution of the same graph step. In LangGraph, this usually happens when you call graph.invoke() more than once for the same input, or when a node both returns tool calls and also directly executes them.

Here’s the broken pattern:

**Broken:**

```ts
import { StateGraph } from "@langchain/langgraph";

const graph = new StateGraph({/* ... */}).compile();

async function handleRequest(input: string) {
  const result = await graph.invoke({ messages: [input] });
  const secondResult = await graph.invoke({ messages: [input] }); // duplicate execution
  return secondResult;
}
```

**Fixed:**

```ts
import { StateGraph } from "@langchain/langgraph";

const graph = new StateGraph({/* ... */}).compile();

async function handleRequest(input: string) {
  const result = await graph.invoke(
    { messages: [input] },
    { configurable: { thread_id: "req-123" } }
  );

  return result;
}
```
A more realistic bug is mixing model tool-calling with manual tool execution:

**Broken:**

```ts
const llmWithTools = llm.bindTools([getCustomerBalance]);

const node = async (state: any) => {
  const aiMsg = await llmWithTools.invoke(state.messages);

  // Wrong: tool already requested by the model
  if (aiMsg.tool_calls?.length) {
    const toolResult = await getCustomerBalance.invoke({
      accountId: "123"
    });
    return { messages: [...state.messages, aiMsg, toolResult] };
  }

  return { messages: [...state.messages, aiMsg] };
};
```

**Fixed:**

```ts
const llmWithTools = llm.bindTools([getCustomerBalance]);

const node = async (state: any) => {
  const aiMsg = await llmWithTools.invoke(state.messages);

  // Right: let LangGraph route to the tool node
  return { messages: [...state.messages, aiMsg] };
};
```

If you’re using LangGraph’s built-in tool routing, the model should emit `AIMessage.tool_calls`, and a separate tool node should execute them exactly once. Don’t manually call the same tool in the LLM node unless you fully control deduplication.
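The routing decision itself is simple, which is why it belongs in a conditional edge rather than inside the LLM node. Here is a minimal sketch in plain TypeScript — the types and the `routeAfterModel` name are illustrative, not LangGraph's actual API (LangGraph's prebuilt helpers cover this in real graphs):

```typescript
// Illustrative types only; real LangGraph messages carry more fields.
type ToolCall = { id: string; name: string; args: Record<string, unknown> };
type AssistantMessage = { tool_calls?: ToolCall[] };

// Conditional-edge logic: if the model requested tools, go to the tool node;
// otherwise the turn is finished. The tool node then runs each call once.
function routeAfterModel(lastMessage: AssistantMessage): "tools" | "__end__" {
  return lastMessage.tool_calls?.length ? "tools" : "__end__";
}
```

Because only the router inspects `tool_calls`, no other code path has a reason to execute the tool a second time.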

## Other Possible Causes

### 1. React Strict Mode running effects twice

In development, React Strict Mode intentionally double-invokes some lifecycle paths. If your graph call sits inside `useEffect`, it may run twice.

```tsx
useEffect(() => {
  graph.invoke({ messages });
}, []);
```

Fix it by guarding with a ref or moving execution behind an explicit user action.

```tsx
const ran = useRef(false);

useEffect(() => {
  if (ran.current) return;
  ran.current = true;
  graph.invoke({ messages });
}, []);
```

### 2. Retry middleware resubmitting the same message

If you wrap the model or graph with retries, a failed network call can replay the same assistant turn and its tool calls.

```ts
const result = await retry(async () => {
  return graph.invoke(input);
});
```

Use idempotent state updates and make sure retries don’t re-execute side-effectful tools like `createTicket` or `chargeCard`.
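One way to make side-effectful tools safe under retries is a request-scoped idempotency guard. A minimal sketch, assuming an in-memory `Set` stands in for durable storage (`runOnce` is a hypothetical helper, not a LangGraph API):

```typescript
// Tracks which tool calls have already produced their side effect.
// In a real app this would live in persistent storage, not process memory.
const executed = new Set<string>();

// Runs fn at most once per callId; replayed retries become no-ops.
async function runOnce<T>(
  callId: string,
  fn: () => Promise<T>
): Promise<T | undefined> {
  if (executed.has(callId)) return undefined; // retry replayed this call: skip
  executed.add(callId);
  return fn();
}
```

With this in place, a retry can resubmit the same assistant turn without charging the card twice.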

### 3. Streaming handler applied twice

A common TypeScript mistake is subscribing to both `stream()` and `invoke()` for the same request.

```ts
await graph.stream(input);
await graph.invoke(input); // duplicate path
```

Pick one execution path per request. If you need streaming UI updates, use only `stream()` and build state from chunks.
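The single-path version can be sketched with a plain async generator standing in for `graph.stream()` (illustrative chunk format, not LangGraph's actual stream shape):

```typescript
// Stands in for graph.stream(): yields chunks for one request.
async function* fakeStream(): AsyncGenerator<string> {
  yield "Hello";
  yield ", ";
  yield "world";
}

// One execution path: stream for UI updates AND build the final state
// from the same chunks, instead of making a second invoke() call.
async function runStreaming(): Promise<string> {
  let output = "";
  for await (const chunk of fakeStream()) {
    output += chunk; // push incremental UI updates here
  }
  return output; // final state assembled from chunks only
}
```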

### 4. Tool node not checking for already-processed calls

If your custom tool executor doesn’t dedupe by `tool_call.id`, the same call can be processed again after a resume or partial failure.

```ts
for (const call of aiMsg.tool_calls ?? []) {
  await tools[call.name].invoke(call.args);
}
```

Add an idempotency check keyed by `call.id` or your own request-scoped identifier.
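A sketch of that check in plain TypeScript — the tools map and in-memory `Set` are illustrative assumptions; persist the processed IDs if your graph can resume across processes:

```typescript
type ToolCall = { id: string; name: string; args: Record<string, unknown> };
type ToolFn = (args: Record<string, unknown>) => Promise<unknown>;

const processedIds = new Set<string>();

// Executes each tool call at most once, even if a resume or retry
// replays the same AIMessage with the same tool_calls.
async function executeToolCalls(
  calls: ToolCall[],
  tools: Record<string, ToolFn>
): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const call of calls) {
    if (processedIds.has(call.id)) continue; // already handled earlier
    processedIds.add(call.id);
    results.push(await tools[call.name](call.args));
  }
  return results;
}
```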

## How to Debug It

1. **Log every graph entry point.**
   - Add logs around `invoke`, `stream`, and any effect/hook that triggers them.
   - You want to prove whether the same input is entering twice.
2. **Inspect the assistant message.**
   - Check whether you have an `AIMessage` with repeated `tool_calls`. In LangChain/LangGraph TypeScript this usually looks like `console.log(aiMsg.tool_calls);`.
   - If the same `tool_call.id` appears twice, your routing is wrong.
3. **Trace your node boundaries.**
   - Separate LLM generation from tool execution.
   - If one node both creates and executes tools, split it into a model node and a tool node.
4. **Disable dev-only duplication sources.**
   - Turn off React Strict Mode temporarily.
   - Remove retries.
   - Stop hot reload from re-triggering startup code.
   - Re-test with one plain `graph.invoke()` call.
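The first step can be as simple as counting entries per request. A hypothetical helper — `requestId` would be whatever correlates one user action in your app:

```typescript
// Counts how many times a graph entry point fires per request,
// so a duplicate entry shows up immediately in the logs.
const entryCounts = new Map<string, number>();

function logGraphEntry(requestId: string, path: "invoke" | "stream"): number {
  const count = (entryCounts.get(requestId) ?? 0) + 1;
  entryCounts.set(requestId, count);
  console.log(`[graph] ${path} entry #${count} for request ${requestId}`);
  return count;
}
```

If the count ever reaches 2 for a single user action, you have found your duplication source before touching the graph itself.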

## Prevention

- **Keep tool execution idempotent.** Store processed `tool_call.id` values in state or persistence.
- **Use one execution path per request.** Don’t mix `invoke()` and `stream()` for the same turn.
- **Separate concerns in your graph.** Let the model emit `AIMessage.tool_calls`, and let a dedicated tool node execute them once.

If you want a quick rule of thumb: the model decides what to call; your graph decides when to execute it; your app should never do both twice. That’s usually where this LangGraph TypeScript error comes from.


By Cyprian Aarons, AI Consultant at Topiax.