How to Fix 'agent infinite loop when scaling' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

When LangChain agents start looping forever under load, it usually means the agent keeps re-entering its own decision cycle instead of reaching a terminal state. In TypeScript apps, this often shows up after you add more tools, increase concurrency, or wire the agent into a longer-running workflow.

The symptom is usually one of these:

  • Error: Agent stopped due to max iterations.
  • Error: Executor exceeded max iterations
  • repeated AgentExecutor.invoke() calls with no final answer

The Most Common Cause

The #1 cause is a tool that returns something the agent treats as an instruction to keep going. In practice, this happens when the tool output looks like another action request, or when your prompt does not clearly tell the model how to stop.

Here’s the broken pattern:

Broken → Fixed

  • Tool returns ambiguous text → Tool returns structured, final data
  • Agent prompt does not require a final answer → Prompt explicitly instructs termination
  • No iteration cap or stop condition → maxIterations and sane tool design
// ❌ Broken: tool output feeds back into the loop
import { AgentExecutor } from "langchain/agents";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Look up customer data",
  schema: z.object({ customerId: z.string() }),
  func: async ({ customerId }) => {
    // Bad: returning text that looks like an instruction
    return `Next step: call lookup_customer again for ${customerId}`;
  },
});

// The agent keeps seeing "Next step" and never finishes.
// (Assumes `agent` was created earlier, e.g. via createOpenAIToolsAgent.)
// Note: fromAgentAndTools takes a single fields object in LangChain JS.
const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools: [lookupCustomer],
});

await executor.invoke({
  input: "Get customer status for C123",
});

// ✅ Fixed: return structured data and force a final answer
import { AgentExecutor } from "langchain/agents";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Look up customer data and return JSON only",
  schema: z.object({ customerId: z.string() }),
  func: async ({ customerId }) => {
    // Good: a factual, terminal result the agent can summarize
    return JSON.stringify({
      customerId,
      status: "active",
      riskTier: 2,
    });
  },
});

const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools: [lookupCustomer],
  maxIterations: 5,
});

await executor.invoke({
  input:
    "Get customer status for C123. Use tools if needed, then provide a final answer.",
});

If you are using createOpenAIToolsAgent, createReactAgent, or another LangChain agent constructor, the same rule applies: tools should return facts, not follow-up instructions.
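You can also enforce that contract at runtime. The sketch below is not a LangChain API; `assertFactualOutput` and the phrase list are hypothetical, illustrating one way to reject instruction-shaped tool output before the model ever sees it.

```typescript
// Hypothetical guard: reject tool outputs that read like follow-up actions.
// Wrap your tool's return value with this before handing it to the agent.
const ACTION_HINTS = [/next step/i, /try again/i, /call .* again/i];

function assertFactualOutput(output: string): string {
  for (const pattern of ACTION_HINTS) {
    if (pattern.test(output)) {
      // Fail loudly in development instead of silently feeding the loop.
      throw new Error(`Tool output looks like an instruction: ${output}`);
    }
  }
  return output;
}
```

A guard like this turns a silent infinite loop into an immediate, debuggable error.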

Other Possible Causes

1. Tool descriptions are too vague

If the model cannot tell when to use a tool versus answer directly, it will keep trying different calls.

// Bad
description: "Useful for customer operations"

// Better
description:
  "Fetches current customer account status by customerId. Use once per request."

2. The agent has no hard iteration limit

Without a cap, a bad loop can run until your runtime kills it.

const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
  maxIterations: 3,
  returnIntermediateSteps: true,
});

If you see Agent stopped due to max iterations, that is not the root cause. It means LangChain protected you from an infinite loop.

3. A tool mutates shared state and re-triggers the same condition

This happens in production systems where tools write back to Redis, Postgres, or a queue and then your orchestration layer re-invokes the same agent.

// Bad: writes state that causes the same trigger again
await db.jobs.update({ id }, { status: "pending" });

// Better: mark completion explicitly
await db.jobs.update({ id }, { status: "completed", result });

If your agent is embedded in an event handler, check whether saving intermediate output is causing the same event to fire again.
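One way to break that cycle is an idempotency check before the agent runs at all. This is a minimal sketch with an in-memory store; in production you would back it with Redis or your database, and both `handleJobEvent` and `runAgent` are hypothetical names.

```typescript
// Hypothetical idempotency guard: each job invokes the agent at most once,
// even if the triggering event fires repeatedly.
const processed = new Set<string>();

async function handleJobEvent(
  jobId: string,
  runAgent: (id: string) => Promise<string>,
): Promise<string | null> {
  if (processed.has(jobId)) {
    // Duplicate event: skip instead of re-entering the agent.
    return null;
  }
  // Mark before running so re-entrant events during execution are ignored.
  processed.add(jobId);
  return runAgent(jobId);
}
```

The key design choice is marking the job as seen before invoking the agent, so an event re-fired mid-run cannot start a second pass.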

4. Your prompt encourages endless refinement

Prompts like “keep improving until perfect” are dangerous in agents with tools.

// Bad prompt fragment
"Keep checking until you're fully certain."

// Better prompt fragment
"Use at most one tool call per relevant fact unless additional data is required."

LangChain agents need explicit stopping rules. Otherwise they interpret uncertainty as permission to continue.
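It can help to keep those stopping rules as an explicit, reviewable list that you append to the system prompt. The wording below is only an example, not a LangChain requirement.

```typescript
// Illustrative stop rules to append to an agent's system prompt.
// The exact phrasing is an assumption; tune it for your domain.
const STOP_RULES = [
  "Call each tool at most once per distinct input.",
  "If a tool result answers the question, stop and give a final answer.",
  "If a tool fails twice, report the failure as your final answer.",
].join("\n");
```

Keeping the rules in one constant makes them easy to test and to reuse across agents.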

How to Debug It

  1. Turn on intermediate steps

    • Use returnIntermediateSteps: true.
    • Inspect whether the same tool is called repeatedly with identical inputs.
  2. Log every tool input and output

    • If a tool returns text like “try again”, “next step”, or another action hint, fix that first.
    • You want deterministic outputs such as JSON or plain facts.
  3. Check your iteration settings

    • Look for maxIterations, timeouts, and any custom stop logic.
    • If the error is Agent stopped due to max iterations, confirm whether the loop is caused by bad tool output or missing termination instructions.
  4. Isolate one tool at a time

    • Remove all but one tool.
    • If looping stops, re-add tools until you find the one producing recursive behavior.

A good debugging setup looks like this:

const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
  maxIterations: 5,
  returnIntermediateSteps: true,
});

const result = await executor.invoke({ input });

console.log(JSON.stringify(result.intermediateSteps, null, 2));

Look for patterns like:

  • same toolName repeated
  • same arguments repeated
  • no final response after several steps
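That scan can be automated. Below is a small sketch of a repeat detector; the `Step` type loosely mirrors the shape of LangChain's intermediate steps, and `findRepeatedCalls` is a hypothetical helper, not part of the library.

```typescript
// Hypothetical loop detector: flag tool calls repeated with identical inputs.
type Step = { action: { tool: string; toolInput: unknown } };

function findRepeatedCalls(steps: Step[]): string[] {
  const seen = new Map<string, number>();
  for (const step of steps) {
    // Key on tool name plus serialized input to catch exact repeats.
    const key = `${step.action.tool}:${JSON.stringify(step.action.toolInput)}`;
    seen.set(key, (seen.get(key) ?? 0) + 1);
  }
  return Array.from(seen.entries())
    .filter(([, count]) => count > 1)
    .map(([key]) => key);
}
```

Run it over `result.intermediateSteps` after each invocation; any non-empty result is a strong sign the agent is looping.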

Prevention

  • Make every tool return structured output where possible.
  • Set maxIterations on every production agent.
  • Write prompts that define when to stop and what counts as a final answer.
  • Add regression tests for repeated tool calls with identical inputs.
  • Treat any recursive event trigger as an architecture bug, not just an LLM issue.
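The repeated-call rule can be enforced at runtime as well as in tests, by wrapping a tool's function so identical inputs beyond a threshold fail loudly. `withLoopGuard` is a hypothetical wrapper, not a LangChain feature.

```typescript
// Hypothetical runtime guard: wrap a tool function so the same input
// cannot be processed more than `maxIdentical` times per wrapper instance.
function withLoopGuard<T>(
  fn: (input: T) => Promise<string>,
  maxIdentical = 2,
): (input: T) => Promise<string> {
  const counts = new Map<string, number>();
  return async (input: T) => {
    const key = JSON.stringify(input);
    const n = (counts.get(key) ?? 0) + 1;
    counts.set(key, n);
    if (n > maxIdentical) {
      // Surface the loop as a hard error instead of letting it burn tokens.
      throw new Error(`Loop guard: identical call repeated ${n} times`);
    }
    return fn(input);
  };
}
```

Wrap each tool's `func` with this in production and the error message tells you exactly which input the agent was stuck on.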

If you are scaling LangChain agents in TypeScript across queues, webhooks, or background jobs, assume loops will happen unless you design against them. The fix is usually not “better prompting”; it is tighter tool contracts, explicit stopping rules, and hard runtime limits.



By Cyprian Aarons, AI Consultant at Topiax.
