How to Fix 'agent infinite loop' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
Tags: agent-infinite-loop, langchain, typescript

What the error means

An “agent infinite loop” in LangChain usually means the agent kept calling tools or re-planning without ever reaching a final answer. In TypeScript, you’ll most often see this when an agent is wired with a tool that returns something the model can’t resolve into a stopping condition, or when the prompt never gives it a clean exit path.

It typically shows up with AgentExecutor, tool-calling agents, or custom loops where maxIterations is missing or too high. The symptom is the same: repeated tool calls, no AgentFinish, and eventually a hard stop or timeout.

The Most Common Cause

The #1 cause is a tool that returns data the agent cannot use to complete the task, combined with a prompt that encourages it to keep “checking” instead of answering.

This happens a lot when:

  • the tool returns raw JSON/string blobs with no clear result
  • the agent prompt says “use tools until you are certain”
  • AgentExecutor has no sane maxIterations
  • the tool output doesn’t match what the LLM expects

Broken vs fixed pattern

  • Broken: tool returns verbose raw output → Fixed: tool returns compact, final-answer-friendly output
  • Broken: prompt encourages repeated checking → Fixed: prompt tells the agent when to stop
  • Broken: no iteration cap → Fixed: explicit maxIterations
// BROKEN
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Fetch customer details",
  schema: z.object({ customerId: z.string() }),
  func: async ({ customerId }) => {
    // Returns too much noise for the agent to reason about cleanly
    return JSON.stringify({
      customerId,
      status: "active",
      policies: [{ id: "P123", premium: 1200, notes: ["...lots of text..."] }],
      auditTrail: ["event1", "event2", "event3"],
    });
  },
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Use tools until you are certain. Keep checking if needed."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createToolCallingAgent({
  llm,
  tools: [lookupCustomer],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [lookupCustomer],
});

await executor.invoke({ input: "Summarize customer C-1001" });
// FIXED
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Fetch customer details and return only fields needed for summarization",
  schema: z.object({ customerId: z.string() }),
  func: async ({ customerId }) => {
    const customer = {
      customerId,
      status: "active",
      policyCount: 1,
      totalPremiumUsd: 1200,
    };

    // Compact, deterministic output
    return `customerId=${customer.customerId}; status=${customer.status}; policyCount=${customer.policyCount}; totalPremiumUsd=${customer.totalPremiumUsd}`;
  },
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Use at most one tool call per question unless more data is strictly required. When you have enough information, answer directly.",
  ],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createToolCallingAgent({
  llm,
  tools: [lookupCustomer],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [lookupCustomer],
  maxIterations: 3,
});

await executor.invoke({ input: "Summarize customer C-1001" });

The important change is not just “return less data.” It’s making the tool output easy for the model to convert into an AgentFinish.
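If several tools need this shape, the flattening can live in one small helper so every tool emits the same format. A minimal sketch — `formatRecord` is a name invented here, not a LangChain API:

```typescript
// Invented helper, not a LangChain API: flatten a small record into a
// deterministic "key=value; key=value" line the model can quote verbatim.
function formatRecord(record: Record<string, string | number>): string {
  return Object.entries(record)
    .map(([key, value]) => `${key}=${value}`)
    .join("; ");
}

formatRecord({ customerId: "C-1001", status: "active", policyCount: 1 });
// -> "customerId=C-1001; status=active; policyCount=1"
```

Because the output is a flat, stable string rather than nested JSON, the model can lift it straight into a final answer instead of re-querying to “make sense” of it.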

Other Possible Causes

1) Missing stop condition in your executor

If you leave iteration control at its default, a confused agent can burn through the full default budget on every bad request before anything stops it.

const executor = new AgentExecutor({
  agent,
  tools,
  maxIterations: undefined, // falls back to the default cap, which is far higher than most tasks need
});

Fix it:

const executor = new AgentExecutor({
  agent,
  tools,
  maxIterations: 3,
});
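When the cap is hit, AgentExecutor (with its default early-stopping behavior) typically resolves with a stock “stopped” message rather than throwing. A sketch of detecting that in calling code — the exact sentinel string is an assumption here and can vary by LangChain version, so verify it against your installed release:

```typescript
// Assumption: with the default early-stopping behavior, AgentExecutor
// resolves with this stock output string when maxIterations is hit.
// The exact wording can vary by version; check yours before relying on it.
const ITERATION_CAP_OUTPUT = "Agent stopped due to max iterations.";

function hitIterationCap(result: { output: string }): boolean {
  return result.output.trim() === ITERATION_CAP_OUTPUT;
}
```

In production you would branch on this: fall back to a direct LLM answer, surface an error to the caller, or log the run for prompt debugging, instead of handing the stock message to the user.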

2) Tool descriptions are too vague

If your tool descriptions don’t say when to use them, the model may call them repeatedly.

new DynamicStructuredTool({
  name: "search_claims",
  description: "Search claims",
});

Better:

new DynamicStructuredTool({
  name: "search_claims",
  description:
    "Search claims by claim number only. Do not call again if you already have a matching claim record.",
});

3) Your tool keeps returning “try again” style responses

This creates an endless retry loop inside the agent reasoning chain.

func: async () => {
  return "No result found. Try another query.";
}

Prefer explicit terminal outputs:

func: async () => {
  return JSON.stringify({ found: false, reason: "no_match" });
}

Then teach the prompt how to handle found=false.

4) You’re using a parser/agent combo that doesn’t match your model

A mismatch between chat model capabilities and agent type can produce repeated invalid actions.

Example:

  • using a plain completion-style setup with a chat-native tool-calling flow
  • using an outdated parser with newer LangChain classes

Check that your stack matches:

  • ChatOpenAI
  • createToolCallingAgent
  • AgentExecutor
  • current @langchain/* package versions

How to Debug It

  1. Turn on verbose logging

    • Watch whether the same tool gets called repeatedly.
    • If you see identical action loops, it’s usually prompt/tool design.
    const executor = new AgentExecutor({
      agent,
      tools,
      maxIterations: 5,
      verbose: true,
    });
    
  2. Inspect tool outputs

    • Print exactly what each tool returns.
    • Look for huge blobs, nested JSON, or ambiguous text like “retry later.”
  3. Lower maxIterations

    • Set it to 2 or 3.
    • If it fails fast instead of looping forever, your issue is reasoning/termination logic.
  4. Test one tool at a time

    • Remove all but one tool.
    • If the loop disappears, one specific tool description or output format is causing it.
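Steps 2 and 4 are easier with a thin wrapper around each tool function: it logs every call and truncates oversized outputs so repeated calls and noisy payloads stand out, without touching the agent itself. The helper names here are invented for illustration:

```typescript
// Invented debugging helpers: log every tool call and truncate huge
// outputs so loops and noisy payloads are visible at a glance.
function previewOutput(out: string, maxLen = 200): string {
  return out.length > maxLen ? out.slice(0, maxLen) + "…" : out;
}

function traceTool<A>(
  name: string,
  func: (args: A) => Promise<string>
): (args: A) => Promise<string> {
  return async (args) => {
    const out = await func(args);
    console.log(`[tool:${name}]`, JSON.stringify(args), "->", previewOutput(out));
    return out;
  };
}
```

To use it, wrap the tool’s existing func when constructing the tool, e.g. `func: traceTool("lookup_customer", originalFunc)`, and watch the console for identical back-to-back calls.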

Prevention

  • Keep tool outputs small, structured, and deterministic.
  • Set maxIterations on every production AgentExecutor.
  • Write prompts that tell the agent when to stop and answer directly.
  • Add tests that assert an invocation finishes within N steps and does not repeat the same tool call twice in a row.
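The last bullet can be backed by two tiny predicates run over the tool-call log of a recorded run (for example, the tool names you collect with returnIntermediateSteps on AgentExecutor); the helper names are illustrative:

```typescript
// Illustrative predicates over a tool-call log (e.g. the tool names
// collected from returnIntermediateSteps); names are invented here.
function repeatsSameToolTwice(toolCalls: string[]): boolean {
  return toolCalls.some((tool, i) => i > 0 && tool === toolCalls[i - 1]);
}

function withinStepBudget(toolCalls: string[], maxSteps: number): boolean {
  return toolCalls.length <= maxSteps;
}
```

In a test, assert both hold for each recorded invocation; a failure points you at either the tool contract or the prompt’s stop condition before users ever see a loop.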

If you want this to stay stable in production, treat agents like distributed systems with bad inputs. Every loop needs an exit condition, every tool needs a contract, and every prompt needs boundaries.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

