How to Fix 'agent infinite loop in production' in CrewAI (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

When CrewAI shows signs of an "agent infinite loop" in production, it usually means one of your agents keeps being re-invoked without making progress. In TypeScript, this shows up most often when a task never reaches a terminal state, or when an agent is allowed to call itself through the same tool/task path.

In practice, this is not a “CrewAI bug” 90% of the time. It’s a control-flow problem: bad termination conditions, recursive task delegation, or tools that keep returning the same unresolved output.

The Most Common Cause

The #1 cause is an agent that can repeatedly delegate back into the same work without a hard stop. In CrewAI TypeScript setups, this often happens when you let an agent both plan and execute, then give it a tool that can re-trigger the same crew or task.

Here’s the broken pattern:

Broken                                        Fixed
Agent can recurse into the same workflow      Agent has bounded scope and an explicit stop condition
Tool returns "try again" forever              Tool returns a final structured output or a hard failure

// ❌ Broken: agent can re-enter the same flow indefinitely
import { Agent, Task, Crew } from "crewai";

const researcher = new Agent({
  name: "researcher",
  role: "Researcher",
  goal: "Keep researching until complete",
  tools: [async () => {
    // This is a bad pattern if it calls back into the same crew/task path.
    return await runResearchCrewAgain();
  }],
});

const task = new Task({
  description: "Research customer issue and summarize findings",
  agent: researcher,
});

const crew = new Crew({
  agents: [researcher],
  tasks: [task],
});

await crew.kickoff();

// ✅ Fixed: bounded execution with explicit terminal output
import { z } from "zod";
import { Agent, Task, Crew } from "crewai";

const ResearchResultSchema = z.object({
  summary: z.string(),
  confidence: z.number().min(0).max(1),
});

const researcher = new Agent({
  name: "researcher",
  role: "Researcher",
  goal: "Produce one research summary and stop",
  tools: [async (query: string) => {
    // searchKnowledgeBase is your own retrieval helper, not a CrewAI API.
    const result = await searchKnowledgeBase(query);

    return ResearchResultSchema.parse({
      summary: result.summary,
      confidence: result.confidence,
    });
  }],
});

const task = new Task({
  description: "Research customer issue and return one final summary.",
  agent: researcher,
});

const crew = new Crew({
  agents: [researcher],
  tasks: [task],
});

await crew.kickoff();

The key fix is simple:

  • Don’t let an agent invoke the same crew path from inside a tool.
  • Make tool outputs terminal and structured.
  • Add a max iteration / max retries cap if your setup supports it.

If you see logs like "Agent exceeded maximum iterations" or repeated "Task started" entries for the same task ID, this is likely your problem.
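Even if your framework version doesn't expose a built-in cap, you can wrap any loop-prone step yourself. This is a minimal sketch (not a CrewAI API — `runStep` and `StepResult` are hypothetical names) of a hard iteration cap that fails loudly instead of spinning:

```typescript
// Hypothetical helper: cap how many times a loop-prone step may run.
type StepResult = { done: boolean; output: string };

async function runWithIterationCap(
  runStep: (iteration: number) => Promise<StepResult>,
  maxIterations: number,
): Promise<string> {
  for (let i = 0; i < maxIterations; i++) {
    const result = await runStep(i);
    if (result.done) return result.output; // terminal state reached
  }
  // Fail loudly instead of looping forever.
  throw new Error(`Exceeded ${maxIterations} iterations without a terminal result`);
}
```

The important property is that the failure path throws: an exceeded cap should surface as an error you can alert on, not as another retry.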

Other Possible Causes

1) Missing termination criteria in the prompt

If your agent is told to “keep trying until successful,” it may never decide it’s done.

// ❌ Bad
goal: "Keep refining until perfect"

// ✅ Better
goal: "Return a final answer after at most one refinement pass"

Use explicit stop language:

instructions: [
  "Return FINAL only once you have enough evidence.",
  "Do not retry more than once.",
]

2) Tool returns ambiguous output

If your tool returns text like "working on it" or "need more info", the model may keep calling it.

// ❌ Bad
return "Need more info";

// ✅ Better
return {
  status: "done",
  data: { ... },
};

Make tools deterministic. If they fail, throw a real error instead of soft-failing forever.
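One way to enforce this contract is a discriminated union, so the tool can only return one of two terminal shapes. A sketch, with `lookupOrder` and its fake data source as illustrative stand-ins:

```typescript
// Terminal tool contract: the tool either succeeds with data
// or returns a final "no result" -- never "try again".
type ToolResult =
  | { status: "done"; data: string }
  | { status: "no_result"; reason: string };

async function lookupOrder(orderId: string): Promise<ToolResult> {
  // fakeDb stands in for your real data source.
  const fakeDb: Record<string, string> = { "42": "shipped" };
  const row = fakeDb[orderId];
  if (row === undefined) {
    // Terminal "no result": the caller can stop, not retry blindly.
    return { status: "no_result", reason: `order ${orderId} not found` };
  }
  return { status: "done", data: row };
}
```

Because every branch returns a terminal status, the model never sees an output that invites another identical call.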

3) Recursive delegation between agents

This happens when Agent A delegates to B, and B delegates back to A with no guardrail.

// ❌ Bad pattern
agentA.tools = [delegateToAgentB];
agentB.tools = [delegateToAgentA];

// ✅ Better pattern
agentA.tools = [delegateToAgentBOnce];
agentB.tools = [];

If you need multi-step orchestration, use a supervisor pattern with strict handoff rules instead of mutual recursion.
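If delegation is unavoidable, a depth counter gives the supervisor a hard floor. This is a sketch under assumed shapes (the `Handoff` type and agent names are hypothetical, not CrewAI constructs):

```typescript
// Sketch: supervisor-side guard that caps delegation depth.
type Handoff = { from: string; to: string };

function createDelegationGuard(maxDepth: number) {
  let depth = 0;
  return function recordHandoff(handoff: Handoff): void {
    depth += 1;
    if (depth > maxDepth) {
      // Mutual recursion (A -> B -> A -> ...) trips this quickly.
      throw new Error(
        `Delegation depth ${depth} exceeds cap of ${maxDepth}: ${handoff.from} -> ${handoff.to}`,
      );
    }
  };
}
```

Call the guard on every handoff; once the cap trips, you get a stack trace pointing at the recursive pair instead of a silent loop.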

4) Memory or state keeps resetting

If each loop clears context, the agent never sees progress and repeats the same action.

// ❌ Bad
await crew.kickoff({ resetMemory: true });

// ✅ Better
await crew.kickoff({ resetMemory: false });

Also check whether you’re recreating Agent and Task instances on every retry. That can erase state and make loops harder to detect.
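A simple way to make resets visible is to keep an action log that lives outside the retry path. A sketch (the `recordAction` helper is hypothetical): if this state were recreated on every retry, the repeat detection below would always return false.

```typescript
// Per-task action log that survives across retries.
const actionLog = new Map<string, string[]>();

function recordAction(taskId: string, action: string): boolean {
  const history = actionLog.get(taskId) ?? [];
  // A repeat of the immediately preceding action means no progress was made.
  const isRepeat = history.length > 0 && history[history.length - 1] === action;
  history.push(action);
  actionLog.set(taskId, history);
  return isRepeat;
}
```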

How to Debug It

  1. Inspect the last repeated action

    • Look for the same tool call, same task ID, or same prompt fragment repeating.
    • If you see identical outputs across iterations, you have a loop source.
  2. Disable tools one by one

    • Start with no tools.
    • Re-enable them individually until the loop returns.
    • The offending tool is often a recursive caller or soft-failing API wrapper.
  3. Add hard iteration limits

    • Set max retries / max iterations wherever CrewAI exposes it.
    • If the error disappears when capped, your workflow lacks termination logic.
  4. Log structured state

    • Log task status, tool input/output, and iteration count.
    • You want to see whether progress is happening or whether the agent is spinning on identical state.

Example diagnostic logging:

console.log({
  taskId,
  iteration,
  toolName,
  inputHash,
});

If inputHash stays constant across iterations, your agent is not making forward progress.
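One way to compute that inputHash is to hash the serialized tool input with Node's built-in crypto module, so identical calls produce identical hashes:

```typescript
import { createHash } from "node:crypto";

// Hash the serialized tool input; identical inputs give identical hashes.
function hashToolInput(input: unknown): string {
  return createHash("sha256")
    .update(JSON.stringify(input))
    .digest("hex")
    .slice(0, 12); // a short prefix is enough for eyeballing logs
}
```

Note that `JSON.stringify` is key-order sensitive; if your tool inputs are built in different orders, sort the keys before hashing.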

Prevention

  • Put explicit caps on loops:

    • max iterations
    • max retries
    • max delegation depth
  • Make every tool return one of these:

    • valid structured success payload
    • thrown error
    • terminal “no result” response with clear exit handling
  • Keep orchestration outside the agent:

    • let agents do work
    • let your app decide when to retry, branch, or stop

If you’re building production AI systems with CrewAI TypeScript, treat infinite loops as a control-plane bug first. Fix recursion boundaries, add hard stops, then tighten prompts and tool contracts after that.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
