How to Fix 'agent infinite loop' in CrewAI (Python)

By Cyprian Aarons · Updated 2026-04-21
Tags: agent-infinite-loop, crewai, python

What the error means

agent infinite loop in CrewAI usually means an agent keeps re-running the same task without reaching a stopping condition. In practice, this shows up when the agent can’t satisfy the task, keeps asking tools for more info, or your task/crew config gives it no clean exit path.

You’ll usually hit this when using Crew, Agent, Task, and tools in Python with a setup like:

  • overly broad goals
  • missing max_iter
  • bad tool outputs
  • tasks that depend on each other incorrectly
  • agents that keep retrying because they never get a final answer

The Most Common Cause

The #1 cause is an agent that has no clear completion criteria and keeps calling tools or re-planning forever. In CrewAI, this often ends with errors like:

  • Agent stopped due to iteration limit or time limit
  • crewai.agent.agent.AgentExecutionError
  • repeated tool calls with no final answer
  • what people describe as an “infinite loop”

Here’s the broken pattern versus the fixed pattern.

Broken: no hard stop, vague task, tool can be called repeatedly.
Fixed: clear task outcome, bounded iterations, explicit final answer.

Broken: agent keeps reasoning forever.
Fixed: agent exits after producing a concrete result.
# BROKEN
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

researcher = Agent(
    role="Researcher",
    goal="Find everything about the company",
    backstory="You are a senior analyst.",
    tools=[search_tool],
    verbose=True,
)

task = Task(
    description="Research the company and keep looking until you know enough.",
    expected_output="A detailed report.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    verbose=True,
)
result = crew.kickoff()
# FIXED
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

researcher = Agent(
    role="Researcher",
    goal="Produce a 1-page company summary with 5 verified facts",
    backstory="You are a senior analyst.",
    tools=[search_tool],
    verbose=True,
    max_iter=3,
)

task = Task(
    description=(
        "Research the company and return exactly 5 verified facts "
        "plus a 3-sentence summary. Stop once complete."
    ),
    expected_output="5 bullet points and a short summary.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    verbose=True,
)
result = crew.kickoff()

The key changes:

  • max_iter=3 forces termination
  • the task asks for a finite output
  • the agent has a measurable completion target

If your prompt says “keep looking,” “be thorough,” or “research everything,” you’re inviting loop behavior.

Other Possible Causes

1) A tool returns unusable output

If your tool returns empty strings, malformed JSON, or irrelevant text, the agent may retry forever trying to parse it.

# BAD TOOL OUTPUT EXAMPLE
def bad_tool(query: str) -> str:
    return ""  # agent can't use this

# GOOD TOOL OUTPUT EXAMPLE
def good_tool(query: str) -> str:
    return '{"status":"ok","results":["item1","item2"]}'

Fix:

  • always return structured data when possible
  • validate tool responses before handing them back to the agent
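One way to enforce both rules is to wrap the tool in a validator before the agent ever sees its output. The sketch below assumes a hypothetical `search_backend` standing in for your real tool call; the shape of the error payload is an illustrative choice, not a CrewAI convention.

```python
import json

def search_backend(query: str) -> str:
    # Stand-in for your real tool; swap in the actual API call.
    return '{"status": "ok", "results": ["item1", "item2"]}'

def validated_search(query: str) -> str:
    """Return structured results, or an explicit error the agent can act on."""
    raw = search_backend(query)
    if not raw or not raw.strip():
        return '{"status": "error", "reason": "empty response"}'
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return '{"status": "error", "reason": "malformed JSON"}'
    if not data.get("results"):
        return '{"status": "error", "reason": "no results"}'
    return raw
```

An explicit `{"status": "error", ...}` payload gives the agent something it can reason about and stop on, instead of an empty string it will retry against forever.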

2) Missing stop conditions in long-running agents

Some setups need explicit limits beyond max_iter, especially if you use custom orchestration.

agent = Agent(
    role="Claims Analyst",
    goal="Review claim documents",
    backstory="You analyze insurance claims.",
    tools=[search_tool],
    max_iter=5,
    max_execution_time=60,
)

If you omit both limits, an agent can keep cycling through reasoning steps.
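If you drive agents from your own loop instead of a single `crew.kickoff()`, the same discipline applies at the orchestration layer. This is a generic bounded-retry sketch, not a CrewAI API; `run_bounded` and `flaky_step` are illustrative names.

```python
import time

def run_bounded(step, max_iter=5, max_seconds=60.0):
    """Call step() until it returns a non-None result or a hard limit trips."""
    deadline = time.monotonic() + max_seconds
    for i in range(max_iter):
        if time.monotonic() > deadline:
            return {"status": "timeout", "iterations": i}
        result = step()
        if result is not None:
            return {"status": "done", "iterations": i + 1, "result": result}
    return {"status": "iteration_limit", "iterations": max_iter}

# Example: a step that only succeeds on its third attempt.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    return "final answer" if calls["n"] >= 3 else None
```

Whatever the agent does inside `step()`, the outer loop guarantees termination with a status you can log and alert on.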

3) Bad task chaining between agents

A downstream task may depend on an upstream task that never produces the format it expects.

# BROKEN: downstream task expects JSON but upstream returns prose
task_1 = Task(
    description="Summarize customer complaint.",
    expected_output="JSON object",
    agent=agent_1,
)

task_2 = Task(
    description="Use task_1 output to draft response.",
    context=[task_1],
    expected_output="Email draft",
    agent=agent_2,
)

Fix by making the handoff contract explicit:

task_1 = Task(
    description="Summarize customer complaint as JSON with keys: issue, severity, next_action.",
    expected_output='{"issue": "...", "severity": "...", "next_action": "..."}',
    agent=agent_1,
)

4) Tool choice is too open-ended

If an agent can call multiple tools but none of them resolve the task cleanly, it may bounce between them.

agent = Agent(
    role="Ops Assistant",
    goal="Investigate incident",
    backstory="You triage production incidents.",
    tools=[db_tool, web_tool, slack_tool],
)

Narrow it down:

agent = Agent(
    role="Ops Assistant",
    goal="Investigate incident using only the incident DB",
    backstory="You triage production incidents.",
    tools=[db_tool],
)

How to Debug It

  1. Turn on verbose logging

    • Set verbose=True on both Agent and Crew.
    • Look for repeated tool calls or identical reasoning steps.
  2. Check iteration and time limits

    • Confirm max_iter is set.
    • If supported in your version, set max_execution_time.
  3. Inspect tool outputs

    • Print raw tool responses.
    • Verify they are non-empty and match what the prompt expects.
  4. Simplify to one task and one tool

    • Remove delegation.
    • Remove extra tools.
    • If the loop disappears, add pieces back one by one until it returns.

A practical debug harness looks like this:

print("Task input:", task.description)
print("Tools:", [type(t).__name__ for t in researcher.tools])
print("Max iter:", researcher.max_iter)

If you see repeated logs like:

  • same search query over and over
  • same scratchpad content
  • no final answer emitted

then you’re dealing with either weak stopping criteria or broken tool feedback.
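If you capture tool calls from the verbose logs, spotting the loop can be automated. A small sketch (the function name and threshold are illustrative):

```python
from collections import Counter

def looks_like_loop(tool_calls, threshold=3):
    """True if any identical tool call repeats `threshold` or more times."""
    return any(n >= threshold for n in Counter(tool_calls).values())
```

Feed it the list of tool invocations you logged during a run; repeated identical calls with no final answer is the signature of weak stopping criteria.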

Prevention

  • Set hard limits on every production agent:

    • max_iter
    • max_execution_time
    • bounded task scope
  • Write tasks with measurable outputs:

    • exact number of bullets
    • JSON schema
    • one-paragraph summary plus fixed fields
  • Keep tool contracts strict:

    • return structured data
    • validate empty responses
    • avoid free-form text when downstream agents need machine-readable output

If you’re building banking or insurance workflows, treat every CrewAI agent like a bounded worker process. No infinite goals. No ambiguous handoffs. No unbounded retries.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

