CrewAI Tutorial (Python): debugging agent loops for beginners
This tutorial shows you how to spot, reproduce, and fix agent loops in CrewAI Python projects. You need it when an agent keeps repeating the same tool call, never reaches a final answer, or burns through tokens because your task setup gives it no clean exit.
What You'll Need
- Python 3.10+
- crewai
- python-dotenv
- An OpenAI API key in your environment
- A terminal with pip installed
- A basic CrewAI project structure
- Optional: a text editor with breakpoints/logging support
Step-by-Step
1. Start with a minimal project and make the loop easy to reproduce.
The fastest way to debug agent loops is to remove everything unrelated: one agent, one task, one tool, one crew.
pip install crewai python-dotenv
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process

load_dotenv()

agent = Agent(
    role="Support Analyst",
    goal="Answer the user's question clearly and stop when done",
    backstory="You help users troubleshoot simple issues.",
    verbose=True,
)

task = Task(
    description="Explain what causes infinite loops in agent systems.",
    expected_output="A concise explanation with practical debugging tips.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)
2. Add a tool that can cause repeated calls if the agent is not constrained.
Loops often appear when the model thinks it should keep querying tools instead of answering. A tiny tool makes this behavior obvious.
from crewai.tools import BaseTool

class EchoTool(BaseTool):
    name: str = "echo_tool"
    description: str = "Returns the input text unchanged."

    def _run(self, text: str) -> str:
        return f"echo: {text}"
tool = EchoTool()

agent_with_tool = Agent(
    role="Support Analyst",
    goal="Use tools only when needed and stop after a final answer",
    backstory="You diagnose issues without repeating yourself.",
    tools=[tool],
    verbose=True,
)

task_with_tool = Task(
    description="Use the tool once to inspect the issue, then give a final answer.",
    expected_output="A short diagnosis and one recommendation.",
    agent=agent_with_tool,
)
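To actually watch for the repetition, wire the new agent and task into a crew and run it; crew_with_tool is just an illustrative name reusing the pieces defined above:
crew_with_tool = Crew(
    agents=[agent_with_tool],
    tasks=[task_with_tool],
    process=Process.sequential,
    verbose=True,
)

result = crew_with_tool.kickoff()
print(result)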
3. Put a hard limit on iterations so you can tell whether the loop is in planning or execution.
In CrewAI, the cap lives on the Agent as max_iter, and it is one of the first controls I set when debugging. If the agent still loops under a low limit, your prompt or task design is usually the problem.
# Crew has no iteration cap; in CrewAI the limit is the Agent's max_iter.
# For a quick experiment, set it on the existing agent (you could also
# pass max_iter=3 when constructing it).
agent_with_tool.max_iter = 3

crew_limited = Crew(
    agents=[agent_with_tool],
    tasks=[task_with_tool],
    process=Process.sequential,
    verbose=True,
)

result = crew_limited.kickoff()
print(result)
4. Tighten the task instructions so the agent has an explicit stopping condition.
Many loops come from vague prompts like “analyze thoroughly” or “keep investigating.” Give the model a completion rule it can follow.
task_fixed = Task(
    description=(
        "Inspect the issue using at most one tool call. "
        "If you have enough information, provide a final answer immediately. "
        "Do not repeat the same reasoning."
    ),
    expected_output="One paragraph diagnosis plus one next step.",
    agent=agent_with_tool,
)

crew_fixed_prompt = Crew(
    agents=[agent_with_tool],
    tasks=[task_fixed],
    process=Process.sequential,
    verbose=True,
)

result = crew_fixed_prompt.kickoff()
print(result)
5. Add lightweight tracing around kickoff so you can see where repetition starts.
For beginners, plain prints are enough. You want to know whether the loop happens before any tool use, after tool use, or only during final response generation.
import time
start = time.time()
print("Starting crew kickoff...")
result = crew_fixed_prompt.kickoff()
end = time.time()
print(f"Finished in {end - start:.2f}s")
print("Result:")
print(result)
6. Use a debugging checklist when the loop persists.
If you still see repetition after limiting iterations and tightening prompts, inspect your tool output and task wording next.
debug_checks = [
    "Does the task ask for investigation without a stopping rule?",
    "Does the tool return useful data instead of vague text?",
    "Is max_iter set low enough to catch runaway behavior?",
    "Is verbose output showing repeated identical actions?",
]

for item in debug_checks:
    print(f"- {item}")
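If you need hard evidence of repetition, count tool calls yourself. This is plain Python on top of the BaseTool from earlier, not a built-in CrewAI feature; CountingEchoTool is a hypothetical name:
class CountingEchoTool(BaseTool):
    name: str = "counting_echo_tool"
    description: str = "Returns the input text unchanged and counts calls."
    calls: int = 0  # declared as a field so instances can update it

    def _run(self, text: str) -> str:
        self.calls += 1
        print(f"[tool] call #{self.calls}: {text!r}")
        return f"echo: {text}"
Swap it into the agent's tools list and re-run the crew; more than one identical call inside a single task is your loop signature.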
Testing It
Run the script with verbose=True and watch the console for repeated tool calls or repeated reasoning blocks. If your first version loops, the fixed version should stop much sooner and produce one final answer instead of cycling.
A good test is to compare runtime and output shape between max_iter=3 and a looser configuration. If both versions still repeat themselves, your prompt probably contains conflicting instructions like “be thorough” and “keep checking.”
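As a rough harness for that comparison, you can time the same crew under two caps. This reuses crew_limited and agent_with_tool from earlier; the loose cap of 25 is just an illustrative value:
import time

def timed_kickoff(crew, label):
    # Run the crew once and report wall-clock time plus output size.
    start = time.time()
    result = crew.kickoff()
    print(f"{label}: {time.time() - start:.2f}s, {len(str(result))} chars")
    return result

agent_with_tool.max_iter = 3
timed_kickoff(crew_limited, "tight (max_iter=3)")

agent_with_tool.max_iter = 25  # illustrative loose cap
timed_kickoff(crew_limited, "loose (max_iter=25)")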
You should also confirm that your tool output is deterministic and short. Long or ambiguous tool responses often encourage agents to “try again” instead of finishing.
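You can verify both properties directly against the EchoTool instance from earlier, outside any agent loop; the 200-character threshold below is an arbitrary rule of thumb, not a CrewAI constant:
# Call the tool's underlying method directly so no LLM is involved.
out1 = tool._run("connection times out")
out2 = tool._run("connection times out")

assert out1 == out2, "tool output is not deterministic"
assert len(out1) < 200, "tool output is long; long replies invite re-queries"
print("tool output looks deterministic and short:", out1)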
Next Steps
- Learn how to use CrewAI callbacks and event hooks for deeper tracing (see the sketch after this list).
- Add unit tests around task descriptions that commonly trigger loops.
- Study prompt design patterns for bounded reasoning and explicit stop conditions.
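For the first item, a minimal tracing sketch. It assumes Crew accepts a step_callback that receives each intermediate agent step; the payload type varies by CrewAI version, so treat trace_step as a starting point and check your version's docs:
from crewai import Crew, Process

def trace_step(step):
    # Print one compact line per agent step so repeats stand out at a glance.
    print(f"[trace] {type(step).__name__}: {str(step)[:120]}")

traced_crew = Crew(
    agents=[agent_with_tool],
    tasks=[task_fixed],
    process=Process.sequential,
    verbose=True,
    step_callback=trace_step,
)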
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit