AutoGen Tutorial (Python): debugging agent loops for beginners
This tutorial shows you how to spot, reproduce, and fix agent loops in AutoGen Python conversations. You need this when your agents keep repeating the same messages, never hand off control, or burn tokens without making progress.
What You'll Need
- Python 3.10+
- `autogen-agentchat` installed
- `autogen-ext` installed
- An OpenAI-compatible API key
- Basic familiarity with `AssistantAgent`, `UserProxyAgent`, and `GroupChat`
- A terminal and a small test project
Install the packages like this (the OpenAI model client lives in the `openai` extra of `autogen-ext`):

```shell
pip install autogen-agentchat "autogen-ext[openai]"
```
Set your API key before running anything:
```shell
export OPENAI_API_KEY="your-key-here"
```
Step-by-Step
1. Start with a minimal loop-prone setup.
The fastest way to debug a loop is to reproduce it in a tiny script. Here we create two agents and let them talk in a controlled conversation so you can inspect every turn.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
    )
    # on_messages expects a real CancellationToken, not None
    result = await agent.on_messages(
        [TextMessage(content="Explain why agent loops happen.", source="user")],
        cancellation_token=CancellationToken(),
    )
    print(result.chat_message.content)


if __name__ == "__main__":
    asyncio.run(main())
```
2. Add explicit logging so you can see the loop pattern.
A loop usually shows up as repeated content, repeated tool calls, or the same handoff failing over and over. Log each message with its source and count turns instead of staring at raw model output.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent("assistant", model_client=model_client)
    messages = [
        TextMessage(content="Give me one short answer.", source="user"),
        TextMessage(content="Now repeat it with no extra detail.", source="user"),
    ]
    # Log every turn with its source so repetition is easy to spot
    for i, msg in enumerate(messages, start=1):
        print(f"\nTURN {i} | {msg.source}: {msg.content}")
        result = await agent.on_messages([msg], cancellation_token=CancellationToken())
        print(f"REPLY {i} | assistant: {result.chat_message.content}")


if __name__ == "__main__":
    asyncio.run(main())
```
3. Reproduce the loop with a group chat and a hard stop.
Most beginner bugs happen in multi-agent setups, not single-agent calls. Use a small `RoundRobinGroupChat` with a turn limit so the loop becomes visible instead of infinite.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    planner = AssistantAgent("planner", model_client=model_client)
    reviewer = AssistantAgent("reviewer", model_client=model_client)
    # The hard stop: the chat ends after at most 6 messages, loop or not
    team = RoundRobinGroupChat(
        [planner, reviewer],
        termination_condition=MaxMessageTermination(6),
    )
    result = await team.run(task=TextMessage(content="Plan a login flow.", source="user"))
    for msg in result.messages:
        print(f"{msg.source}: {getattr(msg, 'content', '')}")


if __name__ == "__main__":
    asyncio.run(main())
```
4. Fix the most common cause: weak instructions.
If an agent is allowed to “keep helping,” it often will. Make the stop condition explicit in the system message, then compare behavior before and after.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent(
        "assistant",
        model_client=model_client,
        # Spell out the stop condition instead of letting the model decide
        system_message=(
            "Answer exactly once.\n"
            "Do not repeat yourself.\n"
            "If the request is complete, say 'DONE' and stop."
        ),
    )
    result = await agent.on_messages(
        [TextMessage(content="Summarize what an agent loop is.", source="user")],
        cancellation_token=CancellationToken(),
    )
    print(result.chat_message.content)


if __name__ == "__main__":
    asyncio.run(main())
```
5. Add termination rules when you use teams.
In production, don’t rely on prompt discipline alone. Put guardrails in code with termination conditions so runaway conversations stop after a fixed number of messages or when a completion signal appears.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    planner = AssistantAgent("planner", model_client=model_client)
    executor = AssistantAgent("executor", model_client=model_client)
    # Stop on whichever fires first: 8 messages OR the completion signal
    termination = MaxMessageTermination(8) | TextMentionTermination("DONE")
    team = RoundRobinGroupChat([planner, executor], termination_condition=termination)
    result = await team.run(task=TextMessage(content="Draft steps for password reset.", source="user"))
    print(f"Messages exchanged: {len(result.messages)}")


if __name__ == "__main__":
    asyncio.run(main())
```
Testing It
Run each script separately and watch for repetition in the output. A healthy run should either finish quickly or stop at your termination condition without cycling through the same text.
If you still see loops, compare three things: the system message, the termination condition, and whether multiple agents are asking each other to continue forever. In practice, one of those three is usually missing or too vague.
For deeper debugging, print message histories and look for identical last-turn content across several iterations. That tells you whether the problem is prompt design, routing logic, or missing stop criteria.
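That last check can be automated with a small repetition detector over the message history. The sketch below is plain Python and framework-agnostic: it assumes you have collected the run into a list of `(source, content)` pairs (the `find_repeats` helper and the three-occurrence threshold are illustrative choices, not AutoGen APIs).

```python
from collections import Counter


def find_repeats(messages, threshold=3):
    """Return (source, normalized_content, count) triples that occur
    `threshold` or more times. `messages` is a list of (source, content)
    pairs collected from a run; heavy repetition of the same normalized
    content is a strong loop signal."""
    counts = Counter((source, content.strip().lower()) for source, content in messages)
    return [
        (source, content, n)
        for (source, content), n in counts.items()
        if n >= threshold
    ]


# A toy history showing the classic two-agent ping-pong loop
history = [
    ("planner", "Let's plan the login flow."),
    ("reviewer", "Please continue."),
    ("planner", "Let's plan the login flow."),
    ("reviewer", "Please continue."),
    ("planner", "Let's plan the login flow."),
]
print(find_repeats(history))
```

If this prints anything, you have your loop: the offending agent and the exact message it keeps sending.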
Next Steps
- Learn `SelectorGroupChat` so you can control which agent speaks next instead of relying on round-robin.
- Add custom termination conditions for tool success, JSON completion markers, or business rules.
- Instrument traces with structured logging so you can debug loops from production runs instead of local repros only.
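As a sketch of what a JSON completion marker might look like: the predicate below checks whether a message body parses as JSON and carries a done flag. The `{"status": "done"}` shape and the `is_complete` helper are made-up conventions for illustration; wiring this into a team would mean wrapping the check in AutoGen's termination-condition interface, but the core logic is just plain Python.

```python
import json


def is_complete(message_text, marker_key="status", marker_value="done"):
    """Return True if the message body parses as a JSON object carrying
    the completion marker. The marker shape is a convention invented for
    this example; use whatever your agents actually emit."""
    try:
        payload = json.loads(message_text)
    except (json.JSONDecodeError, TypeError):
        # Plain prose, malformed JSON, or a non-string: not a completion signal
        return False
    return isinstance(payload, dict) and payload.get(marker_key) == marker_value


print(is_complete('{"status": "done", "result": "plan approved"}'))  # True
print(is_complete("Still working on it..."))                         # False
```

A structured marker like this is harder to trigger accidentally than a bare `"DONE"` substring, which a chatty agent can emit mid-sentence.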
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.