AutoGen Tutorial (TypeScript): debugging agent loops for intermediate developers

By Cyprian Aarons. Updated 2026-04-21.

This tutorial shows you how to diagnose and stop runaway agent loops in AutoGen TypeScript. You need this when a conversation keeps bouncing between agents, burning tokens, or never reaching a final answer.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • An OpenAI API key set as OPENAI_API_KEY
  • autogen-core and @autogen/openai installed
  • A terminal with ts-node or a TypeScript build setup
  • Basic familiarity with AutoGen agents, model clients, and message passing

Step-by-Step

  1. Start with a minimal agent that can loop forever if you don’t constrain it. The point here is to reproduce the problem first, because debugging agent loops without a reproducible case is guesswork.
import { AssistantAgent, UserMessage } from "autogen-core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

const client = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new AssistantAgent({
  name: "support_agent",
  modelClient: client,
  systemMessage: "You are a support assistant. Be concise.",
});

async function main() {
  const result = await agent.run(
    [{ content: "Help me debug why my workflow keeps repeating.", source: "user" }],
    { maxTurns: 10 }
  );

  console.log(result.messages.map((m) => `${m.source}: ${m.content}`).join("\n"));
}

main();
  2. Add explicit loop detection by inspecting repeated assistant outputs. In production, don’t wait for token usage to tell you something is wrong; detect repeated content, repeated tool calls, or repeated state transitions early.
function hasRepeatedMessages(messages: { source: string; content: string }[]) {
  const seen = new Set<string>();

  for (const message of messages) {
    const key = `${message.source}:${message.content.trim()}`;
    if (seen.has(key)) return true;
    seen.add(key);
  }

  return false;
}

async function debugRun(agent: AssistantAgent) {
  const result = await agent.run(
    [{ content: "Keep asking clarifying questions forever.", source: "user" }],
    { maxTurns: 8 }
  );

  const repeated = hasRepeatedMessages(
    result.messages.map((m) => ({ source: m.source, content: String(m.content) }))
  );

  console.log("Repeated loop detected:", repeated);
}
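The helper above catches repeated message text, but the step also mentions repeated tool calls. A minimal sketch of that second signal, assuming a `ToolCall` shape you would adapt to however your framework surfaces tool invocations:

```typescript
// Hypothetical shape for a recorded tool invocation; adjust the fields
// to match what your agent framework actually logs.
type ToolCall = { name: string; args: Record<string, unknown> };

// Count how many distinct tool-call "signatures" (name + stable-sorted
// args) were invoked more than once. Anything above zero suggests the
// agent is retrying the same call rather than making progress.
function countRepeatedToolCalls(calls: ToolCall[]): number {
  const counts = new Map<string, number>();

  for (const call of calls) {
    const sortedArgs = Object.keys(call.args)
      .sort()
      .map((k) => `${k}=${JSON.stringify(call.args[k])}`)
      .join(",");
    const key = `${call.name}(${sortedArgs})`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }

  return [...counts.values()].filter((n) => n > 1).length;
}
```

Sorting the argument keys before serializing keeps the signature stable even if the model emits the same arguments in a different order.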
  3. Put hard limits on the conversation and make the agent explain its stopping condition. A lot of loops happen because the model never gets told what “done” looks like, so give it an exit rule in the system prompt and enforce it with turn limits.
const boundedAgent = new AssistantAgent({
  name: "bounded_support_agent",
  modelClient: client,
  systemMessage:
    [
      "You are a support assistant.",
      "Answer directly.",
      "If you have enough information, provide a final answer and stop.",
      "If the user request cannot be completed, say what is missing and stop.",
      "Never ask more than one clarifying question.",
    ].join(" "),
});

async function runBounded() {
  const result = await boundedAgent.run(
    [{ content: "My job is stuck in a retry loop. What should I check?", source: "user" }],
    { maxTurns: 4 }
  );

  console.log(result.messages.at(-1));
}
  4. Instrument every turn so you can see where the loop starts. For intermediate debugging, logging the last message after each turn is usually enough to spot whether the agent is repeating itself, waiting on tools that never resolve, or rephrasing the same question.
async function traceConversation(agent: AssistantAgent) {
  const result = await agent.run(
    [{ content: "Diagnose why this workflow repeats step two.", source: "user" }],
    { maxTurns: 6 }
  );

  for (let i = 0; i < result.messages.length; i++) {
    const msg = result.messages[i];
    console.log(`[${i}] ${msg.source}: ${String(msg.content)}`);
  }

  const lastTwo = result.messages.slice(-2).map((m) => String(m.content));
  console.log("Last two messages identical:", lastTwo[0] === lastTwo[1]);
}
  5. If your loop involves tools, separate model failure from tool failure. In practice, many “agent loops” are actually tool retries caused by bad arguments or inconsistent tool output, so validate inputs and return stable outputs before blaming the planner.
type ToolResult = { ok: boolean; data?: string; error?: string };

function lookupPolicy(policyId?: string): ToolResult {
  if (!policyId) {
    return { ok: false, error: "policyId is required" };
  }

  return { ok: true, data: `Policy ${policyId}: active` };
}

async function safeToolPattern() {
  const first = lookupPolicy(undefined);
  console.log(first);

  const second = lookupPolicy("POL-12345");
  console.log(second);
}
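To enforce the stable-output idea at the boundary, you can wrap any function you hand to the agent as a tool in a guard that refuses endless identical retries. `makeGuardedTool` is a hypothetical helper, not an AutoGen API; it is one way to turn a silent retry loop into an explicit error the planner can react to:

```typescript
// Sketch: wrap a tool function so that calling it with the same
// arguments more than `maxIdenticalCalls` times throws instead of
// retrying forever. The error text gives the model something concrete
// to act on.
function makeGuardedTool<A, R>(
  tool: (args: A) => R,
  maxIdenticalCalls = 3
): (args: A) => R {
  const counts = new Map<string, number>();

  return (args: A) => {
    const key = JSON.stringify(args);
    const n = (counts.get(key) ?? 0) + 1;
    counts.set(key, n);

    if (n > maxIdenticalCalls) {
      throw new Error(`Tool called ${n} times with identical args: ${key}`);
    }

    return tool(args);
  };
}
```

Throwing (rather than returning another vague failure) matters: a hard, distinct error breaks the symmetry that keeps the retry loop alive.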

Testing It

Run the script against a few prompts that normally trigger repetition, like “keep asking until you’re sure” or “debug this workflow step by step.” If your turn limit works, the conversation should end cleanly instead of continuing indefinitely.

Then inspect the logs for three things:

  • repeated assistant text
  • repeated clarifying questions
  • repeated tool failures with identical arguments
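A small helper can check a finished transcript for the first two signals automatically (the third needs tool-call records, covered in step 5). The `Message` shape is an assumption; map your framework's message type onto it:

```typescript
// Assumed minimal message shape for a finished transcript.
type Message = { source: string; content: string };

// Classify a transcript against two loop signals: repeated assistant
// text, and repeated clarifying questions (crudely detected as
// assistant turns ending in "?").
function classifyLoopSignals(messages: Message[]) {
  const texts = messages
    .filter((m) => m.source !== "user")
    .map((m) => m.content.trim());

  const repeatedText = new Set(texts).size < texts.length;

  const questions = texts.filter((t) => t.endsWith("?"));
  const repeatedQuestions = new Set(questions).size < questions.length;

  return { repeatedText, repeatedQuestions };
}
```

The question heuristic is deliberately crude; it is there to flag transcripts worth reading, not to replace reading them.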

If you still see looping behavior after adding maxTurns, your problem is usually in prompt design or tool contract design, not AutoGen itself.

Next Steps

  • Add structured tracing with message IDs and turn counters so you can correlate loops across services.
  • Move from raw text responses to schema-constrained outputs for decisions that must terminate.
  • Learn how to design tools that are idempotent and return deterministic errors instead of retry bait.
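As a starting point for the first bullet, here is one possible shape for a structured trace event with turn counters. `TraceEvent` and `makeTracer` are assumptions, not AutoGen APIs; emit one event per turn to whatever log sink you use:

```typescript
import { createHash } from "node:crypto";

// A structured trace record: hashing the content instead of logging raw
// text keeps log volume down while still letting you spot repeats
// (identical hashes on consecutive turns) across services.
type TraceEvent = {
  conversationId: string;
  turn: number;
  source: string;
  contentHash: string;
  timestamp: string;
};

// Returns a tracer bound to one conversation; each call advances the
// turn counter.
function makeTracer(conversationId: string) {
  let turn = 0;
  return (source: string, content: string): TraceEvent => ({
    conversationId,
    turn: turn++,
    source,
    contentHash: createHash("sha256").update(content).digest("hex").slice(0, 12),
    timestamp: new Date().toISOString(),
  });
}
```

Two events with the same `contentHash` and different `turn` values are exactly the repeated-output signal from step 2, now visible in centralized logs.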

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
