AutoGen Tutorial (TypeScript): debugging agent loops for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to spot, instrument, and stop runaway agent loops in AutoGen TypeScript. You need this when an agent keeps re-planning, repeating the same tool call, or never returns a final answer.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project with ts-node or tsx
  • AutoGen packages:
    • @autogen/core
    • @autogen/openai
  • An OpenAI API key set as OPENAI_API_KEY
  • A terminal where you can run the script repeatedly
  • Basic familiarity with AutoGen agents and model clients

Install the dependencies:

npm install @autogen/core @autogen/openai
npm install -D typescript tsx @types/node

Step-by-Step

  1. Start with a minimal agent that can loop.

The first debugging mistake is adding too much logic before you can reproduce the bug. Keep the agent simple, then observe whether it stops on its own or keeps calling the same model path.

import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new AssistantAgent({
  name: "support_agent",
  modelClient,
  systemMessage: "You are a support assistant. Answer briefly.",
});

const result = await agent.run({
  task: "Explain what an overdraft fee is in one sentence.",
});

console.log(result.messages.at(-1)?.content);

  2. Add hard limits so a loop cannot run forever.

If your setup uses multi-agent handoffs or tool use, you need a guardrail before debugging anything else. A max turn limit gives you a clean failure mode instead of burning tokens until timeout.

import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new AssistantAgent({
  name: "support_agent",
  modelClient,
  systemMessage: [
    "You are a support assistant.",
    "Never repeat the same answer twice.",
    "Stop after one complete response.",
  ].join(" "),
});

const result = await agent.run({
  task: "Summarize mortgage pre-approval in one paragraph.",
  maxTurns: 3,
});

console.log(result.messages.map((m) => `${m.role}: ${m.content}`).join("\n"));

  3. Instrument every turn so you can see where repetition starts.

You do not debug loops by reading the final answer. You debug them by logging each message and checking whether the content, role, or tool pattern repeats across turns.

import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new AssistantAgent({
  name: "debug_agent",
  modelClient,
  systemMessage: "Be concise and do not repeat yourself.",
});

const result = await agent.run({ task: "Explain deductible vs copay." });

for (const [index, message] of result.messages.entries()) {
  console.log(`--- Turn ${index + 1} ---`);
  console.log(`role=${message.role}`);
  console.log(message.content);
}

  4. Detect repeated outputs and fail fast.

A beginner-friendly loop detector does not need fancy embeddings. For most cases, comparing normalized text across turns is enough to catch “same answer again” behavior early.

function normalize(text: string): string {
  return text.toLowerCase().replace(/\s+/g, " ").trim();
}

function hasRepeatedContent(messages: Array<{ content?: string }>): boolean {
  const seen = new Set<string>();

  for (const message of messages) {
    if (!message.content) continue;
    const content = normalize(message.content);
    if (seen.has(content)) return true;
    seen.add(content);
  }

  return false;
}

// `result` here is the agent run result from the previous step.
if (hasRepeatedContent(result.messages)) {
  throw new Error("Loop detected: repeated assistant content.");
}

  5. Add a stop condition in your orchestration code.

In production, you usually do not rely on the model to decide when to stop. Wrap the run in your own control flow so you can terminate on repeated state, repeated tool calls, or too many turns.

import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

const client = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new AssistantAgent({
  name: "bounded_agent",
  modelClient: client,
});

let lastAnswer = "";
for (let i = 0; i < 3; i++) {
  const result = await agent.run({ task: "Give one sentence about APR." });
  const answer = String(result.messages.at(-1)?.content ?? "");

  if (answer === lastAnswer) {
    throw new Error(`Stuck loop detected at iteration ${i + 1}`);
  }

  lastAnswer = answer;
}
console.log(lastAnswer);

Testing It

Run the script with a prompt that previously caused repetition, then watch the turn logs. If the same content appears twice in a row, your detector should fail before the loop grows.

Try two cases:

  • A normal question that should finish in one response
  • A prompt that triggers rephrasing or self-correction and used to loop

If your guardrails work, you will either get one clean answer or an explicit error like Loop detected. That is what you want during debugging because it turns an invisible failure into a visible one.
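Before wiring the detector into a live run, both cases can be exercised offline with no API calls. The sketch below repeats the `normalize` and `hasRepeatedContent` helpers from step 4 so it runs standalone; the sample message arrays are invented stand-ins for real run transcripts.

```typescript
// Same normalization and detector as in step 4, repeated so this file runs alone.
function normalize(text: string): string {
  return text.toLowerCase().replace(/\s+/g, " ").trim();
}

function hasRepeatedContent(messages: Array<{ content?: string }>): boolean {
  const seen = new Set<string>();
  for (const message of messages) {
    if (!message.content) continue;
    const content = normalize(message.content);
    if (seen.has(content)) return true;
    seen.add(content);
  }
  return false;
}

// Case 1: a normal run — every turn is distinct, so the detector stays quiet.
const cleanRun = [
  { content: "An overdraft fee is charged when you spend more than your balance." },
  { content: "TERMINATE" },
];

// Case 2: a stuck run — the same answer reappears with different casing/whitespace.
const stuckRun = [
  { content: "APR is the yearly cost of borrowing." },
  { content: "  apr is the yearly   cost of borrowing. " },
];

console.log(hasRepeatedContent(cleanRun)); // false
console.log(hasRepeatedContent(stuckRun)); // true
```

Because normalization collapses whitespace and case before comparing, the second case is flagged even though the raw strings differ byte-for-byte.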

Next Steps

  • Add tool-call logging so you can detect repeated function invocations, not just repeated text
  • Learn how to use structured outputs to reduce ambiguous “keep going” behavior
  • Build per-agent metrics for turn count, token usage, and duplicate-message rate
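The first bullet can be sketched in a few lines. Note this is a sketch, not the library's API: the `toolCalls` message shape below is an assumption, so map the field names onto whatever your AutoGen version actually records for function invocations.

```typescript
// Hypothetical message shape — adapt to your AutoGen version's real transcript format.
interface ToolCall {
  name: string;
  args: string; // serialized arguments, e.g. a JSON string
}

interface AgentMessage {
  toolCalls?: ToolCall[];
}

// Returns true once the same tool has been invoked with the same
// arguments more than `limit` times across the whole conversation.
function hasRepeatedToolCalls(messages: AgentMessage[], limit = 2): boolean {
  const counts = new Map<string, number>();

  for (const message of messages) {
    for (const call of message.toolCalls ?? []) {
      const key = `${call.name}:${call.args}`;
      const next = (counts.get(key) ?? 0) + 1;
      if (next > limit) return true;
      counts.set(key, next);
    }
  }

  return false;
}
```

Keying on name plus serialized arguments catches the common failure where an agent calls the same tool with identical inputs every turn, while still allowing legitimate repeat calls with different arguments.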

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
