AutoGen Tutorial (TypeScript): debugging agent loops for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to instrument an AutoGen TypeScript agent loop so you can see why it repeats, stalls, or hands off incorrectly. You need this when a multi-agent workflow looks fine in logs at a high level, but one agent keeps re-entering the same state and burning tokens.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with ts-node or tsx
  • AutoGen packages:
    • @autogenai/autogen
    • openai
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with:
    • AssistantAgent
    • UserProxyAgent
    • GroupChat or a simple two-agent loop
  • A terminal with environment variables enabled

Step-by-Step

  1. Start with a minimal loop that can reproduce the problem.
    Don’t debug your production graph first; isolate the smallest conversation that still loops.
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const assistant = new AssistantAgent({
  name: "assistant",
  modelClient: {
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  },
});

const user = new UserProxyAgent({
  name: "user",
});

async function main() {
  const result = await user.initiateChat(assistant, {
    message: "Write a short JSON schema for an invoice.",
    maxTurns: 6,
  });

  console.log(result);
}

main().catch(console.error);
  2. Add explicit turn-by-turn logging around every message exchange.
    Most loop bugs become obvious once you print the speaker and a truncated preview of each turn's content, instead of only printing the final output.
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const assistant = new AssistantAgent({
  name: "assistant",
  modelClient: {
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  },
});

const user = new UserProxyAgent({ name: "user" });

async function main() {
  let turns = 0;
  const result = await user.initiateChat(assistant, {
    message: "Draft a validation checklist for invoice JSON.",
    maxTurns: 4,
    onTurnEnd: (turn) => {
      turns += 1;
      console.log(
        `[turn ${turns}] ${turn.speaker}: ${String(turn.message).slice(0, 120)}`
      );
    },
  });

  console.log("final:", result);
}

main().catch(console.error);
  3. Detect repeated content and stop the loop early.
    If an agent is producing nearly identical responses across turns, you want to fail fast and capture the state that caused it.
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const assistant = new AssistantAgent({
  name: "assistant",
  modelClient: {
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  },
});

const user = new UserProxyAgent({ name: "user" });

async function main() {
  const seen = new Set<string>();

  await user.initiateChat(assistant, {
    message: "Explain how to validate an invoice schema in three steps.",
    maxTurns: 8,
    onTurnEnd: (turn) => {
      const text = String(turn.message).trim();
      const fingerprint = text.toLowerCase().replace(/\s+/g, " ").slice(0, 200);

      if (seen.has(fingerprint)) {
        throw new Error(`Loop detected on repeated output from ${turn.speaker}`);
      }

      seen.add(fingerprint);
      console.log(`[${turn.speaker}] ${text}`);
    },
  });
}

main().catch(console.error);
  4. Inspect the exact prompt context before each agent call.
    In practice, loops often come from hidden context drift: stale instructions, duplicated system messages, or tool output being re-injected as if it were user input.
import { AssistantAgent } from "@autogenai/autogen";

const assistant = new AssistantAgent({
  name: "assistant",
  modelClient: {
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  },
});

async function main() {
  const messages = [
    { role: "system" as const, content: "You are a concise assistant." },
    { role: "user" as const, content: "Summarize this invoice schema in one paragraph." },
    { role: "assistant" as const, content: '{"fields":["id","amount","currency"]}' },
    { role: "user" as const, content: "Now validate it again." },
  ];

  console.log(JSON.stringify(messages, null, 2));

  const reply = await assistant.generateReply(messages);
  console.log(reply.content);
}

main().catch(console.error);
  5. Add a hard stop based on token growth or turn count.
    Production systems need guardrails even after you fix the root cause; otherwise one bad prompt can still run up cost and latency.
import { AssistantAgent, UserProxyAgent } from "@autogenai/autogen";

const assistant = new AssistantAgent({
  name: "assistant",
  modelClient: {
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  },
});

const user = new UserProxyAgent({ name: "user" });

async function main() {
  let totalChars = 0;

  await user.initiateChat(assistant, {
    message: "Give me a compact implementation plan for invoice validation.",
    // MAX_TURNS can only lower the cap below 5, never raise it
    maxTurns: Math.min(5, Number(process.env.MAX_TURNS ?? "5")),
    onTurnEnd: (turn) => {
      totalChars += String(turn.message).length;
      console.log(`[${turn.speaker}] chars=${totalChars}`);

      if (totalChars > 4000) {
        throw new Error("Conversation exceeded safe size");
      }
    },
  });
}

main().catch(console.error);

Testing It

Run the script with OPENAI_API_KEY set and verify that every turn prints a speaker label and truncated content. If you intentionally prompt for something ambiguous like “keep improving this,” you should see whether the agent starts repeating itself within a few turns.

Next, force a loop by feeding back the same assistant output as the next user message and confirm your duplicate-detection guard throws immediately. Then lower maxTurns and verify that runaway conversations terminate cleanly instead of hanging.
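The forced-loop test above can be rehearsed without any AutoGen dependency. The sketch below extracts the fingerprint logic from step three into a standalone helper (the function names are mine, not part of any library) and simulates the user echoing the assistant's last output:

```typescript
// Minimal fingerprint + repeat detector, extracted from the guard in step 3,
// so the loop-forcing test can run without any AutoGen dependency.
function fingerprint(text: string): string {
  return text.trim().toLowerCase().replace(/\s+/g, " ").slice(0, 200);
}

function makeRepeatDetector() {
  const seen = new Set<string>();
  // Returns true the first time a repeated (normalized) message is observed.
  return (message: string): boolean => {
    const fp = fingerprint(message);
    if (seen.has(fp)) return true;
    seen.add(fp);
    return false;
  };
}

// Simulate the forced loop: the "user" echoes the assistant's last output.
const isRepeat = makeRepeatDetector();
const assistantOutput = "Validate the id, amount, and currency fields.";
console.log(isRepeat(assistantOutput)); // false: first occurrence
console.log(isRepeat("  validate the ID, amount, and currency fields. ")); // true: same after normalization
```

Because the fingerprint normalizes case and whitespace, the guard still fires when the model paraphrases itself only cosmetically.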

If you’re debugging a real workflow, compare the logged prompt context from step four against the actual messages sent by your orchestration layer. Most agent loops come from prompt contamination or missing termination conditions, not from the model “getting stuck” by itself.
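One way to make that comparison concrete is a small diff helper. This is a hypothetical sketch (the `firstDivergence` function and `Msg` shape are my own, matching the message arrays logged in step four), which reports the first index where the logged context and the actually-sent context disagree:

```typescript
// Hypothetical helper: given the message list you logged in step 4 and the
// list your orchestration layer actually sent, report the first index where
// they diverge, or -1 if they are identical.
type Msg = { role: string; content: string };

function firstDivergence(expected: Msg[], actual: Msg[]): number {
  const len = Math.max(expected.length, actual.length);
  for (let i = 0; i < len; i++) {
    const a = expected[i];
    const b = actual[i];
    if (!a || !b || a.role !== b.role || a.content !== b.content) return i;
  }
  return -1;
}

const logged: Msg[] = [
  { role: "system", content: "You are a concise assistant." },
  { role: "user", content: "Now validate it again." },
];
const sent: Msg[] = [
  { role: "system", content: "You are a concise assistant." },
  { role: "system", content: "You are a concise assistant." }, // duplicated system prompt
  { role: "user", content: "Now validate it again." },
];

console.log(firstDivergence(logged, sent)); // 1: contamination starts at index 1
```

A duplicated system message like the one above is exactly the kind of prompt contamination that keeps an agent re-answering the same instruction.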

Next Steps

  • Add structured tracing with OpenTelemetry so each turn carries request IDs and token counts.
  • Move loop detection into a reusable middleware layer for all agents.
  • Test tool-call workflows separately from pure chat loops so you can isolate whether the bug is in prompting or tool routing.
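As a starting point for that middleware layer, here is a sketch of a factory that bundles the repeat check and the size budget from steps three and five into one reusable callback. The `{ speaker, message }` turn shape mirrors the hook used earlier in this tutorial and is an assumption, not a documented AutoGen interface:

```typescript
// Sketch of a reusable loop guard: a factory producing an onTurnEnd-style
// callback that combines the repeat check and the size budget from earlier
// steps. The Turn shape is an assumption mirroring the hook used above.
interface Turn {
  speaker: string;
  message: unknown;
}

function createLoopGuard(opts: { maxChars?: number } = {}) {
  const maxChars = opts.maxChars ?? 4000;
  const seen = new Set<string>();
  let totalChars = 0;

  return (turn: Turn): void => {
    const text = String(turn.message).trim();
    const fp = text.toLowerCase().replace(/\s+/g, " ").slice(0, 200);

    if (seen.has(fp)) {
      throw new Error(`Loop detected on repeated output from ${turn.speaker}`);
    }
    seen.add(fp);

    totalChars += text.length;
    if (totalChars > maxChars) {
      throw new Error(`Conversation exceeded ${maxChars} chars`);
    }
  };
}

// Each chat gets its own guard instance, so state never leaks across runs:
const guard = createLoopGuard({ maxChars: 500 });
guard({ speaker: "assistant", message: "First draft of the checklist." });
```

Creating one guard per conversation keeps the fingerprint set and character budget scoped to a single run, which is the property you lose if the detection logic lives in module-level state.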

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

