LlamaIndex Tutorial (TypeScript): debugging agent loops for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to detect, inspect, and stop agent loops in a LlamaIndex TypeScript agent before they burn tokens or hang your workflow. You need this when your agent keeps calling the same tool, repeats the same reasoning path, or never reaches a final answer.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project initialized with npm init -y
  • llamaindex installed
  • An OpenAI API key set as OPENAI_API_KEY
  • A terminal for running the script
  • Basic familiarity with LlamaIndex agents and tools

Install the package:

npm install llamaindex
npm install -D typescript tsx @types/node

Set your environment variable:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with a minimal agent that can loop.

The easiest way to debug loops is to reproduce them with a small tool that returns something predictable. Here we create a tool that always echoes input, which makes repeated calls obvious in the logs.

import { FunctionTool, OpenAI, ReActAgent } from "llamaindex";

const echoTool = FunctionTool.from(
  async ({ text }: { text: string }) => {
    return `echo:${text}`;
  },
  {
    name: "echo_tool",
    description: "Echoes the provided text back.",
    parameters: {
      type: "object",
      properties: {
        text: { type: "string" },
      },
      required: ["text"],
    },
  }
);

const llm = new OpenAI({ model: "gpt-4o-mini" });

const agent = new ReActAgent({
  tools: [echoTool],
  llm,
});

  2. Add trace logging around each agent run.

You want to know whether the model is reusing the same tool call, repeating the same prompt shape, or failing to produce a final answer. The simplest production pattern is to log a run ID plus the raw response payload.

async function runDebug(prompt: string) {
  const runId = crypto.randomUUID();
  console.log(`\n[run:${runId}] prompt=${prompt}`);

  const response = await agent.chat({
    message: prompt,
  });

  console.log(`[run:${runId}] response=${response.response}`);
}

await runDebug("Use the echo tool twice and then answer.");
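Latency is often the first hint of a loop: a run that normally takes seconds starts taking minutes. As a minimal sketch using plain Node (no LlamaIndex APIs are assumed here; `timed` is a hypothetical helper), you can wrap any async call with wall-clock timing:

```typescript
// Hypothetical helper: wraps any async function and logs wall-clock latency.
// Plain Node -- nothing LlamaIndex-specific is assumed.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await fn();
  } finally {
    const ms = (performance.now() - start).toFixed(1);
    console.log(`[${label}] took ${ms}ms`);
  }
}

// Usage sketch:
// await timed(`run:${runId}`, () => agent.chat({ message: prompt }));
```

Because the timing lands in a `finally` block, you get a latency line even when the run throws, which is exactly when you want it.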

  3. Put a hard stop on repeated tool behavior.

If you already know the loop pattern, fail fast instead of letting it continue. In practice, this means counting identical tool names or identical arguments across turns and aborting when they repeat.

type ToolCallRecord = {
  name: string;
  input: string;
};

const seen = new Map<string, number>();

function trackToolCall(call: ToolCallRecord) {
  const key = `${call.name}:${call.input}`;
  const count = (seen.get(key) ?? 0) + 1;
  seen.set(key, count);

  if (count >= 3) {
    throw new Error(`Loop detected for ${key}`);
  }
}

const monitoredTool = FunctionTool.from(
  async ({ text }: { text: string }) => {
    trackToolCall({ name: "echo_tool", input: text });
    return `echo:${text}`;
  },
  {
    name: "echo_tool",
    description: "Echoes the provided text back.",
    parameters: {
      type: "object",
      properties: {
        text: { type: "string" },
      },
      required: ["text"],
    },
  }
);
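Hand-editing every tool to call trackToolCall gets tedious once you have more than one. A generic wrapper is a reasonable sketch — `withLoopGuard` below is a hypothetical helper, not a LlamaIndex API — that keeps its own counter per wrapped function:

```typescript
// Hypothetical generic guard: wraps any async tool function and throws after
// `limit` identical (name, input) calls. Pure TypeScript, no LlamaIndex types.
function withLoopGuard<I, O>(
  name: string,
  fn: (input: I) => Promise<O>,
  limit = 3
): (input: I) => Promise<O> {
  const counts = new Map<string, number>();
  return async (input: I) => {
    const key = `${name}:${JSON.stringify(input)}`;
    const count = (counts.get(key) ?? 0) + 1;
    counts.set(key, count);
    if (count >= limit) {
      // Throw before executing the tool so the repeated call never runs.
      throw new Error(`Loop detected for ${key}`);
    }
    return fn(input);
  };
}

// Usage sketch: wrap the implementation before handing it to FunctionTool.from:
// const guardedEcho = withLoopGuard("echo_tool", async ({ text }: { text: string }) => `echo:${text}`);
```

Note that the counter persists across runs; if repeats across separate prompts are legitimate in your workflow, create a fresh wrapper per run or add a reset.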

  4. Force bounded execution with an iteration cap.

A common reason agents loop is that they are allowed too many reasoning steps. Set a ceiling so one bad plan cannot turn into an unbounded bill.

const boundedAgent = new ReActAgent({
  tools: [monitoredTool],
  llm,
});

async function runBounded(prompt: string) {
  try {
    const result = await boundedAgent.chat({
      message: prompt,
      maxIterations: 5,
    });

    console.log(result.response);
  } catch (err) {
    console.error("Agent stopped:", err instanceof Error ? err.message : err);
  }
}

await runBounded("Keep calling the echo tool until you are sure.");
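An iteration cap bounds reasoning steps, but a stuck tool or a slow model response can still hang a run. A complementary fence, sketched here with plain promises (`withTimeout` is a hypothetical helper, not part of llamaindex), bounds wall-clock time as well:

```typescript
// Hypothetical wall-clock bound: rejects if the wrapped promise takes
// longer than `ms`. Complements maxIterations, which only counts steps.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch:
// const result = await withTimeout(boundedAgent.chat({ message: prompt }), 30_000);
```

The `finally` clears the timer so a fast, successful run does not leave a dangling timeout keeping the process alive.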

  5. Inspect prompts and outputs when the loop happens.

When debugging real systems, you need more than “it looped.” You need the exact prompt shape and intermediate output so you can tell whether the model was confused by instructions, missing context, or receiving tool output that encouraged repetition.

async function inspectRun(prompt: string) {
  const result = await boundedAgent.chat({
    message: prompt,
    maxIterations: 5,
  });

  // The final response alone rarely explains a loop; pair this with
  // logging at the tool level so each intermediate input and output is visible.
  console.log("Final response:", result.response);
}

await inspectRun("Summarize why repeated tool calls are bad.");
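The chat call above only surfaces the final response. To capture the intermediate traffic, one option is to record every tool invocation yourself — `traceTool` below is a hypothetical wrapper, not a LlamaIndex API — and dump the trace whenever a loop guard fires:

```typescript
// Hypothetical call tracer: records every tool invocation so that when a
// loop fires you can replay the exact inputs and outputs that led up to it.
type TraceEntry = { tool: string; input: unknown; output: unknown; ms: number };

const trace: TraceEntry[] = [];

function traceTool<I, O>(
  tool: string,
  fn: (input: I) => Promise<O>
): (input: I) => Promise<O> {
  return async (input: I) => {
    const start = performance.now();
    const output = await fn(input);
    trace.push({ tool, input, output, ms: performance.now() - start });
    return output;
  };
}

// On failure, dump the trace to see what the model actually received:
// console.table(trace);
```

Repeated rows with identical `input` and `output` are the clearest loop signature: the tool is feeding the model the same text back, and the model keeps asking for it.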

Testing It

Run the script with npx tsx your-file.ts and watch for repeated tool-call patterns in your logs. If you trigger the loop guard, you should see your custom Loop detected error instead of an endless agent run. If you lower maxIterations, confirm that long-running prompts stop early and return a controlled failure mode. The main signal you want is simple: repeated behavior becomes visible before it becomes expensive.

Next Steps

  • Add structured logging for every tool invocation with request IDs and latency
  • Build a retry policy that distinguishes model errors from genuine loop detection
  • Learn how to use custom system prompts to reduce repetitive agent planning

By Cyprian Aarons, AI Consultant at Topiax.
