CrewAI Tutorial (TypeScript): debugging agent loops for intermediate developers
This tutorial shows you how to stop a CrewAI agent from spinning in loops, add observability around each turn, and put hard guardrails in place so the loop ends when it should. You need this when an agent keeps re-asking for the same tool call, repeats the same reasoning, or burns tokens without making progress.
What You'll Need
- Node.js 18+
- TypeScript 5+
- `crewai` installed in your project
- `dotenv` for environment variables
- An LLM API key set in `.env`
- A basic CrewAI TypeScript project with a `tsconfig.json`
- One tool or task that can be used to reproduce the loop
Step-by-Step
- Start by setting up a minimal project with strict logging enabled. For debugging loops, you want deterministic output and no hidden state.

```shell
npm init -y
npm install crewai dotenv
npm install -D typescript tsx @types/node
npx tsc --init
```
- Create a baseline agent and task, but keep the goal narrow. Loops usually happen when the task is vague, so give the agent one concrete outcome and a low iteration budget.

```typescript
import "dotenv/config";
import { Agent, Task, Crew } from "crewai";

const agent = new Agent({
  name: "LoopDebugAgent",
  role: "Support analyst",
  goal: "Summarize the latest incident note in one paragraph",
  backstory: "You are precise and stop after producing one answer.",
  verbose: true,
  maxIter: 3,
});

const task = new Task({
  description: "Read the incident note and summarize it once.",
  expectedOutput: "A single concise incident summary.",
  agent,
});

const crew = new Crew({
  agents: [agent],
  tasks: [task],
});

await crew.kickoff();
```
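The low iteration budget is the main guardrail here, so it is worth failing fast if a config ever ships without one. Below is a minimal pre-flight sketch; the `assertBoundedRun` helper and its `LoopBudget` shape are my own names for illustration, not CrewAI APIs:

```typescript
// Hypothetical guard, not a CrewAI API: rejects agent configs that
// could loop indefinitely, before you ever call kickoff().
interface LoopBudget {
  maxIter?: number;
}

function assertBoundedRun(config: LoopBudget, ceiling = 10): void {
  if (config.maxIter === undefined) {
    throw new Error("maxIter is unset: the agent has no iteration budget");
  }
  if (config.maxIter < 1 || config.maxIter > ceiling) {
    throw new Error(`maxIter=${config.maxIter} is outside 1..${ceiling}`);
  }
}

// Usage: call this with the same options object you pass to new Agent(...).
assertBoundedRun({ maxIter: 3 }); // passes silently
```

Running this check in CI keeps an "unbounded" agent config from ever reaching production by accident.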
- Add a trace wrapper around your run so you can see whether the model is repeating itself or re-entering the same tool path. In practice, this is where you catch missing stop conditions and prompts that invite endless refinement.

```typescript
import "dotenv/config";
import { Agent, Task, Crew } from "crewai";

function logStep(label: string, data: unknown) {
  console.log(`\n=== ${label} ===`);
  console.log(JSON.stringify(data, null, 2));
}

const agent = new Agent({
  name: "LoopDebugAgent",
  role: "Support analyst",
  goal: "Summarize the latest incident note in one paragraph",
  backstory: "You are precise and stop after producing one answer.",
  verbose: true,
  maxIter: 3,
});

const task = new Task({
  description: "Read the incident note and summarize it once.",
  expectedOutput: "A single concise incident summary.",
  agent,
});

logStep("CONFIG", { maxIter: 3, verbose: true });

const crew = new Crew({ agents: [agent], tasks: [task] });
const result = await crew.kickoff();
logStep("RESULT", result);
```
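Printing each step only helps if you read the logs. To catch loops programmatically, you can keep a normalized fingerprint of each intermediate output and flag the first repeat. A minimal sketch in plain TypeScript, independent of CrewAI (the `RepetitionTracker` name is mine):

```typescript
// Tracks normalized outputs across turns and reports when the agent
// starts repeating itself. Normalization collapses whitespace and case
// so trivially reworded repeats still match.
class RepetitionTracker {
  private seen = new Set<string>();

  private normalize(text: string): string {
    return text.toLowerCase().replace(/\s+/g, " ").trim();
  }

  // Returns true if this output was already produced in an earlier turn.
  record(output: string): boolean {
    const key = this.normalize(output);
    if (this.seen.has(key)) return true;
    this.seen.add(key);
    return false;
  }
}

const tracker = new RepetitionTracker();
tracker.record("Summarizing the incident note."); // first time: false
tracker.record("summarizing  the incident note."); // repeat: true
```

Feed each intermediate output you log into `record`, and abort (or raise) as soon as it returns true.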
- If the loop is caused by tool use, constrain the tool contract hard. The usual failure mode is an agent calling a tool repeatedly because the output format is ambiguous or the tool can return partial data forever.

```typescript
import { Agent, Tool } from "crewai";

export const getIncidentNote = new Tool({
  name: "get_incident_note",
  description: "Returns exactly one incident note as plain text.",
  execute: async () => {
    return "INCIDENT-42: Payment webhook failed at 10:14 UTC due to timeout.";
  },
});

export const toolAgent = new Agent({
  name: "ToolBoundAgent",
  role: "Incident summarizer",
  goal: "Fetch the incident note once and produce a final summary without repeating tool calls.",
  backstory: "You must not call any tool more than once unless explicitly required.",
  tools: [getIncidentNote],
  verbose: true,
  maxIter: 2,
});
```
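The backstory above asks the model not to repeat tool calls, but prompts are only a request. If your setup lets you pass a plain async function as a tool's `execute`, you can enforce the single-call contract in code with a generic `once` wrapper. This caching behavior is my own sketch, not a CrewAI feature:

```typescript
// Wraps an async function so repeat calls return the cached first
// result instead of re-executing. This turns "call the tool again"
// into a no-op, which starves most tool-driven loops.
function once<T>(fn: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => {
    if (!cached) cached = fn();
    return cached;
  };
}

let calls = 0;
const fetchNote = once(async () => {
  calls += 1;
  return "INCIDENT-42: Payment webhook failed at 10:14 UTC due to timeout.";
});

// Even if the agent asks twice, the underlying fetch runs once.
void fetchNote();
void fetchNote();
console.log(calls); // 1
```

Returning a cached result (rather than throwing) is deliberate: it gives the model the same answer again, which tends to push it toward finalizing instead of retrying.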
- Add an explicit stop rule in the prompt. This is the simplest fix for self-reinforcing loops because it tells the model what completion looks like and what to do when it starts repeating.

```typescript
import { Task } from "crewai";

export const guardedTask = new Task({
  description: [
    "Fetch the incident note once.",
    "If you have already used the tool once, do not call it again.",
    'If your answer starts repeating previous content, immediately output only "[STOPPED_LOOP]".',
    "Return exactly one final summary sentence.",
  ].join(" "),
  expectedOutput: 'One sentence summary or "[STOPPED_LOOP]" if repetition begins.',
});
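A nice side effect of this stop rule is that the output becomes machine-checkable: either the sentinel or exactly one sentence. You can verify results after a run with a small checker like the sketch below (the function name and the naive sentence heuristic are my own):

```typescript
// Classifies a final answer against the guarded task's contract:
// the loop sentinel, a single-sentence summary, or a violation.
type Verdict = "stopped_loop" | "ok" | "violation";

function checkGuardedOutput(output: string): Verdict {
  const text = output.trim();
  if (text === "[STOPPED_LOOP]") return "stopped_loop";
  // Naive heuristic: split on sentence terminators and count non-empty
  // pieces. Decimal points would break this; fine for smoke checks.
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);
  return sentences.length === 1 ? "ok" : "violation";
}

checkGuardedOutput("Webhook timed out at 10:14 UTC."); // "ok"
checkGuardedOutput("[STOPPED_LOOP]"); // "stopped_loop"
```

Treat a `"violation"` verdict as a failed run in your test suite, since multi-sentence output usually means the model started elaborating instead of stopping.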
- Finally, test with a deliberately bad prompt so you can confirm your guardrails work under failure conditions. A good debugging setup should fail fast instead of running forever.

```typescript
import { Agent, Crew } from "crewai";
import { getIncidentNote } from "./tools";
import { guardedTask } from "./task";

const debugAgent = new Agent({
  name: "DebugAgent",
  role: "Incident summarizer",
  goal: "Keep refining the summary until it is perfect and fully complete.", // intentionally risky
  backstory: "You must stop if you detect repetition or if asked to repeat work.",
  tools: [getIncidentNote],
  verbose: true,
  maxIter: 2,
});

// Register the task with the crew up front; a crew constructed without
// any tasks has no work bound to an agent.
const crew = new Crew({
  agents: [debugAgent],
  tasks: [guardedTask],
});

await crew.kickoff();
```
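The "intentionally risky" goal is risky precisely because it is open-ended. You can lint goal strings for this kind of phrasing before they ever reach an agent; the phrase list below is an illustrative assumption, not an exhaustive taxonomy:

```typescript
// Flags goal wording that invites endless refinement. The phrase list
// is illustrative, not exhaustive: tune it against your own failure logs.
const OPEN_ENDED_PHRASES = [
  "keep refining",
  "until it is perfect",
  "fully complete",
  "continuously",
  "as long as needed",
];

function flagOpenEndedGoal(goal: string): string[] {
  const lower = goal.toLowerCase();
  return OPEN_ENDED_PHRASES.filter((p) => lower.includes(p));
}

flagOpenEndedGoal(
  "Keep refining the summary until it is perfect and fully complete."
); // flags three phrases
```

Wiring this into code review or CI means the deliberately bad prompt in this step would be rejected automatically, which is exactly the behavior you want outside of a debugging session.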
Testing It
Run your script with tsx and watch for repeated tool calls or identical intermediate outputs. If maxIter is working, execution should stop after a small number of reasoning cycles instead of hanging.
Then intentionally make the task vague and compare behavior against your guarded version. The guarded version should either finish cleanly or emit [STOPPED_LOOP], while the unguarded version will usually show repeated reasoning in verbose logs.
If you have access to traces or logs from your runtime, check for these signals:
- Same tool called multiple times with identical arguments
- Same assistant message repeated across turns
- No progression toward a final answer after step one or two
If those signals appear, tighten the task wording, lower maxIter, or harden the tool contract before adding more complexity.
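If your runtime exposes the trace as a list of tool invocations, the first of those signals can be checked mechanically. A sketch assuming a simple `{ tool, args }` record shape (the shape is my assumption, not a CrewAI trace format):

```typescript
// One entry per tool invocation, pulled from your run's trace or logs.
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

// Returns the keys of any tool called more than once with identical
// arguments: the classic signature of a tool-driven loop.
function findDuplicateCalls(trace: ToolCall[]): string[] {
  const counts = new Map<string, number>();
  for (const call of trace) {
    // JSON-stringify args for a cheap structural identity; key order
    // matters here, which is acceptable for log-shaped data.
    const key = `${call.tool}:${JSON.stringify(call.args)}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()].filter(([, n]) => n > 1).map(([k]) => k);
}

findDuplicateCalls([
  { tool: "get_incident_note", args: {} },
  { tool: "get_incident_note", args: {} },
]); // one duplicate key
```

An empty result means every tool call in the run was unique; any non-empty result is worth investigating before you touch prompts.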
Next Steps
- Add structured tracing with OpenTelemetry so you can correlate loops across agents and tools.
- Introduce memory limits and retrieval filters to prevent stale context from triggering repeat behavior.
- Build a regression test that asserts max iterations and rejects repeated assistant messages.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.