LangChain Tutorial (TypeScript): debugging agent loops for beginners
This tutorial shows you how to spot, log, and stop runaway agent loops in a LangChain TypeScript app. You need this when an agent keeps calling tools forever, repeats the same action, or burns tokens without making progress.
What You'll Need
- Node.js 18+
- A TypeScript project with `ts-node` or a build step
- `langchain`
- `@langchain/openai`
- `zod` (used for tool schemas)
- An OpenAI API key in `OPENAI_API_KEY`
- A terminal where you can run the script and inspect logs
Install the packages:
```bash
npm install langchain @langchain/openai zod
npm install -D typescript ts-node @types/node
```
Set your API key:
```bash
export OPENAI_API_KEY="your-key-here"
```
Step-by-Step
1. Start with a minimal agent that can loop.

The point here is not to build a useful assistant yet. It’s to create a controlled example where you can observe repeated tool calls and understand where the loop comes from.
```ts
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// A trivial tool: it only echoes, but that's enough to observe call patterns.
const pingTool = tool(
  async ({ input }) => {
    console.log("[tool] ping called with:", input);
    return `pong: ${input}`;
  },
  {
    name: "ping",
    description: "Echoes the input back.",
    schema: z.object({
      input: z.string(),
    }),
  }
);
```
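Tools created with `tool()` are standalone runnables, so you can sanity-check one directly before any agent is involved. A quick check (top-level `await` assumes your project is set up for it, as the later steps do):

```ts
// Call the tool directly, outside any agent, to confirm it behaves.
const pong = await pingTool.invoke({ input: "hello" });
console.log(pong); // expected: "pong: hello"
```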
2. Add a second tool that makes looping easier to detect.

A common failure mode is an agent getting stuck between two tools or repeating the same one. This setup gives you enough surface area to see that behavior clearly in logs.
```ts
const statusTool = tool(
  async ({ value }) => {
    console.log("[tool] status called with:", value);
    return `status=${value}`;
  },
  {
    name: "status",
    description: "Returns a simple status string.",
    schema: z.object({
      value: z.string(),
    }),
  }
);

const tools = [pingTool, statusTool];
```
3. Build the agent with a hard iteration cap.

This is your first real defense against loops. Wrapping the agent in an `AgentExecutor` gives you the `maxIterations` option: if the model keeps reasoning forever, `maxIterations` stops it before it becomes an incident.
```ts
// Tool-calling agents need a prompt with an agent_scratchpad placeholder.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = createToolCallingAgent({ llm, tools, prompt });

const executor = new AgentExecutor({
  agent,
  tools,
  maxIterations: 4, // hard cap on think/act cycles
  returnIntermediateSteps: true, // keep every action/observation pair for later inspection
});

const result = await executor.invoke({
  input: "Use the tools to figure out whether the system is healthy, then answer.",
});

console.log("FINAL RESULT:");
console.log(JSON.stringify(result, null, 2));
```
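One thing worth knowing before you rely on the cap: in my experience `AgentExecutor` does not throw when `maxIterations` runs out; it stops early and puts a stop message in `output`. A rough detection sketch (the exact sentinel wording is an assumption and may vary across versions):

```ts
// Heuristic check for an early stop; the message text may differ by version.
if (typeof result.output === "string" && result.output.toLowerCase().includes("stopped")) {
  console.warn("[guard] agent appears to have hit maxIterations before finishing");
}
```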
4. Add explicit loop debugging with callbacks.

When an agent loops, you need visibility into every tool call and model decision. The simplest production pattern is to log model starts and ends, tool starts and ends, and chain errors in one place.
```ts
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";
import type { Serialized } from "@langchain/core/load/serializable";
import type { BaseMessage } from "@langchain/core/messages";

class DebugHandler extends BaseCallbackHandler {
  name = "debug-handler";

  handleToolStart(tool: Serialized, input: string) {
    // The last segment of the serialized id is the tool's name.
    console.log(`[debug] tool start: ${tool.id[tool.id.length - 1]} input: ${input}`);
  }

  handleToolEnd(output: unknown) {
    console.log(`[debug] tool end: ${JSON.stringify(output)?.slice(0, 200)}`);
  }

  handleChatModelStart(_llm: Serialized, messages: BaseMessage[][]) {
    // Chat models trigger handleChatModelStart rather than handleLLMStart.
    console.log(`[debug] model start with ${messages[0]?.length ?? 0} messages`);
  }

  handleLLMEnd() {
    console.log("[debug] llm end");
  }

  handleChainError(err: Error) {
    console.error("[debug] chain error:", err);
  }
}
```
5. Wire the handler into execution and print every intermediate step.

Intermediate steps are what you inspect when debugging loops. If you only look at the final answer, you miss the repeated actions that caused the problem. Note that the iteration cap already lives on the executor, so only the callbacks go in the per-call options.
```ts
const debugHandler = new DebugHandler();

const debugResult = await executor.invoke(
  {
    input: "Use the tools to figure out whether the system is healthy, then answer.",
  },
  {
    callbacks: [debugHandler],
  }
);
```
console.log("INTERMEDIATE STEPS:");
for (const step of debugResult.intermediateSteps ?? []) {
console.log(JSON.stringify(step, null, 2));
}
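Each step pairs the agent's chosen action with the tool's observation. For loop hunting, it helps to collapse steps to `(tool, input)` pairs; this assumes the `{ action, observation }` step shape that `AgentExecutor` returns when `returnIntermediateSteps` is on:

```ts
// Reduce each step to the two fields that matter for loop detection.
const callSummary = (debugResult.intermediateSteps ?? []).map((step) => ({
  name: step.action.tool,
  args: step.action.toolInput,
}));
console.table(callSummary);
```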
6. Add a loop guard based on repeated tool calls.

In real systems, I don’t rely on iteration limits alone. I also track repeated tool names and inputs so I can fail fast when the agent is clearly stuck on the same path.
```ts
type ToolCall = { name?: string; args?: unknown };

// Returns the first (name, args) combination seen twice, or null.
function detectRepeat(calls: ToolCall[]) {
  const seen = new Map<string, number>();
  for (const call of calls) {
    const key = `${call.name}:${JSON.stringify(call.args)}`;
    const count = (seen.get(key) ?? 0) + 1;
    seen.set(key, count);
    if (count >= 2) return key;
  }
  return null;
}

const repeatKey = detectRepeat([
  { name: "ping", args: { input: "healthy?" } },
  { name: "ping", args: { input: "healthy?" } },
]);

if (repeatKey) {
  throw new Error(`Loop detected on repeated call: ${repeatKey}`);
}
```
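The hand-written fixture above just proves the guard works. To make it operational, feed it the summarized calls you extracted from the intermediate steps in the previous step:

```ts
// Run the guard over the agent's actual tool calls from the debug run.
const liveRepeat = detectRepeat(callSummary);
if (liveRepeat) {
  throw new Error(`Loop detected on repeated call: ${liveRepeat}`);
}
```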
Testing It
Run the script and watch for three things in the terminal: LLM starts, tool starts/ends, and whether execution stops after your iteration limit. If the agent loops, you should see repeated tool calls with similar inputs before it terminates or throws.
Then change the user prompt to something more specific, like “Call each tool once and summarize the result.” If that finishes cleanly while your vague prompt loops or stalls, you’ve confirmed this was a planning problem rather than a broken runtime.
If you want a stronger test, temporarily make one tool return a misleading response like "try again". A bad agent will often keep following that breadcrumb forever unless your loop guard catches it.
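A minimal way to set that trap is to swap in a deliberately misleading variant of the status tool for the duration of the test (this `flakyStatusTool` is a testing stand-in, not something to ship):

```ts
// Bait tool: always answers "try again", which tempts a naive agent to re-call it.
const flakyStatusTool = tool(
  async () => {
    console.log("[tool] flaky status called");
    return "try again";
  },
  {
    name: "status",
    description: "Returns a simple status string.",
    schema: z.object({ value: z.string() }),
  }
);
```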
Next Steps
- Add LangSmith tracing so you can inspect full agent traces instead of terminal logs.
- Implement per-tool timeouts and retries for flaky integrations; a rough timeout sketch follows this list.
- Move from simple repeat detection to policy-based stopping rules using allowed tool sequences.
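For timeouts, a generic `Promise.race` wrapper around the tool body is often enough. This is a plain TypeScript sketch, not a LangChain API; `withTimeout`, `doSlowWork`, and the 5-second budget are all illustrative choices:

```ts
// Hypothetical helper: reject if the wrapped promise takes longer than `ms`.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error(`tool timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}

// Stand-in for a real, possibly slow integration call.
async function doSlowWork(input: string): Promise<string> {
  return `pong: ${input}`;
}

const timeboxedPing = tool(
  async ({ input }) => withTimeout(doSlowWork(input), 5_000),
  {
    name: "ping",
    description: "Echoes the input back, but gives up after 5 seconds.",
    schema: z.object({ input: z.string() }),
  }
);
```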
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.