LangChain Tutorial (TypeScript): debugging agent loops for advanced developers
This tutorial shows you how to trace, detect, and stop runaway agent loops in a LangChain TypeScript agent. You need this when an agent keeps calling tools with no progress, burns tokens, or gets stuck in a repeatable failure pattern that only shows up under real inputs.
What You'll Need

- Node.js 18+
- TypeScript 5+
- `npm` or `pnpm`
- An OpenAI API key
- These packages: `langchain`, `@langchain/openai`, `@langchain/core`, `zod`
- `ts-node` or a TypeScript build setup
- A terminal where you can set environment variables like `OPENAI_API_KEY`
Step-by-Step

1. Start with a minimal agent that can loop.

You want a reproducible baseline before you debug anything. The example below creates a tool-calling agent with one intentionally simple tool so you can observe repeated calls clearly.

```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Look up a customer by email.",
  schema: z.object({
    email: z.string().email(),
  }),
  func: async ({ email }) => {
    return JSON.stringify({ email, status: "active", tier: "gold" });
  },
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a support agent. Use tools when needed."],
  ["human", "{input}"],
]);

// Top-level await requires an ESM setup (e.g. "type": "module" in package.json).
const agent = await createToolCallingAgent({
  llm,
  tools: [lookupCustomer],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [lookupCustomer],
});

const result = await executor.invoke({
  input: "Check the customer record for alice@example.com and keep verifying it.",
});
console.log(result);
```
2. Add hard limits so the loop becomes visible instead of infinite.

If an agent is looping, your first job is to cap the damage. Set `maxIterations` and enable verbose logging so you can see the repeated action pattern in the console.

```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Look up a customer by email.",
  schema: z.object({ email: z.string().email() }),
  func: async ({ email }) => JSON.stringify({ email, status: "active" }),
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Use tools only when they add new information."],
  ["human", "{input}"],
]);

const agent = await createToolCallingAgent({ llm, tools: [lookupCustomer], prompt });

const executor = new AgentExecutor({
  agent,
  tools: [lookupCustomer],
  maxIterations: 3,
  verbose: true,
});

await executor.invoke({
  input: "Verify alice@example.com repeatedly until you're certain.",
});
```
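Once you cap iterations, it helps to quantify the repetition programmatically instead of eyeballing verbose logs. If you pass `returnIntermediateSteps: true` to the `AgentExecutor`, the result includes an `intermediateSteps` array you can scan. The step shape below (`{ action: { tool, toolInput } }`) is an assumption based on current LangChain JS versions; check what your version actually returns.

```typescript
// Finds the longest run of consecutive identical (tool, input) pairs in an
// agent's intermediate steps. A run of 3+ identical calls is a strong loop
// signal. The Step shape is an assumption; adapt it to your LangChain version.
type Step = { action: { tool: string; toolInput: Record<string, unknown> } };

export function longestRepeatRun(steps: Step[]): number {
  let longest = 0;
  let run = 0;
  let prev: string | null = null;
  for (const step of steps) {
    // Same normalization idea as the loop guard: tool name + serialized args.
    const sig = `${step.action.tool}:${JSON.stringify(step.action.toolInput)}`;
    run = sig === prev ? run + 1 : 1;
    prev = sig;
    longest = Math.max(longest, run);
  }
  return longest;
}
```

Run this over `result.intermediateSteps` after each invocation and alert when the run length crosses a threshold you choose.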
3. Instrument tool calls so you can see exactly what repeats.

Verbose mode is not enough once you have multiple tools or nested calls. Wrap each tool with logging that prints arguments, latency, and output size; that makes loop detection much easier in production logs.

```typescript
import { performance } from "node:perf_hooks";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

export const lookupCustomer = new DynamicStructuredTool({
  name: "lookup_customer",
  description: "Look up a customer by email.",
  schema: z.object({ email: z.string().email() }),
  func: async ({ email }) => {
    const start = performance.now();
    const result = JSON.stringify({ email, status: "active", tier: "gold" });
    const ms = Math.round(performance.now() - start);
    console.log("[tool] lookup_customer", {
      email,
      ms,
      outputBytes: Buffer.byteLength(result),
    });
    return result;
  },
});
```
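With several tools, inlining that logging in every `func` gets repetitive. A generic wrapper keeps the instrumentation in one place; `withToolLogging` below is a hypothetical helper name (not part of LangChain), sketched on the assumption that each tool's `func` is an async function returning a string:

```typescript
import { performance } from "node:perf_hooks";

type ToolFn<A> = (args: A) => Promise<string>;

// Wraps any async tool function so every call logs its arguments, latency,
// and output size. withToolLogging is an illustrative helper, not a
// LangChain API.
export function withToolLogging<A>(name: string, fn: ToolFn<A>): ToolFn<A> {
  return async (args: A) => {
    const start = performance.now();
    const result = await fn(args);
    const ms = Math.round(performance.now() - start);
    console.log(`[tool] ${name}`, {
      args,
      ms,
      outputBytes: Buffer.byteLength(result),
    });
    return result;
  };
}
```

You would then pass the wrapped function as the `func` of each `DynamicStructuredTool`, so every tool emits the same log shape.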
4. Add an iteration-level guard that stops repeated tool use on the same input.

A practical loop bug often looks like the same tool called with the same arguments over and over. You can stop that by tracking normalized tool signatures and aborting when repetition crosses a threshold.

```typescript
type ToolCall = {
  name?: string;
  args?: Record<string, unknown>;
};

export function makeLoopGuard(limitPerSignature = 2) {
  const counts = new Map<string, number>();
  return (call?: ToolCall) => {
    if (!call?.name) return;
    // Normalize the call into a stable signature: tool name + serialized args.
    // Note: args objects with different key order produce different signatures.
    const signature = `${call.name}:${JSON.stringify(call.args ?? {})}`;
    const nextCount = (counts.get(signature) ?? 0) + 1;
    counts.set(signature, nextCount);
    if (nextCount > limitPerSignature) {
      throw new Error(`Loop detected for ${signature}`);
    }
    console.log("[guard]", signature, nextCount);
  };
}
```
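Driven by hand, the guard behaves like this. The sketch below inlines a minimal copy of the guard so it runs standalone: each identical signature is allowed up to the limit, then the next repeat throws.

```typescript
// Self-contained demo: identical signatures pass limitPerSignature times,
// then the next repeat throws. Inlined copy of the guard for illustration.
function makeLoopGuard(limitPerSignature = 2) {
  const counts = new Map<string, number>();
  return (call?: { name?: string; args?: Record<string, unknown> }) => {
    if (!call?.name) return;
    const signature = `${call.name}:${JSON.stringify(call.args ?? {})}`;
    const nextCount = (counts.get(signature) ?? 0) + 1;
    counts.set(signature, nextCount);
    if (nextCount > limitPerSignature) {
      throw new Error(`Loop detected for ${signature}`);
    }
  };
}

const guard = makeLoopGuard(2);
const call = { name: "lookup_customer", args: { email: "alice@example.com" } };
guard(call); // 1st call: allowed
guard(call); // 2nd call: allowed
try {
  guard(call); // 3rd identical call: throws
} catch (err) {
  console.log("caught:", (err as Error).message);
}
```

In a real agent you would invoke the guard from wherever you observe tool calls, for example a callback handler, and let the thrown error abort the run.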
5. Reduce loop pressure with better stop conditions and explicit final-answer behavior.

Most loops happen because the model never gets a clean exit path. Tighten the system prompt and tell the executor to stop after enough evidence has been gathered rather than “keep checking.”

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

export const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are an assistant that must finish once the answer is known. Do not repeat the same tool call unless new information changed.",
  ],
  ["human", "{input}"],
]);
```
Testing It

Run the script against an input that encourages repetition, like asking the agent to “keep verifying” or “double-check until certain.” In verbose mode, look for identical tool names and identical arguments appearing back-to-back; that’s your loop signature.

Then lower `maxIterations` to 1 or 2 and confirm the executor stops early instead of spinning. If you added the guard, verify it throws on repeated signatures and that your app surfaces that error cleanly in logs or tracing.

A good test also checks that normal requests still work. Ask for one lookup plus one summary so you know your loop protection didn’t break valid multi-step behavior.
Next Steps

- Add LangSmith tracing so you can inspect intermediate steps across requests.
- Replace raw repetition checks with per-tool semantic guards based on business rules.
- Learn how to use structured outputs to force final answers instead of open-ended reasoning loops.
Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.