LangChain Tutorial (TypeScript): debugging agent loops for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to trace, inspect, and stop runaway agent loops in a LangChain TypeScript app. You need this when an agent keeps calling tools forever, repeats the same action, or fails to return a final answer because your prompts, tools, or stop conditions are off.

What You'll Need

  • Node.js 18+
  • A TypeScript project with tsconfig.json
  • langchain
  • @langchain/openai
  • dotenv
  • An OpenAI API key in OPENAI_API_KEY
  • Optional but useful:
    • zod for structured tool inputs
    • ts-node or tsx to run TypeScript directly

Install the packages:

npm install langchain @langchain/openai dotenv
npm install -D typescript tsx @types/node

Step-by-Step

  1. First, create a minimal agent setup with one tool and a hard stop on iterations. The point is not to build a fancy agent; it’s to make loop behavior visible and bounded from the start.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const weatherTool = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get the weather for a city",
  schema: z.object({ city: z.string() }),
  func: async ({ city }) => `Weather in ${city}: sunny, 24C`,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Use tools when needed."],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent = await createOpenAIToolsAgent({
  llm: model,
  tools: [weatherTool],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [weatherTool],
  maxIterations: 3,
  // Surface each (action, observation) pair in the result so loops are visible.
  returnIntermediateSteps: true,
});
  2. Next, run a normal query and inspect the returned steps. If the agent is looping, the intermediate steps tell you which tool calls repeat and whether the model keeps asking for more context instead of finishing. Note that returnIntermediateSteps is a constructor option on AgentExecutor (set above), not an invoke option.
const result = await executor.invoke({
  input: "What's the weather in Nairobi?",
});

console.log("FINAL OUTPUT:", result.output);
console.log("STEP COUNT:", result.intermediateSteps.length);

for (const [index, step] of result.intermediateSteps.entries()) {
  console.log(`\nSTEP ${index + 1}`);
  console.log("ACTION:", step.action.tool);
  console.log("INPUT:", step.action.toolInput);
  console.log("OBSERVATION:", step.observation);
}
  3. Then add an explicit loop detector so you can fail fast when the same tool input repeats. This catches cases where the model is stuck reissuing identical calls because your prompt does not give it enough state to conclude.
type ToolCallKey = string;

function detectRepeatedCalls(steps: Array<{ action: { tool: string; toolInput: unknown } }>) {
  const seen = new Set<ToolCallKey>();

  for (const step of steps) {
    const key = `${step.action.tool}:${JSON.stringify(step.action.toolInput)}`;
    if (seen.has(key)) {
      return key;
    }
    seen.add(key);
  }

  return null;
}

const repeated = detectRepeatedCalls(result.intermediateSteps);

if (repeated) {
  console.error("REPEATED TOOL CALL DETECTED:", repeated);
}
  4. Now stream the run so you can see each agent step as it happens. This is the fastest way to debug loops because you can watch tool selection and tool output arrive chunk by chunk, without waiting for the final timeout.
const stream = await executor.stream({
  input: "What's the weather in Nairobi?",
});

for await (const chunk of stream) {
  console.log(JSON.stringify(chunk, null, 2));
}
  5. Finally, tighten the prompt so the agent knows when to stop. In practice, many loops happen because the model is never told to produce a final answer after one successful tool call.
const saferPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    [
      "You are a helpful assistant.",
      "Call a tool at most once per user request unless new information is required.",
      "After you get enough information, respond with a final answer immediately.",
      "Do not repeat the same tool call with identical arguments.",
    ].join(" "),
  ],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const saferAgent = await createOpenAIToolsAgent({
  llm: model,
  tools: [weatherTool],
  prompt: saferPrompt,
});

const saferExecutor = new AgentExecutor({
  agent: saferAgent,
  tools: [weatherTool],
  maxIterations: 2,
});

Testing It

Run the script with a query that should require exactly one tool call, such as “What’s the weather in Nairobi?” You should see one get_weather call and then a final answer.

If you intentionally weaken the prompt by removing the “call a tool at most once” instruction, you’ll often see repeated intermediate steps or hit maxIterations. That confirms your loop detector and iteration cap are working.

Also test a question that does not require any tools. The agent should answer directly without entering a tool loop at all.
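You can also exercise the loop detector from Step 3 offline, with synthetic intermediate steps shaped like what AgentExecutor returns, so you can verify the detection logic without spending API calls. This is a self-contained sketch; the step objects are hand-built stand-ins, not real agent output.

```typescript
// Offline check for the Step 3 loop detector: synthetic steps that mimic
// the { action: { tool, toolInput } } shape of intermediateSteps.
type Step = { action: { tool: string; toolInput: unknown } };

function detectRepeatedCalls(steps: Step[]): string | null {
  const seen = new Set<string>();
  for (const step of steps) {
    const key = `${step.action.tool}:${JSON.stringify(step.action.toolInput)}`;
    if (seen.has(key)) return key;
    seen.add(key);
  }
  return null;
}

// A looping trace repeats the same tool with identical input; a healthy
// trace never does.
const looping: Step[] = [
  { action: { tool: "get_weather", toolInput: { city: "Nairobi" } } },
  { action: { tool: "get_weather", toolInput: { city: "Nairobi" } } },
];
const healthy: Step[] = [
  { action: { tool: "get_weather", toolInput: { city: "Nairobi" } } },
  { action: { tool: "get_weather", toolInput: { city: "Lagos" } } },
];

console.log(detectRepeatedCalls(looping)); // → get_weather:{"city":"Nairobi"}
console.log(detectRepeatedCalls(healthy)); // → null
```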

Next Steps

  • Add structured tracing with LangSmith so you can inspect every prompt and tool call across runs.
  • Wrap your tools with validation and timeouts so bad tool responses do not trigger retry loops.
  • Move from single-tool debugging to multi-tool routing and compare loop behavior across agents.
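For the timeout bullet above, one approach is a small wrapper around a tool's async body so a hung call rejects quickly instead of stalling the agent loop. `withTimeout` below is an illustrative helper, not a LangChain API; you would pass the wrapped function as a tool's func.

```typescript
// Illustrative helper (not a LangChain API): reject if the wrapped async
// function does not settle within timeoutMs.
function withTimeout<T>(fn: () => Promise<T>, timeoutMs: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Tool timed out after ${timeoutMs}ms`)),
      timeoutMs,
    );
    fn().then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// Example: a tool body that resolves in 50ms, well under a 1000ms cap.
async function demo() {
  const report = await withTimeout(
    () => new Promise<string>((r) => setTimeout(() => r("sunny, 24C"), 50)),
    1000,
  );
  console.log(report); // → sunny, 24C
}
demo();
```

In a DynamicStructuredTool you would wrap the existing body, e.g. `func: ({ city }) => withTimeout(() => fetchWeather(city), 5000)`, where fetchWeather stands in for your own implementation.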


By Cyprian Aarons, AI Consultant at Topiax.
