LangChain Tutorial (Python): debugging agent loops for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to inspect, instrument, and stop runaway LangChain agent loops in Python. You need this when an agent keeps calling tools forever, repeats the same action, or burns tokens without making progress.

What You'll Need

  • Python 3.10+
  • langchain
  • langchain-openai
  • langchainhub (used by hub.pull to fetch the ReAct prompt)
  • An OpenAI API key exported as OPENAI_API_KEY
  • A shell with access to run Python scripts
  • Basic familiarity with LangChain agents and tools

Install the packages:

pip install langchain langchain-openai langchainhub openai

Step-by-Step

  1. Start with a minimal agent that can loop.

Use a real tool and a real chat model so you can reproduce the problem under production-like conditions. The example below creates a simple addition tool and an agent executor with a low iteration cap so you can see loop behavior quickly.

# Assumes OPENAI_API_KEY is set in the environment
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.tools import tool
from langchain import hub

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
prompt = hub.pull("hwchase17/react")
tools = [add]
agent = create_react_agent(llm, tools, prompt)

executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=5,
    return_intermediate_steps=True,
)

result = executor.invoke({"input": "What is 2 + 2? Use the tool."})
print(result["output"])

  2. Inspect intermediate steps instead of guessing.

When an agent loops, the fastest path is to look at each action and observation pair. This tells you whether the model is repeating the same tool call, missing state, or failing to produce a final answer.

for i, (action, observation) in enumerate(result["intermediate_steps"], start=1):
    print(f"\nSTEP {i}")
    print("ACTION:")
    print(action)
    print("OBSERVATION:")
    print(observation)
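To spot repetition at a glance, it can help to collapse the step list into (tool, tool_input) counts. A minimal stdlib sketch; the helper name is mine, and the demo uses SimpleNamespace stand-ins in place of real AgentAction objects, which expose the same .tool and .tool_input attributes:

```python
from collections import Counter
from types import SimpleNamespace

def step_counts(intermediate_steps):
    """Count how often each (tool, tool_input) pair appears in one run."""
    return Counter(
        (action.tool, str(action.tool_input))
        for action, _observation in intermediate_steps
    )

# Stand-in steps shaped like result["intermediate_steps"]
fake_steps = [
    (SimpleNamespace(tool="add", tool_input={"a": 2, "b": 2}), 4),
    (SimpleNamespace(tool="add", tool_input={"a": 2, "b": 2}), 4),
    (SimpleNamespace(tool="add", tool_input={"a": 1, "b": 3}), 4),
]
print(step_counts(fake_steps).most_common(1))
```

Any pair with a count above 1 is a candidate loop; feed result["intermediate_steps"] to the same helper in a real run.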

  3. Add explicit loop guards in your executor.

max_iterations is your hard stop, but production debugging needs more than that. Set early_stopping_method="force" explicitly (it is the default in recent versions) so the executor returns a sentinel final answer when it hits the limit, rather than making one last model call as "generate" would.

executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=3,
    early_stopping_method="force",
    return_intermediate_steps=True,
)

result = executor.invoke({"input": "Keep adding 1 + 1 until you are done."})
print("\nFINAL OUTPUT:")
print(result["output"])
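In recent LangChain versions, the forced stop surfaces as a sentinel string in the output ("Agent stopped due to iteration limit or time limit."). The exact wording is version-dependent, so treat this check as an assumption to verify against your installed version; the helper name is mine:

```python
def hit_iteration_limit(result: dict) -> bool:
    """Heuristic: detect the executor's forced-stop sentinel in the output."""
    return result.get("output", "").startswith("Agent stopped due to")

# Faked results shaped like executor.invoke(...)'s return value
forced = {"output": "Agent stopped due to iteration limit or time limit."}
clean = {"output": "4"}
print(hit_iteration_limit(forced), hit_iteration_limit(clean))  # True False
```

In production this is a useful signal to log or alert on, since a forced stop means the agent never reached a real answer.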

  4. Instrument the chain with callbacks.

Callbacks give you structured visibility into what the model is doing at runtime. For loop debugging, log tool starts and ends so you can detect repeated calls with identical inputs.

from langchain_core.callbacks import BaseCallbackHandler

class DebugHandler(BaseCallbackHandler):
    def on_tool_start(self, serialized, input_str, **kwargs):
        print(f"[tool_start] {serialized.get('name')} -> {input_str}")

    def on_tool_end(self, output, **kwargs):
        print(f"[tool_end] {output}")

executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=False,
    max_iterations=5,
    return_intermediate_steps=True,
    callbacks=[DebugHandler()],
)

result = executor.invoke({"input": "Add 10 and 5."})
print(result["output"])

  5. Detect repeated actions before they become incidents.

In real systems, the same tool call often repeats because the prompt does not tell the model how to stop. A simple dedupe check over intermediate steps lets you fail fast during development and add alerts later in production.

seen = set()

for action, observation in result["intermediate_steps"]:
    key = (action.tool, str(action.tool_input))
    if key in seen:
        raise RuntimeError(f"Repeated tool call detected: {key}")
    seen.add(key)

print("No repeated tool calls detected.")

Testing It

Run the script with a few prompts that should terminate cleanly and one prompt that encourages repetition. You want to confirm that intermediate_steps grows only until max_iterations, then stops deterministically.

If you see repeated identical tool inputs across steps, your issue is usually prompt design or missing stop criteria in the agent instructions. If the model never reaches a final answer even for simple tasks, reduce tool ambiguity and make sure your prompt clearly tells it when to answer directly.
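One low-effort fix for missing stop criteria is to put the termination rule in the request itself. A sketch, with wording of my own invention:

```python
STOP_CRITERION = (
    " Call each tool at most once. As soon as you have the result, "
    "stop calling tools and give the final answer directly."
)

def with_stop_criterion(question: str) -> str:
    """Append an explicit termination instruction to the user input."""
    return question + STOP_CRITERION

print(with_stop_criterion("What is 2 + 2?"))
```

For persistent agents, the same instruction belongs in the system prompt rather than each input, but appending it per-request is a quick way to confirm that stop criteria are the problem.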

For deeper verification, compare runs with verbose=True, callback logs, and return_intermediate_steps=True. Those three views together will show whether the loop is coming from model reasoning, bad tool outputs, or an executor configuration problem.
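The manual checks above can be turned into an automated test. One sketch (the helper name is mine) that fails if a run exceeds its configured step budget or ends without an answer:

```python
def assert_run_terminated(result: dict, max_iterations: int) -> None:
    """Fail if the run took more steps than allowed or produced no output."""
    steps = result.get("intermediate_steps", [])
    if len(steps) > max_iterations:
        raise AssertionError(
            f"Run used {len(steps)} steps, expected at most {max_iterations}"
        )
    if not result.get("output"):
        raise AssertionError("Run ended without a final answer")

# Faked result shaped like executor.invoke(...)'s return value
assert_run_terminated(
    {"intermediate_steps": [("action", "4")], "output": "4"},
    max_iterations=5,
)
print("Run terminated within budget.")
```

Call it right after executor.invoke(...) in a test suite, using the same max_iterations you configured on the executor.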

Next Steps

  • Add custom stopping logic based on repeated (tool_name, input) pairs.
  • Move from ReAct-style agents to structured tool-calling where your model supports it.
  • Add tracing with LangSmith so you can inspect loops across environments and deployments.


By Cyprian Aarons, AI Consultant at Topiax.

