How to Fix 'agent infinite loop during development' in LangChain (Python)
When LangChain says you have an “agent infinite loop during development,” it usually means the agent keeps calling tools or re-planning without ever reaching a final answer. In practice, you’ll see this when an agent executor hits its iteration limit, or when a tool returns something the agent treats as another instruction instead of data.
This is almost always a control-flow problem, not an LLM problem. The agent can’t decide it’s done because your prompt, tool output, or stop condition is pushing it back into the same loop.
The Most Common Cause
The #1 cause is a tool that returns text the agent interprets as another action request, combined with no hard stop on iterations. In LangChain Python, this often shows up with AgentExecutor repeatedly invoking the same tool until you get errors like:
- `Agent stopped due to iteration limit or time limit.`
- `Could not parse LLM output`
- `OutputParserException`
Here’s the broken pattern and the fixed pattern side by side.
| Broken pattern | Fixed pattern |
|---|---|
| Tool returns agent-like instructions | Tool returns plain data |
| No `max_iterations` guard | Explicit iteration cap |
| Prompt encourages “keep going” behavior | Prompt asks for final answer once enough data exists |
```python
# BROKEN
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.tools import tool

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

@tool
def lookup_policy(query: str) -> str:
    """Look up what a policy covers."""  # @tool requires a docstring
    # Bad: returning text that looks like an instruction to the agent
    return f"Search again with: {query} and then summarize."

tools = [lookup_policy]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

result = agent.invoke({"input": "What does policy X cover?"})
print(result)
```
```python
# FIXED
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.tools import tool

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

@tool
def lookup_policy(query: str) -> str:
    """Look up what a policy covers."""  # @tool requires a docstring
    # Good: return only factual data
    return "Policy X covers fire damage, theft, and water damage. Excludes flood."

tools = [lookup_policy]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=3,
    early_stopping_method="generate",
)

result = agent.invoke({"input": "What does policy X cover?"})
print(result)
```
The key fix is simple:

- Tools should return facts, not instructions.
- The agent should have a hard ceiling with `max_iterations`.
- If you need a final answer after hitting the cap, use `early_stopping_method="generate"`.
Other Possible Causes
1) Your tool schema is ambiguous
If your tool descriptions overlap, the model can keep choosing between similar tools and never settle.
```python
# Risky: overlapping tools with vague descriptions
@tool("search")
def search_docs(q: str) -> str:
    """Search documents."""
    ...

@tool("lookup")
def lookup_docs(q: str) -> str:
    """Find relevant docs."""
    ...
```
Fix it by making each tool narrowly scoped and distinct.
```python
@tool("policy_search")
def policy_search(q: str) -> str:
    """Search internal policy PDFs only."""
    ...

@tool("claims_lookup")
def claims_lookup(q: str) -> str:
    """Look up claim status by claim ID only."""
    ...
```
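If you are not sure whether two descriptions are too close, a quick word-overlap check can flag them during review. This is a plain-Python sketch (the `description_overlap` helper is hypothetical, not a LangChain API):

```python
import string

def _words(text: str) -> set:
    # Lowercase and strip punctuation so "docs." and "docs" compare equal.
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def description_overlap(desc_a: str, desc_b: str) -> float:
    """Jaccard overlap between the word sets of two tool descriptions."""
    a, b = _words(desc_a), _words(desc_b)
    return len(a & b) / len(a | b) if a and b else 0.0

# Overlapping descriptions score high; narrowly scoped ones score low.
print(description_overlap("Search documents.", "Search relevant documents."))
print(description_overlap(
    "Search internal policy PDFs only.",
    "Look up claim status by claim ID only.",
))
```

A high score is only a heuristic, but it is a cheap lint to run over your tool registry before the model ever sees it.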
2) The model keeps failing to format valid actions
This usually shows up as repeated parser failures in ReAct-style agents.
Common error:

- `OutputParserException: Could not parse LLM output`
If the model emits malformed action blocks, it may re-enter the same step repeatedly.
```python
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)
```
That flag can help during development, but if parsing keeps failing, fix the prompt or switch to a structured agent setup.
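To see why that flag helps, here is a rough, LangChain-free sketch of recovering from a parse failure: instead of crashing or re-entering the same step, the failure becomes an observation the model can correct on the next turn. All names here (`ParseFailure`, `parse_action`, `parse_with_recovery`) are illustrative simplifications, not LangChain internals:

```python
class ParseFailure(Exception):
    """Stand-in for LangChain's OutputParserException (simplified)."""

def parse_action(llm_text: str) -> dict:
    # Minimal ReAct-style parse: expect "Action: <tool>" and "Action Input: <arg>".
    fields = dict(
        line.split(": ", 1) for line in llm_text.splitlines() if ": " in line
    )
    if "Action" not in fields or "Action Input" not in fields:
        raise ParseFailure(f"Could not parse LLM output: {llm_text!r}")
    return {"tool": fields["Action"], "input": fields["Action Input"]}

def parse_with_recovery(llm_text: str) -> dict:
    # Roughly what handle_parsing_errors=True does: turn the failure into an
    # observation the model sees next turn, instead of looping on the same step.
    try:
        return parse_action(llm_text)
    except ParseFailure as exc:
        return {"tool": None, "observation": f"Invalid format: {exc}"}
```

The important design point is that the error message flows back into the conversation as data, so a single malformed step does not become a dead end or an endless retry.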
3) A tool calls back into the same agent
This is a real loop generator. If a tool function invokes the same chain or agent internally, you can create recursion that never exits.
```python
# BAD: recursive agent call inside a tool
@tool
def summarize_with_agent(text: str) -> str:
    """Summarize text by calling the agent again."""  # @tool requires a docstring
    return agent.invoke({"input": text})["output"]
```
Fix it by keeping tools atomic. A tool should do one thing and return once.
```python
@tool
def summarize_text(text: str) -> str:
    """Return a short excerpt of the text."""  # @tool requires a docstring
    return text[:500]
```
4) Your chat history keeps reintroducing stale instructions
If you reuse memory incorrectly, old “continue searching” messages can dominate every turn.
Bad config:
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
```
If that memory contains prior tool-call noise, clear it between runs during development or scope it per session.
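One lightweight option during development is to filter instruction-like turns out of the history before reusing it. This is a hypothetical sketch in plain Python, not a LangChain API:

```python
# Phrases that mark a turn as leftover guidance rather than data.
STALE_MARKERS = ("continue searching", "search again", "call this again")

def scrub_history(messages: list) -> list:
    """Drop turns whose content reads like a leftover instruction."""
    return [
        m for m in messages
        if not any(marker in m["content"].lower() for marker in STALE_MARKERS)
    ]

history = [
    {"role": "ai", "content": "Continue searching for more results."},
    {"role": "ai", "content": "Policy X covers fire damage and theft."},
]
print(scrub_history(history))  # only the factual turn survives
```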
How to Debug It
- Turn on verbose tracing
  - Set `verbose=True` on `AgentExecutor`.
  - Watch whether it keeps selecting the same tool.
  - Look for repeated `Action:` and `Observation:` lines cycling forever.
- Inspect raw tool outputs
  - Print exactly what each tool returns.
  - If you see phrases like “call this again” or “continue searching,” that’s your loop source.
  - Tools should return structured facts, not guidance to the model.
- Lower iteration limits
  - Set `max_iterations=2` or `3`.
  - If it stops with `Agent stopped due to iteration limit`, you’ve confirmed runaway planning.
  - This also prevents runaway dev sessions while you debug.
- Test one component at a time
  - Run the LLM without tools.
  - Run each tool standalone.
  - Then wire them together again.
  - If parsing fails only when combined, your prompt/tool contract is broken.
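The trace check can be partly automated. Here is a minimal, LangChain-free sketch that scans a captured verbose trace for the same `Action:` repeating back to back (`detect_tool_loop` is a hypothetical helper, not part of LangChain):

```python
from typing import Optional

def detect_tool_loop(trace: list, threshold: int = 3) -> Optional[str]:
    """Return the tool name if the same Action repeats `threshold` times in a row."""
    streak, last = 0, None
    for line in trace:
        if not line.startswith("Action:"):
            continue  # skip Thought/Observation lines
        tool = line[len("Action:"):].strip()
        streak = streak + 1 if tool == last else 1
        last = tool
        if streak >= threshold:
            return tool
    return None
```

Run it over the lines you captured from a verbose session; a non-`None` result names the tool your agent is stuck on.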
Prevention
- Use strict tool contracts:
  - Return plain strings or structured JSON.
  - Never return meta-instructions like “search again” from a tool.
- Always set guardrails on agents:
  - `max_iterations`
  - `early_stopping_method`
  - timeout limits where appropriate
- Keep prompts explicit:
  - Tell the model when to stop.
  - Tell it to answer directly after enough evidence is gathered.
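To make the guardrail idea concrete, here is a plain-Python sketch of a driver loop with both an iteration cap and a wall-clock timeout. The `run_with_guardrails` helper and its callbacks are hypothetical, not LangChain APIs:

```python
import time

def run_with_guardrails(step_fn, is_done, max_iterations=3, max_seconds=30.0):
    """Drive an agent-style step function under a hard iteration and time cap."""
    start = time.monotonic()
    state = None
    for i in range(max_iterations):
        if time.monotonic() - start > max_seconds:
            return {"status": "timeout", "iterations": i, "state": state}
        state = step_fn(state)  # one plan/act/observe cycle
        if is_done(state):
            return {"status": "done", "iterations": i + 1, "state": state}
    return {"status": "iteration_limit", "iterations": max_iterations, "state": state}
```

Whatever framework you use, the shape is the same: the loop ends either because the task is done or because a hard limit fires, never because the model happened to feel finished.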
A good production rule: if an agent can call tools, assume it will over-call them unless you constrain both the prompt and execution path. The loop is rarely mysterious once you inspect what each step is actually returning.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit