How to Fix 'agent infinite loop' in LangGraph (Python)
What the error means
agent infinite loop in LangGraph usually means your graph kept routing back into the agent node without ever reaching a terminal condition. In practice, this shows up when the model keeps requesting tools, the graph keeps re-entering agent, and no edge ever sends execution to END.
You’ll typically hit this when building ReAct-style agents, tool-calling loops, or custom conditional edges that never stop on a final answer.
The Most Common Cause
The #1 cause is a bad conditional routing function that always sends execution back to the agent, even after the model has already produced a final response.
In LangGraph terms, you usually have a StateGraph, an agent node, a tools node, and a conditional edge driven by something like should_continue. If that function never returns END, you get repeated execution until LangGraph raises a recursion-style failure such as:
- `GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition`
- or an apparent "agent infinite loop" hang, depending on your wrapper/logging
Broken vs fixed pattern
| Broken pattern | Fixed pattern |
|---|---|
| Always routes back to agent | Routes to tools only when tool calls exist |
| No stop condition for final answers | Returns END when the assistant message has no tool calls |
| Common in custom ReAct graphs | Standard LangGraph control flow |
```python
# BROKEN
from langgraph.graph import StateGraph, END
from typing import TypedDict, List
from langchain_core.messages import BaseMessage

class State(TypedDict):
    messages: List[BaseMessage]

def should_continue(state: State):
    # Wrong: always loops back
    return "agent"

builder = StateGraph(State)
builder.add_node("agent", agent_node)
builder.add_node("tools", tool_node)
builder.set_entry_point("agent")
builder.add_conditional_edges("agent", should_continue, {
    "agent": "agent",
    "tools": "tools",
    END: END,
})
```
```python
# FIXED
from langgraph.graph import StateGraph, END
from typing import TypedDict, List
from langchain_core.messages import AIMessage, BaseMessage

class State(TypedDict):
    messages: List[BaseMessage]

def should_continue(state: State):
    last_message = state["messages"][-1]
    # If the model asked for tools, go run them.
    if isinstance(last_message, AIMessage) and getattr(last_message, "tool_calls", None):
        return "tools"
    # Otherwise stop.
    return END

builder = StateGraph(State)
builder.add_node("agent", agent_node)
builder.add_node("tools", tool_node)
builder.set_entry_point("agent")
builder.add_conditional_edges("agent", should_continue, {
    "tools": "tools",
    END: END,
})
builder.add_edge("tools", "agent")  # after tool execution, return to the agent
```
The important part is that your route function must inspect the last assistant message and terminate when there are no tool calls left.
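Because the router is a plain function, you can unit-test it in isolation before wiring it into a graph. The sketch below uses a stand-in class instead of the real LangChain `AIMessage` (`FakeAIMessage` is a hypothetical test double, not a LangChain API), but the routing logic mirrors the fixed `should_continue` above:

```python
# Minimal stand-in for AIMessage so the router can be tested without LangChain installed.
class FakeAIMessage:
    def __init__(self, content, tool_calls=None):
        self.content = content
        self.tool_calls = tool_calls or []

END = "__end__"  # mirrors LangGraph's END sentinel string for this test

def should_continue(state):
    last = state["messages"][-1]
    # Route to tools only when the assistant actually requested one.
    if isinstance(last, FakeAIMessage) and last.tool_calls:
        return "tools"
    return END

# A message with pending tool calls routes to tools...
pending = {"messages": [FakeAIMessage("", tool_calls=[{"name": "search", "args": {}, "id": "1"}])]}
assert should_continue(pending) == "tools"

# ...and a plain final answer terminates the graph.
final = {"messages": [FakeAIMessage("Paris is the capital of France.")]}
assert should_continue(final) == END
```

If this test passes but the real graph still loops, the problem is elsewhere: usually the tool node or the edge mapping, covered below.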
Other Possible Causes
1. Tool output never updates state correctly
If your tool node runs but doesn’t append the tool result back into messages, the agent sees the same context again and asks for the same tool forever.
```python
# Broken: tool result is not written back into state
def tool_node(state):
    result = search_tool.invoke(state["messages"][-1].tool_calls[0]["args"])
    return {}  # nothing added to messages
```

```python
# Fixed: append a ToolMessage to state
from langchain_core.messages import ToolMessage

def tool_node(state):
    call = state["messages"][-1].tool_calls[0]
    result = search_tool.invoke(call["args"])
    return {
        "messages": [ToolMessage(content=str(result), tool_call_id=call["id"])]
    }
```
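Note that the fixed node above only executes the first tool call. If the model emits parallel tool calls, you need one `ToolMessage` per call, each tagged with its `tool_call_id`, or the model will see unanswered requests and re-ask. A dependency-free sketch of that loop (the `ToolMessage` class and `TOOLS` registry here are simplified stand-ins, not LangChain APIs):

```python
from types import SimpleNamespace

# Stand-in for langchain_core's ToolMessage so the sketch runs without dependencies.
class ToolMessage:
    def __init__(self, content, tool_call_id):
        self.content = content
        self.tool_call_id = tool_call_id

# Hypothetical tool registry: map tool names to callables.
TOOLS = {"search": lambda args: f"results for {args['query']}"}

def tool_node(state):
    last = state["messages"][-1]
    # One ToolMessage per requested call, tagged with its tool_call_id,
    # so the model can match each result back to its request.
    results = []
    for call in last.tool_calls:
        output = TOOLS[call["name"]](call["args"])
        results.append(ToolMessage(content=str(output), tool_call_id=call["id"]))
    return {"messages": results}

# Quick check with a faked assistant message carrying one tool call:
fake_ai = SimpleNamespace(tool_calls=[{"name": "search", "args": {"query": "langgraph"}, "id": "a1"}])
out = tool_node({"messages": [fake_ai]})
assert out["messages"][0].tool_call_id == "a1"
```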
2. The model keeps calling tools because your prompt is too vague
If you don’t explicitly tell the model when to stop, some models will keep trying to “help” by calling tools again.
```python
system_prompt = """
You are an assistant. Use tools when needed.
"""
```
Use a stricter instruction:
```python
system_prompt = """
Use tools only when necessary.
After receiving a tool result, answer directly unless another tool call is required.
Do not call tools repeatedly for the same question.
"""
```
3. Your conditional edge checks the wrong field
A common bug is checking for "tool_calls" on a dict or raw message object incorrectly. In LangChain/LangGraph, assistant messages are often AIMessage objects.
```python
# Broken: treats the message like a dict
def should_continue(state):
    if state["messages"][-1]["tool_calls"]:
        return "tools"
    return END
```

```python
# Fixed: check the attribute on the AIMessage object
from langchain_core.messages import AIMessage

def should_continue(state):
    last = state["messages"][-1]
    if isinstance(last, AIMessage) and last.tool_calls:
        return "tools"
    return END
```
4. You set an overly high recursion limit and masked the real bug
LangGraph will eventually stop with recursion protection. If you increase it too much during debugging, you can hide an obvious loop.
```python
app.invoke(
    {"messages": [...]},
    config={"recursion_limit": 100}
)
```

Keep it low while debugging so failures surface quickly:

```python
app.invoke(
    {"messages": [...]},
    config={"recursion_limit": 10}
)
```
How to Debug It
- Print every state transition. Add logging inside your routing function and nodes so you can see exactly which node runs and what the last message contains:

```python
def should_continue(state):
    last = state["messages"][-1]
    print("LAST MESSAGE:", type(last), getattr(last, "content", None))
    print("TOOL CALLS:", getattr(last, "tool_calls", None))
    ...
```

- Inspect whether tool results are appended. If the graph goes `agent -> tools -> agent -> tools`, check whether the tool node returns a new `ToolMessage`. No new message usually means the same call repeats.
- Check your termination branch. Confirm that your conditional edge can actually return `END`. If every path maps to `"agent"` or `"tools"`, you built an endless cycle.
- Lower `recursion_limit` and reproduce fast. Run with `config={"recursion_limit": 5}` so you hit the failure immediately instead of waiting through dozens of useless turns.
Prevention
- Always make your router inspect the last assistant message and explicitly return `END` when there are no pending tool calls.
- Return proper `ToolMessage` objects from tools; don't just log results or mutate local variables.
- Keep prompts strict about stopping after tool execution; models are good at looping if you leave behavior underspecified.
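As a last line of defense, you can also track a turn counter in state and force termination after a fixed number of agent turns. This is a dependency-free sketch; the `loop_count` key and `MAX_TURNS` cap are my own convention, not a LangGraph API, and in a real graph you would declare `loop_count` in your state schema:

```python
from types import SimpleNamespace

END = "__end__"  # mirrors LangGraph's END sentinel
MAX_TURNS = 8    # hard cap on agent turns; tune per use case

def should_continue(state):
    # Force termination even if the model keeps asking for tools.
    if state.get("loop_count", 0) >= MAX_TURNS:
        return END
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "tools"
    return END

def agent_node(state):
    # The agent node increments the counter alongside its normal message update.
    return {"loop_count": state.get("loop_count", 0) + 1}

# Under the cap, pending tool calls still route to tools;
# at the cap, the router terminates regardless.
looping = SimpleNamespace(tool_calls=[{"id": "1"}])
assert should_continue({"messages": [looping], "loop_count": 2}) == "tools"
assert should_continue({"messages": [looping], "loop_count": 8}) == END
```

A hard cap trades completeness for safety: a legitimate long task may get cut off, but a runaway loop can never burn through your token budget.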
If you’re using LangGraph’s prebuilt agents like create_react_agent, still verify your tools and message flow. The framework handles most of the plumbing, but bad state handling or malformed tool outputs can still produce what looks like an infinite loop.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.