How to Fix 'agent infinite loop during development' in LangGraph (Python)
What the error means
If you’re seeing agent infinite loop during development in LangGraph, it usually means your graph keeps routing back to the agent node without ever reaching a terminal state. In practice, this shows up when a conditional edge always returns the same branch, or when the model keeps deciding it still needs to act.
You’ll typically hit this while testing a simple agent loop with tools, especially if you forgot a stop condition or your state never changes in a way that lets the graph exit.
The Most Common Cause
The #1 cause is an unconditional self-loop: the agent node always routes back to itself, or your router never returns END.
In LangGraph, this often looks like a StateGraph where the assistant keeps calling tools forever because the tool result never satisfies the model’s stop criteria.
Broken vs fixed pattern
| Broken pattern | Fixed pattern |
|---|---|
| Router always returns "agent" | Router returns "tools" only when tool calls exist |
| No explicit end condition | Route to END when no tool call is present |
| State doesn’t record progress | State includes messages and tool results correctly |
```python
# BROKEN: infinite loop
from typing import TypedDict

from langgraph.graph import StateGraph, END

class State(TypedDict):
    messages: list

def agent_node(state: State):
    # model invocation omitted
    return {"messages": state["messages"]}

def route(state: State):
    return "agent"  # always loops forever; "end" is never returned

graph = StateGraph(State)
graph.add_node("agent", agent_node)
graph.set_entry_point("agent")
graph.add_conditional_edges("agent", route, {"agent": "agent", "end": END})
app = graph.compile()
```
```python
# FIXED: explicit stop condition
from typing import TypedDict, Literal

from langgraph.graph import StateGraph, END
from langchain_core.messages import AIMessage

class State(TypedDict):
    messages: list

def agent_node(state: State):
    # model invocation omitted
    return {"messages": state["messages"] + [AIMessage(content="done")]}

def tool_node(state: State):
    # tool execution omitted
    return {"messages": state["messages"]}

def route(state: State) -> Literal["tools", "end"]:
    last_msg = state["messages"][-1]
    if getattr(last_msg, "tool_calls", None):
        return "tools"
    return "end"  # no tool calls left, so the graph can exit

graph = StateGraph(State)
graph.add_node("agent", agent_node)
graph.add_node("tools", tool_node)
graph.set_entry_point("agent")
graph.add_conditional_edges("agent", route, {"tools": "tools", "end": END})
graph.add_edge("tools", "agent")  # after tools run, the agent decides again
app = graph.compile()
```
The important part is that your routing function must be able to return END. If every path points back into the same node, LangGraph will keep executing until it hits its recursion limit and raises an error.
A common runtime symptom is something like:

- `langgraph.errors.GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition`
- `langgraph.errors.InvalidUpdateError`
- repeated tool calls with no final assistant response
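The recursion limit behaves like a step counter wrapped around the routing loop. A plain-Python sketch (a toy model, not LangGraph's actual engine) shows why a router that can never return the end sentinel is guaranteed to trip it:

```python
END = "__end__"

def run_routed_loop(route, state, limit=25):
    """Toy model of a routed graph loop with a recursion limit.

    Illustrates the guard's behavior only; not LangGraph's implementation.
    """
    for _ in range(limit):
        if route(state) == END:
            return state
        state["steps"] = state.get("steps", 0) + 1
    raise RuntimeError(
        f"Recursion limit of {limit} reached without hitting a stop condition"
    )

# A router that ends once the state shows real progress terminates normally:
finished = run_routed_loop(lambda s: END if s.get("steps", 0) >= 3 else "agent", {})
print(finished)  # {'steps': 3}

# A router that always returns "agent" hits the guard instead:
try:
    run_routed_loop(lambda s: "agent", {})
except RuntimeError as err:
    print(err)
```

The takeaway is the same as in the real framework: the limit is a safety net that turns a silent hang into a loud failure, not a routing mechanism.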
Other Possible Causes
1. Your tool node never updates state
If the tool executes but doesn’t append its result to messages, the model sees no new information and may call the same tool again.
```python
# broken
def tool_node(state):
    result = search_tool.invoke(state["messages"][-1].content)
    return {}  # nothing changes, so the model repeats itself

# fixed
from langchain_core.messages import ToolMessage

def tool_node(state):
    last = state["messages"][-1]
    result = search_tool.invoke(last.content)
    return {
        "messages": state["messages"] + [
            # tool_call_id must match the id of the tool call being answered
            ToolMessage(content=str(result), tool_call_id=last.tool_calls[0]["id"])
        ]
    }
```
2. The LLM keeps emitting tool calls because you didn’t bind tools correctly
If you use OpenAI-style function calling but forget bind_tools, the model returns plain text instead of structured tool calls, and your router may misread the output.
```python
# broken
llm = model  # no tools bound

# fixed
llm = model.bind_tools([search_tool, calc_tool])
```
When tools are not bound, you can end up with plain text responses where your conditional edge expects tool_calls.
3. Your conditional edge checks the wrong field
Some developers check state["tool_calls"] when the actual data lives on the last AI message.
```python
# broken
def route(state):
    if state.get("tool_calls"):  # this key usually doesn't exist in state
        return "tools"
    return "end"

# fixed
def route(state):
    last = state["messages"][-1]
    if getattr(last, "tool_calls", []):  # the data lives on the last AI message
        return "tools"
    return "end"
```
4. Your recursion limit is masking a logic bug
LangGraph raises recursion-related errors when it cannot find a terminal path. Increasing the limit only delays failure.
```python
# config snippet
from langchain_core.messages import HumanMessage

app.invoke(
    {"messages": [HumanMessage(content="hello")]},
    config={"recursion_limit": 50},
)
```
This is useful for debugging short loops, not for fixing them. If raising the limit just makes the run longer before failing, your graph logic is still wrong.
How to Debug It
1. Print every node transition. Add logs inside each node and router; you want to see exactly which node repeats.
2. Inspect the last AI message. Check whether it contains `tool_calls`. If it does on every turn, your prompt may be forcing tool use indefinitely.
3. Verify state mutation. Confirm each node returns updated `messages`. A node that returns `{}` or overwrites state incorrectly can trap the graph.
4. Run with a low recursion limit first. Use `config={"recursion_limit": 5}`. If it fails immediately in one branch, you’ve found the loop path faster.
Example debug hook:
```python
def route(state):
    last = state["messages"][-1]
    print("LAST MESSAGE:", last)
    print("TOOL CALLS:", getattr(last, "tool_calls", None))
    if getattr(last, "tool_calls", None):
        return "tools"
    return "end"
```
Prevention
- Always design one explicit exit path to `END`.
- Make routing depend on real state changes, not just “keep thinking” prompts.
- Test graphs with one user message, one expected tool call, and one final answer.
- Log transitions during development before adding more nodes.
- Treat `recursion_limit` as a guardrail, not a fix.
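That minimal one-message, one-tool-call, one-answer check can be run against the router alone, without compiling a graph. A sketch using `SimpleNamespace` stand-ins for AI messages (so it runs without LangGraph or LangChain installed; the tool call and its `"call_1"` id are made up for illustration):

```python
from types import SimpleNamespace

def route(state):
    # Same routing logic as the fixed example earlier in the article.
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "tools"
    return "end"

# Stand-ins for AI messages; real code would use AIMessage objects.
tool_request = SimpleNamespace(
    content="", tool_calls=[{"name": "search", "args": {}, "id": "call_1"}]
)
final_answer = SimpleNamespace(content="done", tool_calls=[])

assert route({"messages": [tool_request]}) == "tools"  # tool call -> keep going
assert route({"messages": [final_answer]}) == "end"    # no tool calls -> exit
```

Catching a router that can never return "end" in a unit test is far cheaper than discovering it as a recursion-limit error in a deployed workflow.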
If you build LangGraph agents for production systems like banking workflows or claims triage, this matters even more. Infinite loops don’t just waste tokens; they can lock up request workers and create noisy failure modes that are hard to trace after deployment.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.