LangGraph Tutorial (Python): debugging agent loops for intermediate developers
This tutorial shows you how to instrument a LangGraph agent so you can see why it keeps looping, where state changes, and how to stop runaway execution before it burns tokens. You need this when an agent keeps re-calling the same tool, never reaches a final answer, or behaves differently depending on hidden state.
What You'll Need

- Python 3.10+
- `langgraph`
- `langchain-core`
- `langchain-openai`
- An OpenAI API key set as `OPENAI_API_KEY`
- Basic familiarity with LangGraph nodes, edges, and state
- A terminal and a virtual environment

Install the packages:

```bash
pip install langgraph langchain-core langchain-openai
```
Step-by-Step
- Start with a minimal graph that can loop. The point here is not to build a good agent yet; it is to create a reproducible loop so you can debug it.

```python
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage, AIMessage

class State(TypedDict):
    messages: Annotated[list, add_messages]  # add_messages appends instead of overwriting
    loop_count: int

def assistant_node(state: State):
    count = state.get("loop_count", 0) + 1
    return {
        "messages": [AIMessage(content=f"Loop #{count}: I am still thinking.")],
        "loop_count": count,
    }

def route(state: State):
    # Loop back to the assistant until loop_count reaches 3, then stop.
    return "assistant" if state.get("loop_count", 0) < 3 else END

builder = StateGraph(State)
builder.add_node("assistant", assistant_node)
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", route)
graph = builder.compile()
```
- Run the graph with a seed message and inspect the final state. If your loop is broken, this gives you a baseline before you add tools or model calls.

```python
initial_state = {
    "messages": [HumanMessage(content="Why is my agent looping?")],
    "loop_count": 0,
}

result = graph.invoke(initial_state)
print("Final loop count:", result["loop_count"])
for msg in result["messages"]:
    print(type(msg).__name__, "=>", msg.content)
```
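Before trusting console output from the real graph, it can help to sanity-check the loop logic itself in plain Python, with no LangGraph installed. The sketch below mirrors the node and router above; `END` here is just a string standing in for LangGraph's sentinel, and the message list holds plain strings instead of message objects:

```python
# Plain-Python simulation of the assistant/route loop above, so the
# termination logic can be verified without LangGraph.
END = "__end__"  # stand-in for langgraph's END sentinel

def assistant_node(state):
    count = state.get("loop_count", 0) + 1
    return {"loop_count": count, "messages": state["messages"] + [f"Loop #{count}"]}

def route(state):
    return "assistant" if state.get("loop_count", 0) < 3 else END

state = {"messages": ["Why is my agent looping?"], "loop_count": 0}
next_step = "assistant"
while next_step != END:
    state.update(assistant_node(state))
    next_step = route(state)

print("Final loop count:", state["loop_count"])  # Final loop count: 3
```

If this simulation does not terminate where you expect, the bug is in your routing condition, not in LangGraph.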
- Add explicit debug logging inside every node and router. In production graphs, most loop bugs come from silent state mutation or from routing decisions that are syntactically valid but logically wrong.

```python
def assistant_node_debug(state: State):
    count = state.get("loop_count", 0) + 1
    print(f"[assistant] incoming loop_count={state.get('loop_count', 0)}")
    print(f"[assistant] emitting loop_count={count}")
    return {
        "messages": [AIMessage(content=f"Debug loop #{count}")],
        "loop_count": count,
    }

def route_debug(state: State):
    count = state.get("loop_count", 0)
    next_step = "assistant" if count < 3 else END
    print(f"[route] loop_count={count} -> {next_step}")
    return next_step

builder = StateGraph(State)
builder.add_node("assistant", assistant_node_debug)
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", route_debug)
debug_graph = builder.compile()
```
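If you would rather not hand-edit every node, a small decorator can bolt the same tracing onto any dict-in/dict-out node function. This is a standard-library-only sketch; `traced` is a name invented for this example, not a LangGraph API:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")

def traced(node_fn):
    """Wrap a node so every call logs its incoming state and returned update."""
    @functools.wraps(node_fn)
    def wrapper(state):
        logging.debug("[%s] in:  %s", node_fn.__name__, state)
        update = node_fn(state)
        logging.debug("[%s] out: %s", node_fn.__name__, update)
        return update  # the update itself is passed through unchanged
    return wrapper

@traced
def assistant_node(state):
    count = state.get("loop_count", 0) + 1
    return {"loop_count": count}
```

Register the wrapped function exactly as before, e.g. `builder.add_node("assistant", assistant_node)`; since the decorator returns the node's update untouched, graph behavior is unaffected.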
- Use a real LLM node only after the control flow is visible. This version shows how to inspect the model output and stop when the assistant returns a final answer instead of another tool request.

```python
from typing import Literal

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage

# Reads OPENAI_API_KEY from the environment.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

def llm_node(state: AgentState):
    response = llm.invoke(
        [SystemMessage(content="You are a concise assistant.")]
        + state["messages"]
    )
    print("[llm] last response:", response.content)
    return {"messages": [response]}

def should_continue(state: AgentState) -> Literal["llm", "__end__"]:
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        print("[route] tool call detected -> llm")
        return "llm"
    print("[route] no tool call -> end")
    return "__end__"
```
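The routing condition above is easy to unit-test without an API key: fake the message object and assert on the router's decision. `FakeAIMessage` below is a stand-in invented for this test, not a langchain class, and the router is restated without type hints so the snippet stands alone:

```python
from dataclasses import dataclass, field

@dataclass
class FakeAIMessage:
    """Minimal stand-in for AIMessage: only the attributes the router reads."""
    content: str
    tool_calls: list = field(default_factory=list)

def should_continue(state):
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "llm"
    return "__end__"

# A plain answer ends the run; a pending tool call keeps it going.
print(should_continue({"messages": [FakeAIMessage("All done.")]}))  # __end__
print(should_continue({"messages": [FakeAIMessage("", tool_calls=[{"name": "search"}])]}))  # llm
```

Running the router against fakes like this catches inverted conditions before you ever spend a token.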
- Add a hard stop for runaway loops. This is the practical guardrail you want in any agent that can recurse through tools or self-reflection steps.

```python
class GuardedState(TypedDict):
    messages: Annotated[list, add_messages]
    step_count: int

def guarded_llm(state: GuardedState):
    step = state.get("step_count", 0) + 1
    response = llm.invoke([SystemMessage(content="Answer directly.")] + state["messages"])
    print(f"[guarded_llm] step={step}")
    return {"messages": [response], "step_count": step}

def stop_after_three(state: GuardedState):
    step = state.get("step_count", 0)
    return END if step >= 3 else "llm"

guarded_builder = StateGraph(GuardedState)
guarded_builder.add_node("llm", guarded_llm)
guarded_builder.add_edge(START, "llm")
guarded_builder.add_conditional_edges("llm", stop_after_three)
guarded_graph = guarded_builder.compile()
```
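The same guardrail can be expressed as a generic driver that raises instead of spinning forever, which is how you want runaway loops to fail in tests. (LangGraph also ships its own ceiling: pass `config={"recursion_limit": ...}` to `invoke`, and it raises `GraphRecursionError` when the limit is exceeded.) The helper below is a plain-Python illustration, and `run_with_limit` / `StepLimitExceeded` are names made up for this sketch:

```python
class StepLimitExceeded(RuntimeError):
    pass

def run_with_limit(node, route, state, max_steps=25):
    """Drive a node/router pair, raising once max_steps is exceeded."""
    steps = 0
    current = "node"
    while current != "__end__":
        steps += 1
        if steps > max_steps:
            raise StepLimitExceeded(f"aborted after {max_steps} steps")
        state = {**state, **node(state)}  # merge the node's update into state
        current = route(state)
    return state

# A deliberately broken router that never terminates:
bump = lambda state: {"count": state.get("count", 0) + 1}
broken_route = lambda state: "node"

try:
    run_with_limit(bump, broken_route, {}, max_steps=5)
except StepLimitExceeded as exc:
    print(exc)  # aborted after 5 steps
```

Raising loudly is the point: a loop that dies with a stack trace after five steps is far easier to debug than one that quietly burns tokens for minutes.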
Testing It
Run each graph with a simple prompt and confirm the console logs match the path you expect. For the debug version, you should see loop_count increase exactly once per cycle until the router returns END. For the LLM version, verify whether the last message is an AI response or a tool request; that tells you whether your routing condition is correct.
If an agent still loops, check three things first:

- The node returns new state keys instead of overwriting them incorrectly.
- The router reads the updated field from the current state.
- Your termination condition is based on deterministic data like `step_count`, not just model text.
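The first checklist item is the classic reducer pitfall: state fields with a reducer (like `messages` with `add_messages`) are merged, while every other field is overwritten by the node's returned value. Here is a simplified, stdlib-only model of that merge, using `operator.add` as a stand-in for `add_messages`; the real LangGraph machinery is more involved:

```python
import operator

def apply_update(state, update, reducers):
    """Simplified model of how a node's returned update lands in graph state:
    keys with a reducer are combined; all other keys are overwritten."""
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            merged[key] = reducers[key](state.get(key, []), value)
        else:
            merged[key] = value
    return merged

reducers = {"messages": operator.add}  # stand-in for add_messages

state = {"messages": ["question"], "loop_count": 0}
state = apply_update(state, {"messages": ["answer"], "loop_count": 1}, reducers)
print(state)  # {'messages': ['question', 'answer'], 'loop_count': 1}
```

If you see your message history shrinking to one entry per step, a field you expected to accumulate is missing its reducer annotation.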
A good debugging habit is to run with very small limits first. If your graph only behaves correctly at high limits, it is usually masking a control-flow bug.
Next Steps
- Add LangGraph checkpoints so you can replay intermediate states instead of rerunning from scratch.
- Learn how to structure tool nodes separately from reasoning nodes so routing stays predictable.
- Add structured output schemas for agent decisions so your routers do not depend on free-form text.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit, a PDF checklist plus starter code
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.