How to Fix 'callback not firing' in LangGraph (Python)
What “callback not firing” usually means
In LangGraph, this error usually means your callback handler was registered, but the graph never reached the point where that callback can execute. Most of the time it happens when you expect a node-level event, token callback, or tool callback to fire, but the graph is either not streaming, not invoking the right runnable, or swallowing the event inside an async boundary.
The important detail: LangGraph does not “magically” call every callback you attach. It only emits events when the underlying Runnable, model, or tool execution path actually runs through a supported callback hook.
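That rule can be modeled in a few lines of plain Python (a toy sketch with hypothetical names, not LangChain's real internals): a handler only sees events for calls that are routed through an instrumented wrapper, and a direct call bypasses it entirely.

```python
# Toy model of callback dispatch (hypothetical names, not LangChain's API):
# a handler only sees events for calls routed through the instrumented wrapper.

events = []

def instrumented_call(fn, *args):
    events.append("on_start")   # hook fires: execution passed through the wrapper
    result = fn(*args)
    events.append("on_end")
    return result

def model(prompt):
    return prompt.upper()

direct = model("hi")                      # bypasses the wrapper: nothing recorded
wrapped = instrumented_call(model, "hi")  # goes through the wrapper: events recorded

print(events)
```

This is the whole failure mode in miniature: if your graph executes the `direct` path, your handler stays silent no matter how correctly it was registered.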
The Most Common Cause
The #1 cause is using a normal .invoke() call and expecting streaming-style callbacks like on_llm_new_token, on_chain_start, or custom event handlers to fire. In LangGraph, many callbacks only show up when you use .stream() / .astream() with the right config and your nodes are built from callback-aware runnables.
Here’s the broken pattern:
```python
from typing import Annotated, TypedDict

from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


class DebugHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs):
        print("TOKEN:", token)


llm = ChatOpenAI(model="gpt-4o-mini", streaming=True)


def agent_node(state: State):
    # Broken if you expect token callbacks here during invoke()
    return {"messages": [llm.invoke(state["messages"])]}


graph = StateGraph(State)
graph.add_node("agent", agent_node)
graph.set_entry_point("agent")
graph.add_edge("agent", END)
app = graph.compile()

result = app.invoke(
    {"messages": [{"role": "user", "content": "Say hello"}]},
    config={"callbacks": [DebugHandler()]},
)
print(result)
```
And here’s the fixed pattern:
```python
from typing import Annotated, TypedDict

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


class DebugHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs):
        print("TOKEN:", token)


llm = ChatOpenAI(model="gpt-4o-mini", streaming=True)


def agent_node(state: State, config: RunnableConfig):
    # Right: forward config so the nested model call sees your callbacks,
    # and drive the graph with stream()/astream() for incremental events
    response = llm.invoke(state["messages"], config=config)
    return {"messages": [response]}


graph = StateGraph(State)
graph.add_node("agent", agent_node)
graph.set_entry_point("agent")
graph.add_edge("agent", END)
app = graph.compile()

for chunk in app.stream(
    {"messages": [{"role": "user", "content": "Say hello"}]},
    config={"callbacks": [DebugHandler()]},
):
    print(chunk)
```
If you need token-by-token output, don't hide the model call behind a plain synchronous node and expect invoke() to behave like a streaming transport. Use .stream() (or .astream()), make sure the model itself is configured for streaming, and forward the node's config into the model call so your handler actually reaches it.
Other Possible Causes
1) Your callback is attached to the wrong object
If you attach callbacks to the graph but your actual work happens inside a nested runnable or model instance that doesn’t inherit that config properly, nothing fires.
```python
# Broken: callback attached at graph level only
app.invoke(input_data, config={"callbacks": [handler]})

# Better: ensure nested runnables also receive config/callbacks
llm = llm.with_config({"callbacks": [handler]})
```
In practice, prefer passing callbacks through config at execution time and verify each node forwards config into nested calls when needed.
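The forwarding rule can be sketched without LangChain (all names below are hypothetical stand-ins): a nested call only sees your handler if every layer between the entry point and that call passes the config along.

```python
# Hypothetical sketch of config propagation: the inner call only emits an event
# if the outer layer forwards the config it received.

fired = []

class Handler:
    def on_start(self, name):
        fired.append(name)

def inner(x, config=None):
    for h in (config or {}).get("callbacks", []):
        h.on_start("inner")        # fires only if a handler arrived here
    return x * 2

def node_drops_config(x, config=None):
    return inner(x)                # bug: config not forwarded, no events

def node_forwards_config(x, config=None):
    return inner(x, config=config) # fix: nested call sees the handler

cfg = {"callbacks": [Handler()]}
node_drops_config(1, config=cfg)
node_forwards_config(1, config=cfg)
print(fired)
```

Only the forwarding node produces an event, even though both nodes received an identical config.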
2) You used sync code where LangGraph expects async behavior
A common failure mode is mixing async def nodes with sync invocation or forgetting to await async graph execution.
```python
# Broken: async graph executed synchronously
result = app.invoke(input_data)

# Fixed: await the async entry point (inside an async function / event loop)
result = await app.ainvoke(input_data)
```
If your node uses asyncio, HTTP clients, or async tools, use ainvoke() / astream() consistently.
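The failure is easy to reproduce with plain asyncio (the `ainvoke` name below is a hypothetical stand-in): calling an async method without awaiting it returns a coroutine object and never runs the body, so nothing inside it, callbacks included, can fire.

```python
import asyncio
import inspect

ran = []

async def ainvoke(data):
    ran.append("node")             # stands in for node + callback execution
    return {"result": data}

broken = ainvoke("x")              # no await: a coroutine object, body never ran
assert inspect.iscoroutine(broken)
broken.close()                     # silence the "never awaited" warning

fixed = asyncio.run(ainvoke("x"))  # properly awaited via an event loop
print(ran, fixed)
```

Note that `ran` is only appended to on the awaited path; the un-awaited call leaves no trace at all, which is exactly what a silent callback looks like.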
3) The node never executes because routing skips it
Sometimes the callback is fine. The node just never runs because conditional edges route around it.
```python
def route(state):
    return "end"  # Broken if you expected "tool_node" to run

graph.add_conditional_edges("router", route, {
    "tool_node": "tool_node",
    "end": END,
})
```
Check your routing logic and confirm the branch actually points to the node you expect. In LangGraph terms, if the edge doesn’t execute, neither will any callback inside that branch.
4) Your tool/function isn’t wrapped as a LangChain runnable/tool
If you call raw Python functions directly inside a node, LangChain callback hooks like on_tool_start and on_tool_end won’t fire.
```python
# Broken: plain Python function call
def tool_node(state):
    result = my_plain_python_function(state["query"])
    return {"result": result}
```
Wrap it as a proper tool:
```python
from langchain_core.tools import tool

@tool
def my_tool(query: str) -> str:
    """Run my_plain_python_function over the query."""  # @tool needs a docstring
    return my_plain_python_function(query)
```
That gives LangChain something it can instrument with standard callback events.
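What the wrapper buys you can be illustrated with a stdlib-only decorator, a toy stand-in for that instrumentation (not LangChain's actual code): once calls flow through a wrapper, start/end events come for free.

```python
import functools

tool_events = []

def instrumented(fn):
    """Toy stand-in for tool wrapping: emit start/end events around the call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        tool_events.append(("on_tool_start", fn.__name__))
        result = fn(*args, **kwargs)
        tool_events.append(("on_tool_end", fn.__name__))
        return result
    return wrapper

@instrumented
def my_tool(query):
    return query[::-1]

my_tool("abc")
print(tool_events)
```

A raw function call has no such wrapper around it, which is why on_tool_start / on_tool_end never appear for plain helpers.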
How to Debug It
- Confirm whether the node runs at all
  - Add a plain `print("entered node")` at the top of each node.
  - If that doesn't print, the problem is routing or graph wiring, not callbacks.
- Switch from `invoke()` to `stream()`
  - If tokens or intermediate events appear only in stream mode, your issue is execution style.
  - For async graphs use:

    ```python
    async for chunk in app.astream(input_data, config={"callbacks": [handler]}):
        print(chunk)
    ```

- Verify callback-compatible objects
  - Check whether your work is done by `ChatOpenAI`, a `Runnable`, a `Tool`, or a plain Python function.
  - Plain functions won't emit LangChain lifecycle events unless wrapped.
- Inspect routing and state transitions
  - Print state before and after each node.
  - If you use conditional edges, log branch decisions explicitly:

    ```python
    def route(state):
        decision = compute_route(state)
        print("routing ->", decision)
        return decision
    ```
Prevention
- Use `.stream()` / `.astream()` whenever you depend on token-level or incremental callbacks.
- Keep all model/tool calls inside LangChain-compatible wrappers instead of raw Python helpers.
- Add a small integration test that asserts at least one callback fires for each critical node path.
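That last test can be very small. A stdlib-only sketch (here `CapturingHandler` and `run_graph` are hypothetical stand-ins for your real handler and compiled graph):

```python
# Minimal callback smoke test: run the critical path once and assert the
# handler saw at least one event. All names are illustrative stand-ins.

class CapturingHandler:
    def __init__(self):
        self.events = []

    def on_event(self, name):
        self.events.append(name)

def run_graph(state, handler):
    # Stand-in for app.invoke(...) over the critical node path.
    handler.on_event("node:agent")
    return state

def test_agent_path_fires_callback():
    handler = CapturingHandler()
    run_graph({"messages": []}, handler)
    assert handler.events, "expected at least one callback event on the agent path"

test_agent_path_fires_callback()
print("callback smoke test passed")
```

A test like this catches silent regressions (a node swapped to a raw helper, a dropped config) before they reach production.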
If you’re building production agents on LangGraph, treat callbacks as part of execution semantics, not as logging hooks. When they don’t fire, first check whether the graph actually executed the path you think it did.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.