How to Fix 'callback not firing during development' in CrewAI (Python)
What this error usually means
If you’re seeing "callback not firing during development" in CrewAI, the issue is usually not the callback function itself. It means your crew ran, but the callback hook never executed the way you expected, most often because the callback was attached to the wrong object or the agent/task wiring is off.
This shows up a lot during local development when people move fast, refactor agents/tasks, or assume a CrewAI Task callback behaves like a LangChain-style callback handler.
The Most Common Cause
The #1 cause is attaching the callback to the wrong place. In CrewAI, task-level callbacks belong on `Task`, not on `Agent`, and the function signature has to match what CrewAI passes.
Here’s the broken pattern I see most often:
| Broken | Fixed |
|---|---|
| Callback attached to `Agent` | Callback attached to `Task` |
| Wrong function signature | Accepts the task output object CrewAI passes |
| Expects callback to fire on agent creation | Fires when the task completes |
```python
# BROKEN
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

def my_callback(result):
    print("Task finished:", result)

researcher = Agent(
    role="Researcher",
    goal="Find information",
    backstory="You research things.",
    tools=[SerperDevTool()],
    callback=my_callback,  # ❌ Not where you want this
)

task = Task(
    description="Research CrewAI callbacks",
    expected_output="A summary",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
)

crew.kickoff()
```
```python
# FIXED
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

def my_callback(task_output):
    print("Task finished:", task_output)

researcher = Agent(
    role="Researcher",
    goal="Find information",
    backstory="You research things.",
    tools=[SerperDevTool()],
)

task = Task(
    description="Research CrewAI callbacks",
    expected_output="A summary",
    agent=researcher,
    callback=my_callback,  # ✅ Correct place
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
)

crew.kickoff()
```
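Depending on your CrewAI version, the object passed to the callback is typically a task output object rather than a plain string. Here is a defensive sketch; the `.raw` attribute name is an assumption about your installed version, so the code falls back to `str()` if it is absent:

```python
# Sketch only: treating `.raw` as the plain-text payload is an assumption
# that depends on your CrewAI version; str() is the safe fallback.
def my_callback(task_output):
    text = getattr(task_output, "raw", None) or str(task_output)
    print("Task finished:", text)
```

Writing the callback this way means it keeps working whether CrewAI hands you a rich output object or a bare string.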
If you’re using an older example from docs or GitHub issues, check whether it’s referencing a deprecated API shape. CrewAI has changed enough that stale examples are a real source of confusion.
Other Possible Causes
1. Your callback signature does not match what CrewAI passes
If your function expects no args, or too many args, it may fail silently depending on how you’re running it.
```python
# BROKEN
def my_callback():
    print("done")

# FIXED
def my_callback(task_output):
    print("done:", task_output)
```
If you want to inspect what gets passed, log the type first:
```python
def my_callback(task_output):
    print(type(task_output))
    print(task_output)
```
2. You are using async execution but your callback is blocking or misplaced
When running async flows, make sure you’re not mixing sync assumptions with async kickoff paths.
```python
# BROKEN: sync callback logic inside an async flow without checking the execution path
result = await crew.kickoff_async()

# FIXED: keep the callback simple and non-blocking
def my_callback(task_output):
    print("Task output:", task_output)
```
If you’re doing file writes or network calls inside the callback, keep them short and deterministic during debugging.
3. The task never actually completes
A callback cannot fire if the task errors out before completion. Common symptoms include tool failures, bad prompts, or model errors like:
- `ValidationError`
- `AttributeError: 'NoneType' object has no attribute ...`
- tool runtime exceptions
Example:
```python
task = Task(
    description="Call internal API and summarize results",
    expected_output="Summary",
    agent=researcher,
)
```
If `researcher` hits a tool exception before finishing, your callback won’t run. Check the tool and model logs first.
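To make these early failures visible, you can wrap the kickoff so exceptions print a full traceback instead of dying quietly. A small stdlib helper sketch; `run_with_trace` is my name, not a CrewAI API:

```python
import traceback

def run_with_trace(crew):
    """Run crew.kickoff() and print the full traceback if the task dies early."""
    try:
        return crew.kickoff()
    except Exception:
        # The real cause is usually a tool or model error upstream of the callback.
        traceback.print_exc()
        raise
```

Usage is a drop-in replacement: `result = run_with_trace(crew)`. When the callback never fires, the traceback printed here tells you whether the task even finished.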
4. You’re expecting callbacks on every intermediate step
CrewAI task callbacks are not always “stream every token” style hooks. They usually fire when a task finishes.
```python
# BROKEN expectation:
# "I want this to run for every agent thought/action"

# FIXED understanding:
# Use Task(callback=...) for the final task output only
```
If you need step-level observability, use tracing/logging around agent execution instead of relying on the task callback alone.
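If a per-step hook is what you actually need, recent CrewAI versions accept a `step_callback` on `Crew` (and `Agent`) that fires per agent action. Treat this as version-dependent, and note the payload type varies across versions, so log defensively:

```python
# Sketch: a defensive step-level logger. The exact payload type CrewAI
# passes to step_callback varies by version, so avoid assuming attributes.
def step_logger(step):
    print("STEP:", type(step).__name__, str(step)[:120])

# Attached alongside the task callback (names as in the examples above;
# assumes step_callback exists in your installed version):
# crew = Crew(agents=[researcher], tasks=[task], step_callback=step_logger)
```

Using `type(...).__name__` and a truncated `str(...)` means the logger never crashes on an unexpected payload shape.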
How to Debug It
1. Confirm where the callback is attached
   - It should be on `Task(callback=...)`, not only on `Agent`.
   - Search your code for multiple `callback=` assignments and remove duplicates.
2. Print inside the callback
   - Add a hard `print("callback fired")`.
   - If that doesn’t show up, the problem is wiring or execution flow, not formatting.
3. Check for exceptions before completion
   - Run with full logs enabled.
   - Look for tool errors, prompt failures, or stack traces before `crew.kickoff()` returns.
4. Reduce to one agent and one task
   - Strip your project down to a minimal repro.
   - Remove tools, memory, delegation, and extra tasks until the callback fires reliably.
A minimal debug version should look like this:
```python
from crewai import Agent, Task, Crew

def debug_callback(output):
    print("CALLBACK FIRED")
    print(output)

agent = Agent(
    role="Tester",
    goal="Return one short answer",
    backstory="Minimal test agent.",
)

task = Task(
    description="Say hello in one sentence.",
    expected_output="One sentence greeting",
    agent=agent,
    callback=debug_callback,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)
```
If this works but your real project does not, reintroduce complexity one piece at a time.
Prevention
- Attach callbacks only at the level CrewAI expects for that feature: usually `Task(callback=...)`.
- Keep callbacks small and side-effect free; push heavy logic into separate functions.
- Pin your CrewAI version and re-check examples after upgrades; stale snippets break fast.
- Build a minimal smoke test for each new crew so you catch wiring issues before adding tools and multi-agent logic.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.