How to Fix 'callback not firing when scaling' in CrewAI (Python)

By Cyprian Aarons · Updated 2026-04-21

What the error means

When CrewAI says a callback is not firing when scaling, it usually means your task or agent callback works in a single-run setup but stops triggering once you move to multiple agents, parallel tasks, or async execution. In practice, this shows up when you add more workers, switch to kickoff_async(), or reuse the same callback object across runs.

The key thing: this is usually not a CrewAI bug. It’s almost always a wiring problem, lifecycle issue, or a callback signature mismatch.

The Most Common Cause

The #1 cause is passing the callback in the wrong place, or using an object that gets recreated or dropped when tasks scale out.

In CrewAI, callbacks are typically attached to Agent or Task. If you only attach them to one layer, or you define them inside a function that gets garbage-collected / replaced during orchestration, they may appear to work locally and fail under load.

Broken vs fixed pattern

Broken pattern → Fixed pattern

  • Callback defined inline and attached inconsistently → Callback defined once and attached explicitly
  • Works for one task, fails when scaling to multiple tasks → Same callback instance reused across all tasks
  • Assumes crew-level execution will inherit task-level callbacks → Callback passed where CrewAI actually invokes it
# BROKEN
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

def build_crew():
    def on_task_complete(result):
        print("Task finished:", result)

    researcher = Agent(
        role="Researcher",
        goal="Find market data",
        backstory="Senior analyst",
        tools=[SerperDevTool()],
    )

    task1 = Task(
        description="Research competitor pricing",
        agent=researcher,
        callback=on_task_complete,
    )

    task2 = Task(
        description="Summarize findings",
        agent=researcher,
        # callback missing here
    )

    return Crew(agents=[researcher], tasks=[task1, task2])

crew = build_crew()
crew.kickoff()

# FIXED
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

def on_task_complete(result):
    print("Task finished:", result)

researcher = Agent(
    role="Researcher",
    goal="Find market data",
    backstory="Senior analyst",
    tools=[SerperDevTool()],
)

task1 = Task(
    description="Research competitor pricing",
    agent=researcher,
    callback=on_task_complete,
)

task2 = Task(
    description="Summarize findings",
    agent=researcher,
    callback=on_task_complete,
)

crew = Crew(agents=[researcher], tasks=[task1, task2])
crew.kickoff()

If you’re using parallel execution, this matters even more. Each Task should own its own callback behavior unless you’ve confirmed your version of CrewAI propagates callbacks the way you expect.
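One way to make that rule concrete is to attach the same module-level callback to every task in a loop, then verify nothing was missed before kickoff. This is a minimal sketch using a `SimpleTask` stand-in so it runs without CrewAI installed; with CrewAI, build `crewai.Task` objects the same way:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SimpleTask:
    # stand-in for crewai.Task so this sketch runs without CrewAI installed
    description: str
    callback: Optional[Callable] = None

def on_task_complete(result):
    print("Task finished:", result)

descriptions = ["Research competitor pricing", "Summarize findings"]

# attach the SAME callback object to every task, explicitly
tasks = [SimpleTask(description=d, callback=on_task_complete) for d in descriptions]

# pre-flight check: fail loudly if any task is missing its hook
missing = [t.description for t in tasks if t.callback is None]
assert not missing, f"tasks missing callbacks: {missing}"
```

Building tasks in a loop means there is exactly one place where the callback is (or isn't) wired, instead of N copies that can drift.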

Other Possible Causes

1) Wrong callback signature

CrewAI may call your function with an object you didn’t expect. If your callback expects no args but CrewAI passes a TaskOutput, it can fail silently depending on how exceptions are handled.

# BROKEN
def on_complete():
    print("done")

# FIXED
def on_complete(task_output):
    print(task_output.raw)

If you see errors like:

  • TypeError: on_complete() takes 0 positional arguments but 1 was given
  • TypeError: 'TaskOutput' object is not iterable

this is the first place to check.
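If you're not sure what your installed version actually passes, a tolerant `*args` signature lets you log the payload before committing to one shape. This is a defensive sketch, not a CrewAI requirement; `FakeOutput` is a stand-in for a TaskOutput-like payload:

```python
def on_complete(*args, **kwargs):
    # accept anything, then log exactly what the framework passed
    print("callback args:", [type(a).__name__ for a in args])
    if not args:
        return None
    output = args[0]
    # TaskOutput-style objects expose .raw; fall back to str() otherwise
    text = getattr(output, "raw", str(output))
    print(text)
    return text

class FakeOutput:
    # stand-in for a TaskOutput-like payload
    raw = "Task finished"

on_complete(FakeOutput())
```

Once the log shows you the real payload type, tighten the signature back to a single named parameter.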

2) Using async kickoff with sync-only callbacks

If you call kickoff_async() but your callback blocks on sync I/O or depends on event-loop-unfriendly code, it may look like it never fires under scale.

# BROKEN
import requests

def on_complete(output):
    requests.post("https://example.com/hook", json={"text": output.raw})

# FIXED
import httpx

async def on_complete(output):
    async with httpx.AsyncClient() as client:
        await client.post("https://example.com/hook", json={"text": output.raw})

If your flow uses async agents/tasks, keep the entire path consistent. Mixing sync callbacks into async orchestration is a common failure mode.
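If you can't rewrite the callback as async, another option is to hand the blocking work to a worker thread from an async wrapper. A standard-library sketch, where `post_to_webhook` is a placeholder for your blocking call:

```python
import asyncio

def post_to_webhook(text):
    # placeholder for blocking I/O such as requests.post(...)
    return f"posted: {text}"

async def on_complete(output):
    # hand the blocking call to a worker thread; the event loop keeps running
    text = getattr(output, "raw", str(output))
    return await asyncio.to_thread(post_to_webhook, text)

# demo: a plain string stands in for a TaskOutput
print(asyncio.run(on_complete("summary ready")))
```

`asyncio.to_thread` (Python 3.9+) keeps the event loop responsive, which matters once many tasks finish at the same time.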

3) Reusing mutable state across parallel tasks

When multiple tasks finish at once, shared state can race. The callback fires, but your logging or persistence code overwrites itself and makes it look like nothing happened.

# BROKEN
events = []

def on_complete(output):
    events.append(output.raw)  # race-prone in parallel runs

Use a thread-safe queue, database write, or external logger instead.

# FIXED
from queue import Queue

events = Queue()

def on_complete(output):
    events.put(output.raw)

4) Version mismatch between CrewAI and your examples

CrewAI has changed its APIs across releases. A callback example from an old blog post may pass callback= in a place your installed version no longer honors.

Check:

  • pip show crewai
  • pip freeze | grep crewai
  • Your installed docs vs copied example code

A symptom here is code that runs without raising an error but never triggers the expected hook.
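You can also encode the check in Python so a mismatched environment fails loudly instead of silently dropping hooks. A sketch using only the standard library; the `"0.28.0"` minimum is an arbitrary example, not an official requirement, so substitute whatever version your callback code was written against:

```python
from importlib import metadata

def version_tuple(version):
    # "0.86.1" -> (0, 86, 1); ignores non-numeric segments like "rc1"
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def is_at_least(installed, minimum):
    # compare dotted versions numerically, not as strings
    return version_tuple(installed) >= version_tuple(minimum)

try:
    installed = metadata.version("crewai")
    status = "ok" if is_at_least(installed, "0.28.0") else "older than expected"
    print("crewai", installed, status)
except metadata.PackageNotFoundError:
    print("crewai is not installed in this environment")
```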

How to Debug It

  1. Verify where the callback is attached

    • Check whether it’s set on Task, Agent, or both.
    • Print the constructed objects before kickoff.
    • Confirm every scaled-out task has the same hook if that’s what you want.
  2. Add hard logging inside the callback

    • Don’t just print one line.
    • Log entry/exit plus payload type.
    • Example:
def on_complete(output):
    print("callback entered")
    print(type(output))
    print(getattr(output, "raw", None))
  3. Run one task at a time

    • Disable parallelism.
    • Remove async temporarily.
    • If the callback works in serial mode but fails in scaled mode, you’re dealing with concurrency or propagation.
  4. Catch exceptions inside the callback

    • A failing callback often looks like “not firing” because the exception is swallowed upstream.
    • Wrap it explicitly:
def on_complete(output):
    try:
        print(output.raw)
        # persistence / webhook / DB write here
    except Exception as e:
        print(f"callback failed: {e!r}")
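The first two steps can be folded into a quick pre-flight check that runs before kickoff. A sketch with a `FakeTask` stand-in; with CrewAI, pass `crew.tasks` instead:

```python
def check_callbacks(tasks):
    # returns indices of tasks that have no callback attached
    missing = []
    for i, task in enumerate(tasks):
        cb = getattr(task, "callback", None)
        if cb is None:
            missing.append(i)
        else:
            print(f"task {i}: callback = {cb.__name__}")
    return missing

class FakeTask:
    # stand-in for crewai.Task in this sketch
    def __init__(self, callback=None):
        self.callback = callback

def on_complete(output):
    print(getattr(output, "raw", output))

tasks = [FakeTask(on_complete), FakeTask()]  # second task forgot its hook
print("tasks missing callbacks:", check_callbacks(tasks))
```

Running this in CI against your real crew construction catches the "one task silently lost its hook" failure before it reaches production.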

Prevention

  • Attach callbacks explicitly to every Task that needs them. Don’t assume crew-level behavior will cascade.
  • Keep callbacks small and side-effect focused: log, enqueue, persist. Don’t run heavy business logic inside them.
  • Test both serial and scaled execution paths before shipping:
    • single task
    • multiple tasks
    • async kickoff
    • parallel workers

If you want one practical rule: treat callbacks as infrastructure hooks, not application logic. The smaller and more explicit they are, the less likely they are to disappear when your CrewAI setup scales.



By Cyprian Aarons, AI Consultant at Topiax.
