How to Fix 'callback not firing' in AutoGen (Python)

By Cyprian Aarons · Updated 2026-04-21
Tags: callback-not-firing, autogen, python

What “callback not firing” usually means

In AutoGen, this error usually means your handler, reply function, or event callback was registered, but the framework never reached it. Most of the time it happens because the agent never entered the code path you expected, or because the callback signature does not match what AutoGen is trying to invoke.

The symptom is often misleading: the app runs, no exception is thrown, but your custom logic never executes. In logs, you may see normal agent chatter from AssistantAgent, UserProxyAgent, or ConversableAgent, but your callback stays silent.

The Most Common Cause

The #1 cause is registering the callback on the wrong object or using the wrong reply hook for the agent type.

In AutoGen, people often attach a custom reply function to a UserProxyAgent and expect it to fire during assistant responses. But UserProxyAgent only calls its registered replies when it is asked to generate a response. If the conversation flow never routes through that agent, your callback won’t fire.

Broken vs fixed pattern

Broken pattern → Fixed pattern

  • Callback attached to the wrong agent or wrong hook → Callback attached to the agent that actually generates the reply
  • Signature doesn’t match AutoGen’s expected reply function format → Reply function returns (final, reply) correctly
# BROKEN
from autogen import AssistantAgent, UserProxyAgent

assistant = AssistantAgent(name="assistant")
user = UserProxyAgent(name="user")

def my_callback(message):
    print("Callback fired")
    return "done"

# Wrong: this does not wire into assistant response generation
user.register_reply([AssistantAgent], my_callback)

user.initiate_chat(
    assistant,
    message="Run my task"
)
# FIXED
from autogen import AssistantAgent, UserProxyAgent

assistant = AssistantAgent(name="assistant")
user = UserProxyAgent(name="user")

def my_callback(recipient, messages, sender, config):
    print("Callback fired")
    return True, "done"

# Register on the agent that will actually produce the reply path
assistant.register_reply([UserProxyAgent], my_callback)

user.initiate_chat(
    assistant,
    message="Run my task"
)

If you are using older AutoGen examples, pay attention to method names and signatures. In many versions, register_reply() expects a callable with the right parameters and return shape; if it doesn’t match, you may get silent failure or an error like:

  • TypeError: my_callback() takes 1 positional argument but 4 were given
  • ValueError: Reply function must return a tuple of (final, reply)
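A quick pre-flight check can catch the arity mismatch before any chat runs. The helper below is a sketch of my own, not an AutoGen API; it uses `inspect` to verify that a candidate reply function accepts the four positional parameters AutoGen passes:

```python
import inspect

def check_reply_signature(fn):
    """Pre-flight sketch: does fn accept the four positional parameters
    AutoGen passes to reply functions?"""
    positional = [
        p for p in inspect.signature(fn).parameters.values()
        if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)
    ]
    if len(positional) < 4:
        return (f"{fn.__name__} accepts {len(positional)} positional "
                "args; AutoGen passes (recipient, messages, sender, config)")
    return None  # signature looks compatible
```

Running it against the broken `my_callback(message)` above would flag the problem immediately, instead of leaving you with a silent non-execution at chat time.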

Other Possible Causes

1) The conversation never reaches the branch that triggers your callback

If you added a conditional tool path or nested group chat flow, your callback may be correct but unreachable.

# Example: callback only fires if this branch is hit
if "refund" in last_user_message.lower():
    assistant.register_reply([UserProxyAgent], refund_handler)

If your input says "chargeback" instead of "refund", nothing fires. This is common in routing logic built around keyword checks.
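One way to make such gaps visible is to register a single reply function unconditionally and do the keyword routing inside it, so an unmatched message falls through loudly instead of never firing. A sketch, with a hypothetical intent table:

```python
# Hypothetical intent table; extend as routing grows.
INTENT_HANDLERS = {
    "refund": lambda text: "starting refund flow",
    "chargeback": lambda text: "starting chargeback flow",
}

def routing_callback(recipient, messages, sender, config):
    last = messages[-1].get("content", "").lower() if messages else ""
    for keyword, handler in INTENT_HANDLERS.items():
        if keyword in last:
            return True, handler(last)
    # No match: log it loudly and let AutoGen's default reply chain continue.
    print(f"routing_callback: no intent matched in {last!r}")
    return False, None
```

With this shape, "chargeback" gets its own branch, and anything unrecognized produces a log line instead of pure silence.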

2) You used an async callback in a sync chat flow

AutoGen won’t always await async functions unless you’re using the async API end-to-end.

# BROKEN
async def my_callback(recipient, messages, sender, config):
    print("async handler")
    return True, "ok"

user.initiate_chat(assistant, message="hello")  # sync flow

Use either:

# FIXED: sync callback for sync chat
def my_callback(recipient, messages, sender, config):
    return True, "ok"

Or switch fully to async chat methods if your version supports them.
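If migrating the whole flow to async isn't practical, one workaround is a sync adapter that drives the coroutine to completion inside the synchronous reply path. This is a sketch, not an AutoGen feature, and it assumes no event loop is already running in the calling thread:

```python
import asyncio

async def async_handler(recipient, messages, sender, config):
    # Stand-in for real async work (an awaited API call, etc.).
    await asyncio.sleep(0)
    return True, "ok"

def sync_adapter(recipient, messages, sender, config):
    # Drive the coroutine to completion from the sync reply path.
    # asyncio.run raises RuntimeError if a loop is already running here.
    return asyncio.run(async_handler(recipient, messages, sender, config))
```

You would then register `sync_adapter`, not `async_handler`, with `register_reply()`.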

3) Your model call fails before callbacks are evaluated

Sometimes what looks like “callback not firing” is actually an upstream LLM failure. Check for errors like:

  • openai.BadRequestError
  • RateLimitError
  • AuthenticationError
  • Model client error

If AssistantAgent cannot get a completion from the model client, your downstream reply hook never runs.

llm_config = {
    "config_list": [{"model": "gpt-4o-mini", "api_key": "..." }],
    "temperature": 0,
}
assistant = AssistantAgent(name="assistant", llm_config=llm_config)

Verify credentials and model name before debugging callbacks.
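Before blaming the callback, a structural check of the config can rule out the obvious gaps. The helper below is a sketch of my own, not an AutoGen utility; it validates shape only, not whether the key actually authenticates:

```python
def validate_llm_config(llm_config):
    # Sketch: catch empty or malformed config_list entries before
    # spending time debugging downstream callbacks.
    problems = []
    config_list = llm_config.get("config_list", [])
    if not config_list:
        problems.append("config_list is empty")
    for i, entry in enumerate(config_list):
        if not entry.get("model"):
            problems.append(f"entry {i}: missing model")
        if not entry.get("api_key") and not entry.get("base_url"):
            problems.append(f"entry {i}: no api_key or base_url")
    return problems
```

An empty return list means the config is at least structurally plausible; any strings returned are things to fix before touching callback code.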

4) You expected tool execution callbacks but didn’t register tools correctly

If you’re waiting for a function/tool callback in an agent workflow, make sure the tool is exposed and enabled.

def lookup_policy(policy_id: str):
    return {"policy_id": policy_id, "status": "active"}

# NOTE: constructor-level tools=[...] is the newer autogen-agentchat style
# (which takes a model_client rather than llm_config); classic pyautogen
# wires tools via register_for_llm / register_for_execution instead.
assistant = AssistantAgent(
    name="assistant",
    llm_config=llm_config,
    tools=[lookup_policy],
)

Without proper tool registration in your AutoGen version, the LLM may mention a tool call in text but never execute it.
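Classic pyautogen (0.2.x) splits tool wiring into two halves: `register_for_llm` on the assistant advertises the schema to the model, while `register_for_execution` on the executor agent actually runs the call, and the split is easy to get half-right. This standalone sketch mimics that two-sided registry as a local illustration; it is not the AutoGen implementation:

```python
class ToolRegistry:
    """Toy two-sided registry mirroring the advertise/execute split."""

    def __init__(self):
        self.advertised = {}   # schemas the LLM is told about
        self.executable = {}   # functions this process can actually run

    def register_for_llm(self, fn):
        self.advertised[fn.__name__] = fn
        return fn

    def register_for_execution(self, fn):
        self.executable[fn.__name__] = fn
        return fn

    def execute(self, name, **kwargs):
        if name not in self.executable:
            # The classic half-wired symptom: the model mentions the
            # tool in text, but nothing is registered to run it.
            raise LookupError(f"{name} is advertised but not executable")
        return self.executable[name](**kwargs)


registry = ToolRegistry()

@registry.register_for_llm            # half-wired: advertised only
def lookup_policy(policy_id: str):
    return {"policy_id": policy_id, "status": "active"}

registry.register_for_execution(lookup_policy)  # the missing half
```

If you forget the execution half, `execute()` fails loudly here; in a real agent flow the equivalent mistake usually fails silently, which is exactly the "callback not firing" symptom.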

How to Debug It

  1. Confirm which agent should fire

    • Print before and after each register_reply() call.
    • Verify whether you attached the handler to AssistantAgent, UserProxyAgent, or another subclass.
  2. Add a hard log at entry

    • Put print("entered callback") as line one.
    • If that never prints, your issue is routing or registration.
    • If it prints and then fails later, it’s signature or return-value related.
  3. Check signature against your installed AutoGen version

    • Run:
      pip show pyautogen autogen-agentchat autogen-core
      
    • Version drift matters here. Callback APIs changed across releases.
    • A mismatch often shows up as:
      • TypeError
      • silent non-execution
      • unexpected tuple unpacking errors
  4. Reduce to one agent pair

    • Strip out group chat managers, routers, and nested agents.
    • Reproduce with only:
      • one AssistantAgent
      • one UserProxyAgent
      • one registered reply function

That tells you whether the bug is in AutoGen wiring or in your orchestration layer.
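The hard log from step 2 can be packaged as a decorator so every reply function in the reduced setup announces itself. A sketch (the decorator name is mine, not an AutoGen API):

```python
import functools

def logged_reply(fn):
    # Wrap any reply function with hard entry/exit logs (debug step 2).
    @functools.wraps(fn)
    def wrapper(recipient, messages, sender, config):
        print(f"entered callback: {fn.__name__}")
        result = fn(recipient, messages, sender, config)
        print(f"exited callback: {fn.__name__} -> {result!r}")
        return result
    return wrapper
```

Decorate each reply function before registering it; if the entry line never prints, the bug is routing or registration, and if the entry prints but the exit doesn't, the bug is inside your handler.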

Prevention

  • Register callbacks on the exact agent and hook that owns that execution path.
  • Keep reply functions synchronous unless you are using async APIs end-to-end.
  • Pin AutoGen versions in production and test callback signatures after upgrades.
  • Log every registration step once at startup so missing hooks show up immediately.
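The last point can be made concrete with a small audit helper that funnels every registration through one choke point. The helper is hypothetical and assumes the agent exposes AutoGen's `register_reply(trigger, reply_func)` method:

```python
REGISTRATIONS = []

def audited_register(agent, trigger, reply_fn):
    # Funnel every register_reply call through one choke point so the
    # startup log lists exactly which hooks exist. Assumes `agent`
    # exposes AutoGen's register_reply(trigger, reply_func) method.
    agent.register_reply(trigger, reply_fn)
    entry = f"{type(agent).__name__}: {reply_fn.__name__} triggered by {trigger}"
    REGISTRATIONS.append(entry)
    print("registered:", entry)
```

At startup you print `REGISTRATIONS` once; a hook that is missing from that list was never wired, which is a far cheaper discovery than a business rule silently not firing in production.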

If you’re building production agents for banking or insurance workflows, treat callback registration like dependency wiring: verify it explicitly. Silent failures cost more than exceptions because they look like “the system worked” until a business rule disappears.


By Cyprian Aarons, AI Consultant at Topiax.