How to Fix 'callback not firing during development' in AutoGen (Python)

By Cyprian Aarons · Updated 2026-04-21
Tags: callback-not-firing-during-development, autogen, python

What this error usually means

If you’re seeing “callback not firing during development” in AutoGen, the agent is running, but the event handler or reply hook you expected never gets invoked. In practice, this usually happens when the callback is registered on the wrong object, the wrong event type, or after the conversation already started.

This shows up a lot during local development because AutoGen has a few similar-sounding hooks: register_reply(), register_hook(), custom Agent subclasses, and framework-specific callbacks. Mixing them up gives you a silent failure: no exception, just no callback.

The Most Common Cause

The #1 cause is registering the callback on the wrong agent instance or using the wrong AutoGen API for the behavior you want.

In AutoGen Python, callbacks are often attached to a specific ConversableAgent. If you register on one instance and send messages through another, nothing fires. Same thing if you expect register_reply() to behave like a global listener — it does not.

Broken vs fixed

| Broken pattern | Fixed pattern |
| --- | --- |
| Callback registered on the wrong instance | Callback registered on the exact agent handling replies |
| Uses register_reply() but expects message interception everywhere | Uses the correct agent and reply function signature |
| Conversation starts before registration | Registration happens before initiate_chat() |
# BROKEN
from autogen import ConversableAgent

assistant = ConversableAgent(
    name="assistant",
    llm_config={"config_list": [{"model": "gpt-4o-mini", "api_key": "YOUR_KEY"}]},
)

user_proxy = ConversableAgent(name="user_proxy")

def my_callback(recipient, messages=None, sender=None, config=None):
    print("Callback fired")
    # no return here: AutoGen expects a (final, reply) tuple from reply hooks

# Registered on assistant...
assistant.register_reply([ConversableAgent], my_callback)

# ...but the chat is driven from user_proxy, and the missing return value
# breaks the reply pipeline even when registration itself is correct
user_proxy.initiate_chat(assistant, message="Hello")
# FIXED
from autogen import ConversableAgent

assistant = ConversableAgent(
    name="assistant",
    llm_config={"config_list": [{"model": "gpt-4o-mini", "api_key": "YOUR_KEY"}]},
)

user_proxy = ConversableAgent(
    name="user_proxy",
    llm_config=False,          # no LLM on the proxy side
    human_input_mode="NEVER",  # don't block on stdin while testing
)

def my_callback(recipient, messages=None, sender=None, config=None):
    print("Callback fired")
    return True, None  # keep AutoGen's reply pipeline happy

# Register before starting chat
assistant.register_reply([ConversableAgent], my_callback)

# Start conversation with the correct sender/recipient pair
user_proxy.initiate_chat(assistant, message="Hello")

A few things matter here:

  • The callback must match AutoGen’s expected signature for that hook.
  • You need to register it on the agent that will actually process the message.
  • In many cases you need to return a valid tuple or value so AutoGen continues execution correctly.

If your code prints nothing and there’s no stack trace, this is usually where I’d start.
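To see why instance-bound registration matters, here is a minimal pure-Python sketch of a per-instance callback registry. This is a toy illustration of the pattern, not AutoGen's actual internals:

```python
# Toy illustration of instance-bound callbacks -- NOT AutoGen internals.
# Each agent keeps its own private list of reply functions, so a hook
# registered on one instance is invisible to every other instance.

class ToyAgent:
    def __init__(self, name):
        self.name = name
        self._reply_funcs = []  # per-instance registry

    def register_reply(self, func):
        self._reply_funcs.append(func)

    def receive(self, message):
        # Only THIS agent's registered functions ever run.
        for func in self._reply_funcs:
            func(self, message)

fired = []

def my_callback(recipient, message):
    fired.append((recipient.name, message))

assistant = ToyAgent("assistant")
user_proxy = ToyAgent("user_proxy")

assistant.register_reply(my_callback)

user_proxy.receive("Hello")  # nothing fires: registered on assistant
assistant.receive("Hello")   # fires: this instance owns the callback

print(fired)  # [('assistant', 'Hello')]
```

The silent no-op on the first receive() call is exactly what a "wrong instance" bug looks like in a real AutoGen run: no error, no output.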

Other Possible Causes

1) You registered after the chat already started

AutoGen won’t retroactively attach callbacks to an active conversation.

# BAD
chat_result = user_proxy.initiate_chat(assistant, message="Run task")
assistant.register_reply([ConversableAgent], my_callback)  # too late
# GOOD
assistant.register_reply([ConversableAgent], my_callback)
chat_result = user_proxy.initiate_chat(assistant, message="Run task")

2) Your callback signature does not match what AutoGen expects

A common symptom is no callback invocation or an internal failure that gets swallowed by your surrounding code.

# BAD: wrong signature for a reply hook
def my_callback(message):
    print(message)
# GOOD: use the expected parameters for the hook you're registering
def my_callback(recipient, messages=None, sender=None, config=None):
    print("Callback fired:", sender.name if sender else None)
    return True, None

If you’re using a custom hook or extension point, check whether it expects:

  • recipient
  • messages
  • sender
  • config

Those names matter more than people think.
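If you want to catch a mismatched signature before an internal failure gets swallowed, a small pre-registration check with inspect.signature works. check_reply_signature is a hypothetical helper based on the parameter names above, not an AutoGen API:

```python
import inspect

# Hypothetical pre-registration check: verify a callback accepts the
# parameter names a reply hook is expected to take before wiring it up.
EXPECTED_PARAMS = {"recipient", "messages", "sender", "config"}

def check_reply_signature(func):
    params = set(inspect.signature(func).parameters)
    missing = EXPECTED_PARAMS - params
    if missing:
        raise TypeError(f"{func.__name__} is missing parameters: {sorted(missing)}")
    return func

def bad_callback(message):  # wrong shape for a reply hook
    print(message)

def good_callback(recipient, messages=None, sender=None, config=None):
    return True, None

check_reply_signature(good_callback)  # passes silently

try:
    check_reply_signature(bad_callback)
except TypeError as e:
    print(e)  # names the missing parameters
```

Failing loudly at registration time is much easier to debug than a callback that is simply never invoked.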

3) You’re using async code but not awaiting it

If your project uses a_initiate_chat() or async handlers and you forget await, nothing runs as expected.

# BAD
async def main():
    user_proxy.a_initiate_chat(assistant, message="Hello")  # missing await
# GOOD
async def main():
    await user_proxy.a_initiate_chat(assistant, message="Hello")

Also make sure your callback itself matches the async path if your integration requires it.
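This failure mode is plain asyncio, so you can reproduce it without AutoGen at all. a_initiate_chat_stub below is a stand-in coroutine, not AutoGen's real method:

```python
import asyncio

# Plain-asyncio illustration of the missing-await failure mode.
calls = []

async def a_initiate_chat_stub(message):
    # stand-in for an async chat call -- NOT AutoGen's real method
    calls.append(message)

async def broken():
    a_initiate_chat_stub("Hello")        # coroutine created, never run
                                         # (Python emits a RuntimeWarning)

async def fixed():
    await a_initiate_chat_stub("Hello")  # actually executes

asyncio.run(broken())
print(calls)  # [] -- the un-awaited coroutine silently did nothing

asyncio.run(fixed())
print(calls)  # ['Hello']
```

The only visible symptom of the broken version is a "coroutine was never awaited" RuntimeWarning, which is easy to miss in busy dev logs.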

4) Human input mode blocks execution during development

If you’re using UserProxyAgent with human input enabled, AutoGen may wait for input instead of reaching your callback path.

from autogen import UserProxyAgent

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",  # can stall local testing
)

For debugging callbacks:

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
)

This is especially relevant when people say “it works in one run but not in dev.” The code is waiting for stdin instead of moving through the reply chain.
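You can reproduce the stall without AutoGen: anything that reads stdin blocks until a person types. In this sketch, get_human_input is a hypothetical stand-in for an interactive prompt, and feeding a canned line plays the role that human_input_mode="NEVER" plays in a real run:

```python
import io
import sys

# Illustration only: "ALWAYS"-style input handling means something in the
# loop reads stdin and blocks. Feeding a canned line keeps the run moving.

def get_human_input(prompt):
    # hypothetical stand-in for an interactive prompt, not AutoGen's method
    print(prompt, end="")
    return sys.stdin.readline().strip()

real_stdin = sys.stdin
sys.stdin = io.StringIO("looks good\n")  # simulate non-interactive input
try:
    reply = get_human_input("Feedback on the agent's answer? ")
finally:
    sys.stdin = real_stdin  # always restore the real stdin

print(reply)  # looks good
```

Without the canned input, that readline() call hangs forever, which from the outside looks exactly like "my callback never fires."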

How to Debug It

  1. Confirm which object owns the callback

    • Print both agent names before registration.
    • Make sure registration happens on the same ConversableAgent or AssistantAgent instance that processes replies.
  2. Add a hard print at registration time

    • Don’t assume registration succeeded.
    • Log immediately after calling register_reply() or your hook setup.
assistant.register_reply([ConversableAgent], my_callback)
print("Registered callback on:", assistant.name)
  3. Check whether your flow is sync or async

    • If you call a_initiate_chat(), use await.
    • If your callback depends on async I/O, verify it’s wired into an async-compatible path.
  4. Reduce to one agent pair

    • Strip out group chat managers, nested agents, tool wrappers, and UI layers.
    • Test with only UserProxyAgent + AssistantAgent.
    • If it fires there, your bug is in orchestration code around AutoGen.

Prevention

  • Register hooks before starting any chat session.
  • Keep one small test harness that verifies callbacks fire with plain UserProxyAgent and AssistantAgent.
  • Match each AutoGen API to its intended use:
    • register_reply() for reply generation hooks
    • async methods with explicit await
    • avoid assuming one agent’s registration affects another agent instance

If you treat AutoGen callbacks as instance-bound and order-sensitive, this error stops being mysterious. Most “callback not firing” bugs are just wiring bugs.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
