AutoGen Tutorial (Python): handling async tools for intermediate developers

By Cyprian Aarons. Updated 2026-04-21.

This tutorial shows you how to wire async Python tools into AutoGen agents without blocking the event loop or fighting the framework. You need this when your agent calls APIs, databases, or internal services that already expose async def functions and you want clean concurrency instead of wrapping everything in sync hacks.

What You'll Need

  • Python 3.10+
  • autogen-agentchat
  • autogen-ext
  • openai
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with AutoGen agents, messages, and tool registration
  • A terminal that can run python and install packages with pip

Step-by-Step

  1. Install the packages and set up your environment.

    The important bit here is using the newer AutoGen split packages. autogen-agentchat gives you the agent runtime, and autogen-ext provides OpenAI model clients and tool helpers.

pip install -U autogen-agentchat "autogen-ext[openai]"
export OPENAI_API_KEY="your-api-key"

  2. Define an async tool that does real work.

    Keep the tool pure and explicit: accept typed inputs, return a string or structured data, and use async def. In production, this is where you would call an HTTP API, query a database driver that supports asyncio, or hit an internal service.

import asyncio

from autogen_core.tools import FunctionTool


async def fetch_customer_status(customer_id: str) -> str:
    await asyncio.sleep(1)
    return f"Customer {customer_id} is active and eligible for support."


customer_status_tool = FunctionTool(
    fetch_customer_status,
    name="fetch_customer_status",
    description="Fetch the current status for a customer by ID.",
)

  3. Create an assistant agent that can call tools.

    The key setting is the tools parameter on the agent: pass your FunctionTool instances there, and the model decides when to call them. When it does, AutoGen awaits your async function on the running event loop, so the call doesn't block your app.

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

agent = AssistantAgent(
    name="support_agent",
    model_client=model_client,
    tools=[customer_status_tool],
    system_message=(
        "You are a support agent. "
        "Use tools when you need customer status before answering."
    ),
)

  4. Run the agent from an async entrypoint.

    This is where people usually get tripped up: if your tool is async, your top-level orchestration should also be async. Use asyncio.run() once at the edge of your program, not inside helper functions.

import asyncio

from autogen_agentchat.messages import TextMessage


async def main() -> None:
    result = await agent.run(
        task=TextMessage(
            content="Check customer C123 and tell me if they are eligible.",
            source="user",
        ),
    )

    print(result.messages[-1].content)


if __name__ == "__main__":
    asyncio.run(main())
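To see why asyncio.run() belongs only at the program edge, here is a minimal standalone sketch (no AutoGen involved): calling asyncio.run() from inside an already-running event loop raises a RuntimeError.

```python
import asyncio


async def inner() -> str:
    return "ok"


async def helper() -> str:
    # Wrong: the loop started by the outer asyncio.run() is still
    # running, so a nested asyncio.run() raises RuntimeError.
    coro = inner()
    try:
        asyncio.run(coro)
    except RuntimeError as exc:
        coro.close()  # avoid a "coroutine was never awaited" warning
        return f"nested run failed: {exc}"
    return "nested run unexpectedly succeeded"


# Right: exactly one asyncio.run() at the edge of the program.
outcome = asyncio.run(helper())
print(outcome)
```

The fix is always the same: make the helper itself async and await it, pushing the single asyncio.run() call up to your `if __name__ == "__main__":` block.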

  5. Handle multiple async tool calls without serializing everything.

    If one user request fans out to several external lookups, run those lookups concurrently with asyncio.gather() inside a single async tool instead of awaiting them one by one. This pattern matters whenever the lookups are independent: total latency drops from the sum of the calls to roughly the slowest one.

import asyncio


async def fetch_account_balance(customer_id: str) -> str:
    await asyncio.sleep(1)
    return f"Balance for {customer_id}: $240.18"


async def fetch_open_tickets(customer_id: str) -> str:
    await asyncio.sleep(1)
    return f"{customer_id} has 2 open tickets"


async def gather_customer_context(customer_id: str) -> dict[str, str]:
    balance_task = fetch_account_balance(customer_id)
    tickets_task = fetch_open_tickets(customer_id)

    balance, tickets = await asyncio.gather(balance_task, tickets_task)
    return {"balance": balance, "tickets": tickets}

  6. Register a second tool when you need richer responses.

    For intermediate agents, aggregating related lookups into one tool call is better than making the model orchestrate several round trips. AutoGen passes the tool's return value back into the conversation so the model can reason over the combined context.

from autogen_core.tools import FunctionTool


async def get_customer_context(customer_id: str) -> str:
    context = await gather_customer_context(customer_id)
    return f"{context['balance']}; {context['tickets']}"


customer_context_tool = FunctionTool(
    get_customer_context,
    name="get_customer_context",
    description="Fetch account balance and open ticket count for a customer.",
)

agent = AssistantAgent(
    name="support_agent_v2",
    model_client=model_client,
    tools=[customer_context_tool],
)

Testing It

Run the script with a real API key and ask for something that clearly requires external data retrieval. You should see the agent decide whether to call the tool, then produce a final answer using the returned context.

A good test prompt is something like: “Check customer C123 and summarize their status.” If everything is wired correctly, you’ll see a one-second delay from the async tool and then a response containing the mocked status text.

If you want to verify concurrency, add two independent await asyncio.sleep(1) calls behind separate tools and time the request. Two serial calls should take about two seconds; two concurrent calls with asyncio.gather() should stay close to one second plus model latency.
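If you want to put numbers on that claim without touching the agent at all, this standalone sketch times both patterns; slow_lookup is a stand-in for a real service call:

```python
import asyncio
import time


async def slow_lookup(label: str) -> str:
    # Stand-in for a real network or database call.
    await asyncio.sleep(1)
    return label


async def timed_serial() -> float:
    # Two awaits in sequence: roughly 2 seconds total.
    start = time.monotonic()
    await slow_lookup("balance")
    await slow_lookup("tickets")
    return time.monotonic() - start


async def timed_concurrent() -> float:
    # Same two calls under asyncio.gather(): roughly 1 second total.
    start = time.monotonic()
    await asyncio.gather(slow_lookup("balance"), slow_lookup("tickets"))
    return time.monotonic() - start


serial_s = asyncio.run(timed_serial())
concurrent_s = asyncio.run(timed_concurrent())
print(f"serial: {serial_s:.2f}s  concurrent: {concurrent_s:.2f}s")
```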

If it fails, check these first:

  • OPENAI_API_KEY is set in the shell running Python
  • You installed both autogen-agentchat and autogen-ext
  • Your code uses asyncio.run(main()) only once at program entry
  • The tool function is declared with async def, not plain def
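The last bullet is easy to verify programmatically: inspect.iscoroutinefunction() reports whether a function was declared with async def, which makes a handy sanity check before registering tools. The function names here are illustrative:

```python
import asyncio
import inspect


async def fetch_customer_status(customer_id: str) -> str:
    await asyncio.sleep(0)
    return f"Customer {customer_id} is active."


def fetch_customer_status_sync(customer_id: str) -> str:
    return f"Customer {customer_id} is active."


is_async = inspect.iscoroutinefunction(fetch_customer_status)
is_sync = inspect.iscoroutinefunction(fetch_customer_status_sync)
print(is_async, is_sync)  # True False
```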

Next Steps

  • Add retries and timeout handling around external API calls inside your async tools.
  • Return structured JSON-like objects from tools instead of plain strings when downstream reasoning gets complex.
  • Move from single-agent tool use to multi-agent workflows once you need delegation across support, fraud, or underwriting tasks.
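As a starting point for the first bullet, here is one way to sketch retry-with-timeout around an async call. This is plain asyncio, not an AutoGen API, and unreliable_fetch is a made-up stand-in that fails once before succeeding:

```python
import asyncio
from typing import Awaitable, Callable


async def with_retries(
    make_call: Callable[[], Awaitable[str]],
    *,
    retries: int = 3,
    timeout_s: float = 2.0,
) -> str:
    # Each attempt gets a fresh coroutine, a per-attempt timeout,
    # and exponential backoff between failures.
    last_exc: Exception | None = None
    for attempt in range(retries):
        try:
            return await asyncio.wait_for(make_call(), timeout=timeout_s)
        except (asyncio.TimeoutError, ConnectionError) as exc:
            last_exc = exc
            await asyncio.sleep(0.05 * 2**attempt)
    raise RuntimeError(f"all {retries} attempts failed") from last_exc


calls = {"n": 0}


async def unreliable_fetch() -> str:
    # Made-up dependency: fails on the first call, succeeds on the second.
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient network error")
    return "Customer C123 is active."


result = asyncio.run(with_retries(unreliable_fetch))
print(result)
```

Wrapping the body of a tool like fetch_customer_status in with_retries keeps the retry policy in one place instead of scattering try/except blocks across every tool.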

By Cyprian Aarons, AI Consultant at Topiax.
