CrewAI Tutorial (Python): handling async tools for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to wire async tools into a CrewAI project without blocking your agent loop. You need this when your tools call slow external systems, such as HTTP APIs, databases, queues, or internal services, and you want concurrency instead of serial waits.

What You'll Need

  • Python 3.10+
  • crewai
  • crewai-tools
  • python-dotenv
  • An OpenAI API key set as OPENAI_API_KEY
  • Optional: any API key for the external service you want to call
  • A terminal and a virtual environment

Install the packages:

pip install crewai crewai-tools python-dotenv

Step-by-Step

  1. First, define an async tool that does real asynchronous work. CrewAI tools can be built with @tool, and the important part is exposing an async def method that uses await internally.
import asyncio
from crewai.tools import tool

@tool("async_weather_lookup")
async def async_weather_lookup(city: str) -> str:
    await asyncio.sleep(1)
    return f"Weather in {city}: 22C, clear skies"

if __name__ == "__main__":
    print(asyncio.run(async_weather_lookup("Nairobi")))
  2. Next, wrap the async tool in a pattern your agent can use safely. The cleanest approach is to keep the tool itself async, then let CrewAI call it through the agent’s normal tool execution path.
from crewai import Agent, Task, Crew, Process
from crewai.llm import LLM
from crewai.tools import tool
from dotenv import load_dotenv
import asyncio

load_dotenv()  # loads OPENAI_API_KEY from a local .env file

@tool("async_weather_lookup")
async def async_weather_lookup(city: str) -> str:
    await asyncio.sleep(1)
    return f"Weather in {city}: 22C, clear skies"

llm = LLM(model="gpt-4o-mini")

agent = Agent(
    role="Weather analyst",
    goal="Summarize city weather quickly",
    backstory="You work with live data sources.",
    llm=llm,
    tools=[async_weather_lookup],
)
  3. Then create a task that forces the agent to use the tool. Keep the prompt explicit so you can verify the async path is actually being exercised instead of getting a generic model answer.
task = Task(
    description="Use the async_weather_lookup tool for Nairobi and summarize the result in one sentence.",
    expected_output="A one-sentence weather summary for Nairobi.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
)
  4. Now run the crew from a synchronous entrypoint. If you are inside a normal Python script, this is enough; CrewAI handles the tool invocation while your async tool awaits its own I/O.
if __name__ == "__main__":
    result = crew.kickoff()
    print(result)
  5. If you need to call async tools directly outside CrewAI, use asyncio.run(). This is useful for testing the tool in isolation before plugging it into an agent workflow.
import asyncio

async def main():
    value = await async_weather_lookup("Lagos")
    print(value)

if __name__ == "__main__":
    asyncio.run(main())

Testing It

Run the script and confirm two things: the process completes without event-loop errors, and the output includes your tool result rather than only model-generated text. If you see something like "Weather in Nairobi: 22C, clear skies", your async tool executed correctly.

For deeper verification, add timing around the tool call and compare it to a synchronous version. You should see that multiple async operations can be awaited without blocking each other if you later expand this pattern to concurrent fan-out.
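As a minimal sketch of that timing check, using plain asyncio and a simulated stand-in for the tool rather than the CrewAI execution path, you can compare serial awaits against a concurrent asyncio.gather:

```python
import asyncio
import time

async def fake_lookup(city: str) -> str:
    # Simulated I/O wait; a stand-in for the real async tool
    await asyncio.sleep(0.2)
    return f"Weather in {city}: 22C, clear skies"

async def sequential(cities):
    # Each await blocks the next: total time ~ n * 0.2s
    return [await fake_lookup(c) for c in cities]

async def concurrent(cities):
    # All lookups run at once: total time ~ 0.2s regardless of n
    return await asyncio.gather(*(fake_lookup(c) for c in cities))

def timed(coro):
    start = time.perf_counter()
    result = asyncio.run(coro)
    return result, time.perf_counter() - start

if __name__ == "__main__":
    cities = ["Nairobi", "Lagos", "Accra"]
    _, t_seq = timed(sequential(cities))
    _, t_con = timed(concurrent(cities))
    print(f"sequential: {t_seq:.2f}s, concurrent: {t_con:.2f}s")
```

With three cities you should see the sequential run take roughly three times as long as the concurrent one, which confirms the awaits are overlapping rather than queuing.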

Next Steps

  • Add real I/O inside the tool, such as httpx.AsyncClient for API calls or asyncpg for database reads.
  • Build a multi-tool agent where one task fans out across several async tools and aggregates results.
  • Learn how to manage retries and timeouts inside async tools so failures do not stall your crew.
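A sketch of the last two ideas combined, again with stdlib asyncio and hypothetical stand-in sources rather than real CrewAI tools: fan out across several async calls with asyncio.gather, bound each attempt with asyncio.wait_for, and retry with a short backoff so one slow source cannot stall the whole batch.

```python
import asyncio

async def slow_source(delay: float, value: str) -> str:
    # Hypothetical stand-in for an async tool hitting a slow service
    await asyncio.sleep(delay)
    return value

async def fetch_with_retry(name, make_call, retries=2, timeout=1.0):
    # Bound each attempt with a timeout; back off briefly between retries
    for attempt in range(retries + 1):
        try:
            return await asyncio.wait_for(make_call(), timeout)
        except asyncio.TimeoutError:
            if attempt == retries:
                return f"{name}: unavailable"
            await asyncio.sleep(0.05 * (attempt + 1))

async def main():
    # Fan out across several sources concurrently and aggregate the results
    results = await asyncio.gather(
        fetch_with_retry("weather", lambda: slow_source(0.05, "22C")),
        fetch_with_retry("traffic", lambda: slow_source(0.05, "light")),
        fetch_with_retry("news", lambda: slow_source(5.0, "late"), timeout=0.1),
    )
    print(results)  # the slow news source times out and degrades gracefully

if __name__ == "__main__":
    asyncio.run(main())
```

The design choice worth copying is returning a sentinel value on exhausted retries instead of raising, so asyncio.gather still delivers a complete result list the agent can summarize.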

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
