CrewAI Tutorial (Python): handling async tools for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to write CrewAI tools that run asynchronously without blocking your agent workflow. You need this when a tool call waits on slow I/O like HTTP requests, database queries, or rate-limited APIs, and you still want your crew to stay responsive.

What You'll Need

  • Python 3.10 or newer
  • crewai
  • crewai-tools
  • litellm
  • An LLM API key set in your environment, for example:
    • OPENAI_API_KEY
  • Basic familiarity with:
    • Agent
    • Task
    • Crew
    • custom tools in CrewAI

Install the packages:

pip install crewai crewai-tools litellm

Step-by-Step

  1. Start with a custom async tool.

CrewAI tools can be async, but the key is to implement an async method that does real awaited work. For beginners, the easiest pattern is subclassing BaseTool and defining _run as an async function.

import asyncio
from crewai_tools import BaseTool


class AsyncWeatherTool(BaseTool):
    name: str = "Async Weather Tool"
    description: str = "Returns a fake weather report after an async delay."

    async def _run(self, city: str) -> str:
        # Simulate slow I/O (an HTTP call, a database query) without blocking the loop.
        await asyncio.sleep(2)
        return f"Weather for {city}: 22C, clear skies"

  2. Add a synchronous entry point so you can test the tool directly.

CrewAI can handle async tools during agent execution, but it is easier to verify a tool first from plain Python. The harness below awaits _run inside an async function and drives it with asyncio.run, which is fine for local scripts.

import asyncio


async def test_tool():
    tool = AsyncWeatherTool()
    result = await tool._run("Nairobi")
    print(result)


if __name__ == "__main__":
    asyncio.run(test_tool())

  3. Wire the tool into a CrewAI agent.

The agent does not need special async configuration. You pass the tool in the tools list, and CrewAI will call it when the model decides to use it.

from crewai import Agent

weather_agent = Agent(
    role="Weather Assistant",
    goal="Answer weather questions using tools",
    backstory="You are precise and helpful.",
    tools=[AsyncWeatherTool()],
    verbose=True,
)

  4. Create a task that forces the agent to use the tool.

Keep the task narrow so it has one job: ask for a weather lookup. That makes it obvious whether the async tool path is working or whether the model is hallucinating an answer.

from crewai import Task

weather_task = Task(
    description="Get the weather for Nairobi using the available tool.",
    expected_output="A short weather summary for Nairobi.",
    agent=weather_agent,
)

  5. Run the crew and wait for completion.

CrewAI handles the orchestration. If your tool is truly async, the crew will await it instead of blocking on a synchronous call path.

from crewai import Crew, Process

crew = Crew(
    agents=[weather_agent],
    tasks=[weather_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print("\nFinal Result:\n", result)

  6. Use multiple async tools when you need parallel external calls.

For beginners, this is where async matters most: fetching from several slow services without turning your agent into a serial bottleneck. Even if CrewAI executes tasks sequentially here, each individual tool can still be async and wait efficiently on I/O.

import asyncio
from crewai_tools import BaseTool


class AsyncNewsTool(BaseTool):
    name: str = "Async News Tool"
    description: str = "Returns a fake news headline after an async delay."

    async def _run(self, topic: str) -> str:
        await asyncio.sleep(1)
        return f"Latest headline about {topic}: market sentiment remains stable"


async def run_tools():
    weather = AsyncWeatherTool()
    news = AsyncNewsTool()

    # gather runs both coroutines concurrently, so the total wait is
    # roughly max(2, 1) seconds rather than 2 + 1.
    weather_result, news_result = await asyncio.gather(
        weather._run("Nairobi"),
        news._run("banking"),
    )

    print(weather_result)
    print(news_result)


if __name__ == "__main__":
    asyncio.run(run_tools())

Testing It

Run the script once with your API key set in the environment and watch the verbose logs. You should see CrewAI select the tool and then wait for the async response before producing its final answer.

If you used asyncio.sleep, expect a visible delay of about 2 seconds from the weather tool and 1 second from the news tool when each is tested directly; run together under asyncio.gather, they overlap, so the combined run finishes in roughly 2 seconds rather than 3. That delay proves your code is actually awaiting instead of returning immediately.
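To convince yourself the delays overlap rather than add up, a stdlib-only timing check is enough. This sketch needs no CrewAI at all; fake_weather and fake_news are stand-ins for the tutorial's two tools, reusing their 2-second and 1-second delays:

```python
import asyncio
import time


async def fake_weather() -> str:
    await asyncio.sleep(2)  # same delay as the weather tool
    return "weather"


async def fake_news() -> str:
    await asyncio.sleep(1)  # same delay as the news tool
    return "news"


async def timed_gather() -> float:
    # Measure wall-clock time for both coroutines run concurrently.
    start = time.perf_counter()
    await asyncio.gather(fake_weather(), fake_news())
    return time.perf_counter() - start


elapsed = asyncio.run(timed_gather())
print(f"elapsed: {elapsed:.1f}s")  # ~2s: the longer delay, not the 3s sum
```

If you see roughly 3 seconds instead, one of the coroutines is blocking (for example, calling time.sleep instead of awaiting asyncio.sleep).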

If CrewAI returns an answer without calling the tool, tighten the task wording so it explicitly requires tool usage. Also confirm that your model provider is configured correctly through litellm and that OPENAI_API_KEY is present.
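One cheap way to rule out the missing-key case is a preflight check before kickoff. This is a plain-Python sketch, not a CrewAI API (the preflight name is mine); it only inspects the environment variable the tutorial assumes:

```python
import os


def preflight() -> bool:
    """Return True when the LLM credential this tutorial assumes is present."""
    return bool(os.environ.get("OPENAI_API_KEY"))


if __name__ == "__main__":
    if preflight():
        print("OPENAI_API_KEY is set; safe to kick off the crew")
    else:
        print("OPENAI_API_KEY is missing; export it before running")
```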

Next Steps

  • Learn how to wrap real HTTP APIs inside async tools using httpx.AsyncClient
  • Add retries and timeouts so slow external systems do not stall your agents
  • Move from sequential crews to more advanced multi-agent patterns where different agents own different async integrations
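As a starting point for the retries-and-timeouts bullet, here is a minimal stdlib-only sketch you could adapt inside a tool's async _run. The fetch_once coroutine, attempt counts, and delays are all illustrative, not CrewAI or httpx APIs; in a real tool, fetch_once would wrap something like an httpx.AsyncClient request:

```python
import asyncio


async def fetch_once(fail: bool) -> str:
    # Hypothetical upstream call; `fail` simulates a hung service.
    if fail:
        await asyncio.sleep(10)
    return "payload"


async def fetch_with_retries(attempts: int = 3, timeout: float = 0.1) -> str:
    last_error = None
    for attempt in range(attempts):
        try:
            # wait_for cancels the call if it exceeds the timeout.
            return await asyncio.wait_for(
                fetch_once(fail=attempt < attempts - 1), timeout=timeout
            )
        except asyncio.TimeoutError as exc:
            last_error = exc
            await asyncio.sleep(0.01 * (attempt + 1))  # small backoff between tries
    raise RuntimeError("all attempts timed out") from last_error


result = asyncio.run(fetch_with_retries())
print(result)
```

The timeout keeps one slow upstream from stalling the whole crew, and the backoff keeps retries from hammering a struggling service.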

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
