LlamaIndex Tutorial (Python): handling async tools for intermediate developers
This tutorial shows how to wire async tools into a LlamaIndex agent in Python without blocking your event loop. You need this when your tools call APIs, databases, or internal services that already expose async interfaces and you want the agent to stay responsive under load.
What You'll Need
- Python 3.10+
- llama-index
- An OpenAI API key
- nest_asyncio, only if you plan to run this in a notebook
- Basic familiarity with async def, await, and LlamaIndex agents
Install the packages:
```shell
pip install llama-index openai
```
Set your API key:
```shell
export OPENAI_API_KEY="your-key-here"
```
Step-by-Step
- Start with an async function that does real work.
LlamaIndex can wrap async callables as tools, but the function itself needs to be truly async. Here we simulate an external service call with asyncio.sleep, which behaves like network latency.
```python
import asyncio

async def get_policy_status(policy_id: str) -> str:
    await asyncio.sleep(1)
    return f"Policy {policy_id} is active and paid through 2026-01-01."
```
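As a quick sanity check (this snippet is illustrative, not part of the agent code), you can confirm the callable is genuinely a coroutine function; a plain def would run synchronously and block the loop whenever the agent invokes it. The shortened sleep here is only so the check runs fast.

```python
import asyncio
import inspect

async def get_policy_status(policy_id: str) -> str:
    await asyncio.sleep(0.1)  # shortened latency for a quick check
    return f"Policy {policy_id} is active and paid through 2026-01-01."

# A truly async callable is a coroutine function; a plain `def` is not.
assert inspect.iscoroutinefunction(get_policy_status)

# You can still exercise it on its own before handing it to an agent.
result = asyncio.run(get_policy_status("12345"))
print(result)
```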
- Wrap that async function as a LlamaIndex tool.
Use FunctionTool.from_defaults so the agent can call it by name. The important part is that recent LlamaIndex releases detect that fn is a coroutine function and preserve its async behavior instead of forcing it through a sync wrapper; if your installed version does not, pass the coroutine to the async_fn parameter instead.
```python
from llama_index.core.tools import FunctionTool

policy_status_tool = FunctionTool.from_defaults(
    fn=get_policy_status,
    name="get_policy_status",
    description="Get the current status of an insurance policy by policy ID.",
)
```
- Build an agent that can call the tool asynchronously.
Use OpenAIAgent.from_tools and then invoke it with achat. That keeps the whole request path async, which matters when you are chaining multiple tool calls or serving requests from FastAPI.
```python
import os

from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])

agent = OpenAIAgent.from_tools(
    tools=[policy_status_tool],
    llm=llm,
    verbose=True,
)
```
- Call the agent from an async entry point.
Do not use .chat() here if you want end-to-end async behavior. Use await agent.achat(...) so your app can keep handling other work while the model and tool execute.
```python
import asyncio

async def main():
    response = await agent.achat(
        "Check policy 12345 and tell me whether it is active."
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
```
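One caveat worth spelling out, since the prerequisites list nest_asyncio: asyncio.run() raises RuntimeError when an event loop is already running, which is the normal state in Jupyter. Here is a stdlib-only sketch of the working pattern, with the agent call stubbed out as a hypothetical slow_achat:

```python
import asyncio

async def slow_achat(prompt: str) -> str:
    # Hypothetical stand-in for agent.achat(prompt).
    await asyncio.sleep(0.1)
    return f"answer to: {prompt}"

# In a plain script, asyncio.run drives the coroutine to completion:
result = asyncio.run(slow_achat("Check policy 12345"))
print(result)

# In a notebook, a loop is already running, so asyncio.run raises
# RuntimeError. There you `await slow_achat(...)` directly in a cell,
# or call nest_asyncio.apply() first to allow nested asyncio.run calls.
```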
- Add a second async tool to prove concurrency works cleanly.
This is where async starts paying off. If one tool fetches policy data and another fetches claims history, both can be awaited without blocking each other inside your service layer.
```python
async def get_claim_count(policy_id: str) -> str:
    await asyncio.sleep(1)
    return f"Policy {policy_id} has 2 open claims."

claim_count_tool = FunctionTool.from_defaults(
    fn=get_claim_count,
    name="get_claim_count",
    description="Get the number of open claims for a policy.",
)

agent = OpenAIAgent.from_tools(
    tools=[policy_status_tool, claim_count_tool],
    llm=llm,
    verbose=True,
)
```
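To see why this matters independent of any LLM call, here is a stdlib-only sketch (no LlamaIndex involved) that awaits both tool functions concurrently with asyncio.gather. Because the two simulated one-second latencies overlap, total wall time is roughly one second, not two:

```python
import asyncio
import time

async def get_policy_status(policy_id: str) -> str:
    await asyncio.sleep(1)  # simulated network latency
    return f"Policy {policy_id} is active and paid through 2026-01-01."

async def get_claim_count(policy_id: str) -> str:
    await asyncio.sleep(1)
    return f"Policy {policy_id} has 2 open claims."

async def lookup(policy_id: str) -> list[str]:
    # gather schedules both coroutines at once; each sleep overlaps the other.
    return await asyncio.gather(
        get_policy_status(policy_id),
        get_claim_count(policy_id),
    )

start = time.perf_counter()
status, claims = asyncio.run(lookup("12345"))
elapsed = time.perf_counter() - start
print(f"{status}\n{claims}\n(took {elapsed:.2f}s)")
```

This is the same overlap your service layer gets for free once every tool in the path is truly async.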
Testing It
Run the script and ask for a policy lookup. In verbose mode, you should see the agent choose a tool, execute it, then compose a final answer from the result.
If you added both tools, try a prompt that requires more than one lookup, such as asking for policy status plus claim count. The agent should call both tools as needed without freezing your process.
If you are using this inside FastAPI or another async web framework, put the agent call inside an endpoint and confirm requests remain responsive under concurrent load. A blocked event loop usually means somewhere in your path you accidentally used sync code like .chat() or a synchronous SDK client.
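You can observe that failure mode without any web framework. In this stdlib sketch (the slow tools are stubbed by sleeps), a heartbeat task runs alongside the call: a healthy async path keeps it ticking, while a sync sleep silently starves the loop and the heartbeat never fires.

```python
import asyncio
import time

async def heartbeat(ticks: list) -> None:
    # A healthy event loop keeps this ticking every 100 ms.
    while True:
        ticks.append(time.perf_counter())
        await asyncio.sleep(0.1)

async def blocking_tool() -> None:
    time.sleep(0.5)  # WRONG: a sync sleep starves every other task

async def async_tool() -> None:
    await asyncio.sleep(0.5)  # RIGHT: yields control while waiting

async def measure(tool) -> int:
    ticks: list = []
    hb = asyncio.create_task(heartbeat(ticks))
    await tool()
    hb.cancel()
    return len(ticks)

blocked_ticks = asyncio.run(measure(blocking_tool))
healthy_ticks = asyncio.run(measure(async_tool))
print(f"blocked loop ticks: {blocked_ticks}, healthy loop ticks: {healthy_ticks}")
```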
Next Steps
- Replace the simulated asyncio.sleep calls with real database queries or HTTP clients such as httpx.AsyncClient
- Add structured outputs so downstream services do not have to parse free-form text
- Move from a single-agent setup to a router or workflow when tool selection becomes more complex
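The first of those upgrades deserves a sketch. An async client like httpx.AsyncClient can be awaited directly inside the tool, but a synchronous driver must be pushed off the event loop. Here sqlite3 stands in for whatever your policy store actually is (the table and names are made up for illustration), wrapped with asyncio.to_thread so it never blocks the loop:

```python
import asyncio
import sqlite3

def _query_policy(policy_id: str) -> str:
    # Synchronous sqlite3 lookup; an in-memory DB stands in for a real store.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE policies (id TEXT, status TEXT)")
    conn.execute("INSERT INTO policies VALUES (?, ?)", (policy_id, "active"))
    row = conn.execute(
        "SELECT status FROM policies WHERE id = ?", (policy_id,)
    ).fetchone()
    conn.close()
    return f"Policy {policy_id} is {row[0]}."

async def get_policy_status(policy_id: str) -> str:
    # to_thread keeps the blocking driver off the event loop; with a natively
    # async driver (or httpx.AsyncClient for HTTP) you would await it directly.
    return await asyncio.to_thread(_query_policy, policy_id)

result = asyncio.run(get_policy_status("12345"))
print(result)
```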
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit