LangChain Tutorial (Python): handling async tools for intermediate developers
This tutorial shows you how to build a LangChain agent that can call async tools correctly in Python, without blocking the event loop or mixing sync and async execution paths. You need this when your tools hit external APIs, databases, queues, or internal services that already expose async clients.
What You'll Need
- Python 3.10+
- langchain
- langchain-openai
- langgraph
- python-dotenv for local env loading
- An OpenAI API key
- Basic familiarity with LangChain tools and agents
- A terminal and a virtual environment
Install the packages:
pip install langchain langchain-openai langgraph python-dotenv
Set your API key:
export OPENAI_API_KEY="your-key-here"
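If you prefer keeping the key in a local .env file, python-dotenv can load it at startup. A minimal sketch; the try/except fallback to the shell environment is an addition for machines where the package is not installed:

```python
import os

# Load OPENAI_API_KEY from a local .env file when python-dotenv is
# available; otherwise fall back to whatever the shell exported.
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

api_key = os.environ.get("OPENAI_API_KEY", "")
```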
Step-by-Step
- Start with a small async tool that simulates an I/O-bound call. In production this would be an HTTP request, database query, or service call; here we use asyncio.sleep so the pattern is obvious.
import asyncio
from typing import Annotated

from langchain_core.tools import tool

@tool
async def get_account_status(account_id: Annotated[str, "Customer account ID"]) -> str:
    """Fetch account status from an async backend."""
    await asyncio.sleep(1)  # stands in for a real async I/O call
    return f"Account {account_id} is active and in good standing."
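Before wiring the tool into an agent, it is worth smoke-testing the coroutine on its own under asyncio.run(). This sketch uses a plain coroutine standing in for the decorated tool (with the @tool version you would await get_account_status.ainvoke({"account_id": "12345"}) instead), and a shorter sleep so it finishes instantly:

```python
import asyncio

async def get_account_status(account_id: str) -> str:
    # Plain-coroutine stand-in for the @tool-decorated version above.
    await asyncio.sleep(0.01)
    return f"Account {account_id} is active and in good standing."

result = asyncio.run(get_account_status("12345"))
print(result)  # → Account 12345 is active and in good standing.
```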
- Build a chat model and bind the tool to it. The important part is that LangChain knows this tool can be awaited, so the agent can run it inside its async execution path.
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
)

tools = [get_account_status]
llm_with_tools = llm.bind_tools(tools)
- Create a simple agent loop using LangGraph’s prebuilt tool-calling agent. This is the cleanest way to handle async tools because the runtime awaits your coroutine tools for you, with no manual event-loop juggling. Note that create_react_agent binds the tools itself, so you pass it the base model rather than llm_with_tools.
import asyncio

from langgraph.prebuilt import create_react_agent

# create_react_agent binds the tools itself, so pass the base model.
agent = create_react_agent(llm, tools)

async def main():
    result = await agent.ainvoke(
        {"messages": [("user", "Check account 12345 status")]}
    )
    print(result["messages"][-1].content)

if __name__ == "__main__":
    asyncio.run(main())
- Add a second async tool so you can see parallel I/O behavior in a real workflow. This is where async starts paying off: multiple external calls can be awaited concurrently instead of blocking one after another.
from typing import Annotated

@tool
async def get_recent_payments(account_id: Annotated[str, "Customer account ID"]) -> str:
    """Fetch recent payment activity from an async backend."""
    await asyncio.sleep(1)
    return f"Account {account_id} has 2 payments in the last 30 days."

tools = [get_account_status, get_recent_payments]
llm_with_tools = llm.bind_tools(tools)
agent = create_react_agent(llm, tools)
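To see why the async path pays off, compare overlapping awaits against sequential ones. This standalone sketch uses plain coroutines standing in for the two tools (the 0.2-second sleeps are arbitrary stand-ins for real backend latency) and asyncio.gather to run them concurrently:

```python
import asyncio
import time

async def fetch_status(account_id: str) -> str:
    await asyncio.sleep(0.2)  # stands in for a real async backend call
    return f"Account {account_id} is active."

async def fetch_payments(account_id: str) -> str:
    await asyncio.sleep(0.2)
    return f"Account {account_id} has 2 recent payments."

async def main() -> tuple[str, str, float]:
    start = time.perf_counter()
    # Both awaits overlap: total wall time is ~0.2 s, not 0.4 s.
    status, payments = await asyncio.gather(
        fetch_status("12345"), fetch_payments("12345")
    )
    return status, payments, time.perf_counter() - start

status, payments, elapsed = asyncio.run(main())
```

Sequential awaits would take roughly the sum of both sleeps; gather lets the event loop interleave them, which is the same win the agent runtime gets when it awaits tool calls on its async path.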
- Invoke the agent with a prompt that requires both tools. The model should decide when to call each tool, and the runtime will await them correctly through .ainvoke().
async def main():
    result = await agent.ainvoke(
        {
            "messages": [
                (
                    "user",
                    "Give me a short summary for account 12345 including status and recent payments.",
                )
            ]
        }
    )
    for message in result["messages"]:
        if hasattr(message, "content") and message.content:
            print(f"{message.type}: {message.content}")

if __name__ == "__main__":
    asyncio.run(main())
Testing It
Run the script from your terminal and confirm you get a final assistant response after the tool calls complete. If you want to verify the async path specifically, increase the sleep time in both tools and watch that the app still completes normally under asyncio.run(). If you accidentally switch to invoke() on an async-only path, you’ll usually see event-loop or coroutine-related errors, which is exactly what this tutorial avoids.
A good sanity check is to add logging inside each tool and confirm they are called only when needed by the model. In production, replace sleep with real async clients like httpx.AsyncClient, asyncpg, or an SDK that exposes coroutine methods.
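A minimal version of that logging check, again with a plain coroutine standing in for the decorated tool:

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("tools")

async def get_account_status(account_id: str) -> str:
    # Log every invocation so you can confirm the model calls the
    # tool only when it actually needs the data.
    logger.info("get_account_status called for %s", account_id)
    await asyncio.sleep(0.01)
    return f"Account {account_id} is active."

out = asyncio.run(get_account_status("12345"))
```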
Next Steps
- Learn how to add retries and timeouts around async tools with tenacity or native client settings.
- Move from single-tool examples to multi-step workflows with LangGraph state.
- Add structured outputs so your agent returns JSON instead of free-form text for downstream systems.
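As a starting point for the timeout idea, the standard library alone gets you far: asyncio.wait_for cancels an awaited call that exceeds its budget. A sketch with a deliberately slow stand-in backend:

```python
import asyncio

async def slow_backend(account_id: str) -> str:
    await asyncio.sleep(5)  # simulates a hung external service
    return f"Account {account_id} data"

async def main() -> str:
    try:
        # Cancel the call if it takes longer than the 0.1 s budget.
        return await asyncio.wait_for(slow_backend("12345"), timeout=0.1)
    except asyncio.TimeoutError:
        return "timed out"

result = asyncio.run(main())  # → "timed out"
```

Wrapping a tool body this way keeps one hung backend from stalling the whole agent turn; retries on top of it are where tenacity comes in.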
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.