LangChain Tutorial (Python): handling async tools for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build a LangChain agent in Python that can call async tools correctly, without blocking your event loop or mixing sync and async code incorrectly. You need this when your tools hit APIs, databases, or internal services that already use asyncio, because the wrong setup will either fail at runtime or silently serialize work you expected to run concurrently.

What You'll Need

  • Python 3.10+
  • langchain
  • langchain-openai
  • An OpenAI API key
  • An OpenAI-compatible model that supports tool calling
  • Basic familiarity with asyncio and LangChain agents

Install the packages:

pip install langchain langchain-openai openai

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with an async tool. The key detail is that the tool itself must be declared with async def, and the body should use non-blocking I/O patterns.
import asyncio

from langchain_core.tools import tool

@tool
async def get_account_balance(account_id: str) -> str:
    """Fetch the balance for a bank account."""
    await asyncio.sleep(1)
    return f"Account {account_id} has a balance of $12,450.00"
  2. Build a chat model that supports tool calling. Then bind the tool to the model so LangChain can decide when to call it.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
llm_with_tools = llm.bind_tools([get_account_balance])
  3. Create a simple async agent loop using LangChain's core message types. This is the cleanest beginner-friendly pattern because you control the execution flow and can inspect every step.
import asyncio
from langchain_core.messages import HumanMessage, ToolMessage

async def main():
    messages = [HumanMessage(content="What is the balance for account 12345?")]

    ai_msg = await llm_with_tools.ainvoke(messages)
    messages.append(ai_msg)

    for tool_call in ai_msg.tool_calls:
        if tool_call["name"] == "get_account_balance":
            result = await get_account_balance.ainvoke(tool_call["args"])
            messages.append(
                ToolMessage(
                    content=result,
                    tool_call_id=tool_call["id"],
                )
            )

    final_msg = await llm_with_tools.ainvoke(messages)
    print(final_msg.content)

if __name__ == "__main__":
    asyncio.run(main())
  4. If you want multiple async tools, keep them separate and let the model choose between them. This is where async starts paying off, because each tool can do network work without blocking other coroutines.
@tool
async def lookup_customer_status(customer_id: str) -> str:
    """Check whether a customer is active."""
    await asyncio.sleep(1)
    return f"Customer {customer_id} is active"

@tool
async def get_policy_number(customer_id: str) -> str:
    """Return the policy number for a customer."""
    await asyncio.sleep(1)
    return f"Policy for customer {customer_id}: POL-88421"

llm_with_more_tools = llm.bind_tools(
    [get_account_balance, lookup_customer_status, get_policy_number]
)
  5. Run multiple tool calls concurrently when the model requests more than one. This is important for performance, especially when tools are independent API calls.
import asyncio
from langchain_core.messages import HumanMessage, ToolMessage

async def run_tools(tool_calls):
    # Map tool names to tool objects so every requested call is dispatched
    # and the results stay aligned one-to-one with tool_calls.
    registry = {
        "get_account_balance": get_account_balance,
        "lookup_customer_status": lookup_customer_status,
        "get_policy_number": get_policy_number,
    }
    tasks = [registry[call["name"]].ainvoke(call["args"]) for call in tool_calls]
    return await asyncio.gather(*tasks)

async def demo():
    messages = [HumanMessage(content="Check account 12345 and customer status for C-99.")]
    ai_msg = await llm_with_more_tools.ainvoke(messages)
    # The AIMessage carrying the tool calls must be in the history
    # before the ToolMessages, or the follow-up request will fail.
    messages.append(ai_msg)
    results = await run_tools(ai_msg.tool_calls)

    for call, result in zip(ai_msg.tool_calls, results):
        messages.append(ToolMessage(content=result, tool_call_id=call["id"]))

    final_msg = await llm_with_more_tools.ainvoke(messages)
    print(final_msg.content)

if __name__ == "__main__":
    asyncio.run(demo())

Testing It

Run the first script and confirm you get a natural-language response that includes the balance value returned by the async tool. If you see an error about missing event loops or coroutine objects not being awaited, your tool or entry point is still being treated like sync code.

Next, test with two independent tools in one prompt and verify both are called and both results come back before the final response. If you want to validate concurrency, add timestamps around each await asyncio.sleep(1) call and confirm total runtime stays close to one second instead of two.
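To validate the concurrency claim without involving the model at all, you can time two stand-in async calls directly. This sketch uses plain asyncio with a hypothetical fake_tool coroutine in place of the real tools:

```python
import asyncio
import time

async def fake_tool(name: str) -> str:
    # Stand-in for an async tool doing one second of network I/O.
    await asyncio.sleep(1)
    return f"{name} done"

async def timed_run() -> float:
    start = time.perf_counter()
    # Both calls run concurrently, so total runtime stays near the
    # slowest single call rather than the sum of both.
    await asyncio.gather(fake_tool("balance"), fake_tool("status"))
    return time.perf_counter() - start

elapsed = asyncio.run(timed_run())
print(f"total: {elapsed:.2f}s")
```

If the two calls were accidentally awaited one after the other, the total would land close to two seconds instead.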

Also test failure paths by raising an exception inside one async tool and checking how your app handles it before sending anything back to the user. In production, wrap each external call with timeouts and retries so one slow dependency does not stall the whole agent.
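As a starting point for that hardening, here is a minimal sketch of a timeout-plus-retry wrapper built on asyncio.wait_for. The flaky_call coroutine and the timeout and retry values are illustrative stand-ins, not part of LangChain:

```python
import asyncio

async def flaky_call(delay: float) -> str:
    # Stand-in for an external dependency that may respond slowly.
    await asyncio.sleep(delay)
    return "ok"

async def call_with_timeout(delay: float, timeout: float = 0.5, retries: int = 2) -> str:
    # Retry a few times, then return a safe fallback so one slow
    # dependency cannot stall the whole agent loop.
    for attempt in range(retries + 1):
        try:
            return await asyncio.wait_for(flaky_call(delay), timeout=timeout)
        except asyncio.TimeoutError:
            if attempt == retries:
                return "error: dependency timed out"
    return "error: unreachable"

fast = asyncio.run(call_with_timeout(0.1))
slow = asyncio.run(call_with_timeout(2.0))
print(fast, slow)
```

The fallback string can then go back to the model as a ToolMessage, so it can explain the failure to the user instead of crashing the run.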

Next Steps

  • Learn how to use Runnable pipelines with .ainvoke() and .abatch() for cleaner async orchestration.
  • Add timeout handling with asyncio.wait_for() around external API calls.
  • Move from manual message loops to LangGraph when your agent needs branching, retries, or stateful workflows.


By Cyprian Aarons, AI Consultant at Topiax.
