LangGraph Tutorial (Python): handling async tools for advanced developers
This tutorial shows you how to build a LangGraph agent that can call async tools correctly, without blocking the event loop or mixing sync and async execution paths. You need this when your agent talks to APIs, databases, queues, or internal services that already expose async clients and you want the graph to stay responsive under load.
What You'll Need

- Python 3.10+
- langgraph
- langchain-core
- langchain-openai
- An OpenAI API key set as OPENAI_API_KEY
- A terminal with pip
- Basic familiarity with LangGraph nodes, state, and edges

Install the packages:

```shell
pip install langgraph langchain-core langchain-openai
```
Step-by-Step
- Start with a typed state and an async tool.
The key detail is that the tool itself is async, and the node that calls it must also be async. If you wrap an async client inside a sync function, you end up fighting the runtime instead of using it properly.
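To see concretely why a sync wrapper backfires, here is a minimal, LangGraph-free sketch. The names `fetch_data`, `sync_wrapper`, and `async_node` are illustrative, not part of any library: the point is that `asyncio.run()` refuses to start a second event loop, which is exactly what a sync wrapper tries to do when called from inside an async node.

```python
import asyncio


async def fetch_data() -> str:
    await asyncio.sleep(0.01)  # stand-in for an async client call
    return "ok"


def sync_wrapper() -> str:
    # Anti-pattern: asyncio.run() fails when an event loop is already
    # running, which is the situation inside any async graph node.
    return asyncio.run(fetch_data())


async def async_node() -> str:
    try:
        return sync_wrapper()
    except RuntimeError as exc:
        return f"RuntimeError: {exc}"


print(asyncio.run(async_node()))
```

Running this prints the RuntimeError instead of "ok", which is the failure mode you hit when a sync helper sneaks into an async path.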
```python
import asyncio
from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


@tool
async def fetch_exchange_rate(base: str, quote: str) -> str:
    """Look up the exchange rate for a currency pair."""
    # @tool requires a docstring (it becomes the tool's description).
    await asyncio.sleep(0.2)  # simulate network latency
    rates = {("USD", "EUR"): "0.92", ("EUR", "USD"): "1.09"}
    return rates.get((base.upper(), quote.upper()), "1.00")
```
- Build an async model node that emits tool calls.
For this example, I use a small rule-based router instead of an LLM so the code runs as-is. In a real app, this node would call your chat model and return an AIMessage with tool calls attached.
```python
from langgraph.graph import StateGraph, START, END


async def assistant_node(state: State):
    last = state["messages"][-1]
    if isinstance(last, ToolMessage):
        # Second pass: turn raw tool output into a final answer.
        return {"messages": [AIMessage(content=f"The USD to EUR rate is {last.content}.")]}
    if isinstance(last, HumanMessage) and "rate" in last.content.lower():
        # Emit a structured tool call, as a real chat model would.
        return {
            "messages": [
                AIMessage(
                    content="Checking the exchange rate.",
                    tool_calls=[
                        {
                            "name": "fetch_exchange_rate",
                            "args": {"base": "USD", "quote": "EUR"},
                            "id": "tool_call_1",
                        }
                    ],
                )
            ]
        }
    return {"messages": [AIMessage(content="I only handle exchange-rate requests in this demo.")]}
```
- Add an async tool execution node.
This is where many graphs break: the tool runner must await the tool call result and then append a ToolMessage. Keep this node isolated so your graph can scale to multiple tools later.
```python
async def tools_node(state: State):
    last = state["messages"][-1]
    outputs = []
    for call in last.tool_calls:
        if call["name"] == "fetch_exchange_rate":
            # .ainvoke is the async entrypoint for a tool; await it directly.
            result = await fetch_exchange_rate.ainvoke(call["args"])
            outputs.append(
                ToolMessage(
                    content=result,
                    tool_call_id=call["id"],
                )
            )
    return {"messages": outputs}
```
- Wire the graph with conditional routing.
The assistant decides whether to stop or send control to tools. After tools run, control returns to the assistant so it can turn raw tool output into a final answer.
```python
def route_tools(state: State):
    last = state["messages"][-1]
    if isinstance(last, AIMessage) and getattr(last, "tool_calls", None):
        return "tools"
    return END


graph = StateGraph(State)
graph.add_node("assistant", assistant_node)
graph.add_node("tools", tools_node)
graph.add_edge(START, "assistant")
graph.add_conditional_edges("assistant", route_tools, {"tools": "tools", END: END})
graph.add_edge("tools", "assistant")
app = graph.compile()
```
- Run it with ainvoke, not invoke.
If your nodes are async, use the async entrypoint end-to-end. That keeps your execution model consistent and avoids hidden blocking calls when you start adding real network clients.
```python
import asyncio


async def main():
    result = await app.ainvoke(
        {"messages": [HumanMessage(content="What is the USD to EUR rate?")]}
    )
    for message in result["messages"]:
        print(type(message).__name__, "=>", message.content)


if __name__ == "__main__":
    asyncio.run(main())
```
Testing It
Run the script and confirm you see three message types in order: HumanMessage, AIMessage, and ToolMessage, followed by the final assistant response. If the graph hangs or throws an event-loop error, you probably called a sync function somewhere in the path or used invoke instead of ainvoke.
You should also test a non-tool input like “hello” to confirm the graph exits cleanly without entering the tools node. In production, add logging around each node so you can see which branch executed and how long each awaited call took.
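One lightweight way to get that per-node visibility, sketched here without LangGraph, is to wrap each async node in a timing decorator before registering it with add_node. The `timed` decorator and `demo_node` below are illustrative names, not library APIs:

```python
import asyncio
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("graph")


def timed(node):
    """Wrap an async node so every invocation logs its duration."""
    @functools.wraps(node)
    async def wrapper(state):
        start = time.perf_counter()
        try:
            return await node(state)
        finally:
            elapsed = time.perf_counter() - start
            logger.info("%s finished in %.3fs", node.__name__, elapsed)
    return wrapper


async def demo_node(state: dict) -> dict:
    await asyncio.sleep(0.05)  # stand-in for model or tool latency
    return {"messages": state["messages"] + ["done"]}


result = asyncio.run(timed(demo_node)({"messages": []}))
print(result)
```

Because the wrapper is itself async and only awaits the node, it adds observability without changing the execution model; you would register `timed(assistant_node)` instead of `assistant_node`.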
Next Steps
- Replace the rule-based assistant node with a real chat model using LangChain tool calling.
- Add parallel async tools and learn how to fan out work with separate nodes.
- Persist state with checkpoints so long-running conversations survive restarts.
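For the parallel-tools step, the core idea can be sketched with plain asyncio before touching the graph: instead of awaiting each tool call in a sequential loop, run them concurrently with asyncio.gather. The tool functions below are illustrative stand-ins, not part of the tutorial's graph:

```python
import asyncio


async def fetch_rate(base: str, quote: str) -> str:
    await asyncio.sleep(0.2)  # simulated network call
    return "0.92"


async def fetch_volume(pair: str) -> str:
    await asyncio.sleep(0.2)  # simulated network call
    return "1.3B"


async def run_tools_concurrently() -> list:
    # Both awaits overlap, so total latency is ~0.2s rather than ~0.4s,
    # and gather preserves the order of its arguments in the result.
    return await asyncio.gather(fetch_rate("USD", "EUR"), fetch_volume("USDEUR"))


results = asyncio.run(run_tools_concurrently())
print(results)
```

Inside a tools node you would build one coroutine per entry in `tool_calls` and gather them the same way, then zip the results back onto their `tool_call_id`s.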
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.