LangGraph Tutorial (Python): handling async tools for beginners
This tutorial shows how to wire async tools into a LangGraph agent in Python and keep the whole flow runnable end to end. You need this when your tool calls hit I/O-bound systems like HTTP APIs, databases, or internal services and you don’t want to block the event loop.
What You'll Need
- Python 3.10+
- langgraph
- langchain-core
- langchain-openai
- openai
- An OpenAI API key set as OPENAI_API_KEY
- A terminal with pip
Install the packages:
pip install langgraph langchain-core langchain-openai openai
Set your API key:
export OPENAI_API_KEY="your-key-here"
Step-by-Step
- Start by defining an async tool. In LangGraph, tools can be regular functions or async def functions; for anything that waits on network calls, async is the right choice.
import asyncio
from langchain_core.tools import tool
@tool
async def get_account_balance(account_id: str) -> str:
    """Fetch a mock account balance for an account."""
    await asyncio.sleep(1)
    return f"Account {account_id} has a balance of $12,450.75"
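To see why async is the right choice here, you can compare sequential awaits with concurrent ones using nothing but the standard library. This is a standalone sketch, not part of the tutorial code: the mock_tool_call function below simulates an I/O-bound tool with asyncio.sleep.

```python
import asyncio
import time

async def mock_tool_call(name: str) -> str:
    # Simulate an I/O-bound call (HTTP request, DB query, etc.)
    await asyncio.sleep(0.2)
    return f"{name}: done"

async def main() -> None:
    start = time.perf_counter()
    # Sequential: each call waits for the previous one, total ~0.4s
    await mock_tool_call("balance")
    await mock_tool_call("transactions")
    sequential = time.perf_counter() - start

    start = time.perf_counter()
    # Concurrent: both calls overlap on the event loop, total ~0.2s
    await asyncio.gather(
        mock_tool_call("balance"),
        mock_tool_call("transactions"),
    )
    concurrent = time.perf_counter() - start
    print(f"sequential: {sequential:.2f}s, concurrent: {concurrent:.2f}s")

asyncio.run(main())
```

The same effect carries over to the agent: async tools let the event loop make progress on other work while a tool waits on the network.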
- Create a chat model and bind the tool to it. This lets the model decide when to call the tool, while LangGraph handles the execution flow.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
llm_with_tools = llm.bind_tools([get_account_balance])
- Build a simple agent node that calls the model and returns messages. The key detail is that the node itself is async, because the model call is async and may trigger async tool execution later in the graph.
from typing import Annotated, TypedDict
from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

async def agent_node(state: State):
    response = await llm_with_tools.ainvoke(state["messages"])
    return {"messages": [response]}
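The add_messages annotation is why agent_node can return only its new messages: LangGraph merges them into the running list instead of replacing it. A simplified, stdlib-only sketch of that merge behavior (the real reducer also handles message IDs and updates, which this stand-in skips):

```python
def append_messages(existing: list, new: list) -> list:
    # Simplified stand-in for LangGraph's add_messages reducer:
    # merge a node's returned messages into the accumulated state.
    return existing + new

state = {"messages": ["user: What is my balance?"]}
update = {"messages": ["assistant: calling get_account_balance..."]}

state["messages"] = append_messages(state["messages"], update["messages"])
print(state["messages"])  # both messages, in order
```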
- Add a tool node and connect the graph with a conditional edge. This is the standard LangGraph pattern for tool calling: if the assistant asks for a tool, route to the tool node; otherwise stop.
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition
tool_node = ToolNode([get_account_balance])
builder = StateGraph(State)
builder.add_node("agent", agent_node)
builder.add_node("tools", tool_node)
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", tools_condition)
builder.add_edge("tools", "agent")
graph = builder.compile()
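Conceptually, tools_condition just inspects the last message: if it carries pending tool calls, route to the node named "tools"; otherwise end the run. A hypothetical stdlib-only sketch of that decision (FakeMessage and route_after_agent are illustrative names, not LangGraph APIs):

```python
from dataclasses import dataclass, field

@dataclass
class FakeMessage:
    content: str
    tool_calls: list = field(default_factory=list)

def route_after_agent(messages: list[FakeMessage]) -> str:
    # Mirrors the idea behind tools_condition: check the last AI message
    # for pending tool calls and pick the next node accordingly.
    last = messages[-1]
    return "tools" if last.tool_calls else "__end__"

print(route_after_agent([FakeMessage("Balance is $12,450.75")]))  # __end__
print(route_after_agent(
    [FakeMessage("", tool_calls=[{"name": "get_account_balance"}])]
))  # tools
```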
- Run the graph with an async entry point. Use a user message that clearly asks for data from the tool so you can see the full loop: model call, tool call, then final answer.
import asyncio
from langchain_core.messages import HumanMessage
async def main():
    result = await graph.ainvoke(
        {"messages": [HumanMessage(content="What is the balance for account 12345?")]}
    )
    for message in result["messages"]:
        print(f"{message.__class__.__name__}: {message.content}")

if __name__ == "__main__":
    asyncio.run(main())
- If you want to handle multiple async tools, just add them to both bind_tools() and ToolNode. The graph pattern stays the same; only your tool list grows.
@tool
async def get_recent_transactions(account_id: str) -> str:
    """Return mock recent transactions."""
    await asyncio.sleep(1)
    return f"Account {account_id}: -$42.10 at Grocery Mart, -$18.99 at Fuel Stop"
llm_with_tools = llm.bind_tools([get_account_balance, get_recent_transactions])
tool_node = ToolNode([get_account_balance, get_recent_transactions])
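Conceptually, the tool node looks each requested tool up by name and awaits the async ones. A simplified stand-alone sketch of that dispatch, with the added benefit that multiple tool calls can run concurrently (all names here are illustrative, not the ToolNode internals):

```python
import asyncio

async def get_balance(account_id: str) -> str:
    await asyncio.sleep(0.1)
    return f"balance for {account_id}"

async def get_transactions(account_id: str) -> str:
    await asyncio.sleep(0.1)
    return f"transactions for {account_id}"

# Registry keyed by tool name, like a tool node builds from its tool list
TOOLS = {"get_balance": get_balance, "get_transactions": get_transactions}

async def run_tool_calls(tool_calls: list[dict]) -> list[str]:
    # Execute every requested tool call concurrently instead of one by one
    coros = [TOOLS[call["name"]](**call["args"]) for call in tool_calls]
    return await asyncio.gather(*coros)

results = asyncio.run(run_tool_calls([
    {"name": "get_balance", "args": {"account_id": "12345"}},
    {"name": "get_transactions", "args": {"account_id": "12345"}},
]))
print(results)
```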
Testing It
Run the script from your terminal and confirm you see at least one assistant message with a tool call followed by a final response after the tool output comes back. If you only see one assistant message and no tool execution, your prompt probably wasn’t specific enough to trigger a tool call.
If you want to verify async behavior, add another await asyncio.sleep() inside a second tool and compare how long each run takes. You should also test with invalid input like an empty account ID so you can see how your own validation logic behaves before this hits production.
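One way to add that validation (a hypothetical pattern, shown here as a plain async function without the @tool decorator so the sketch stays self-contained) is to check inputs before doing any I/O and return a readable error string the model can relay:

```python
import asyncio

async def get_account_balance(account_id: str) -> str:
    # Validate before any I/O; a clear error string lets the model
    # explain the problem instead of the graph raising mid-run.
    if not account_id.strip():
        return "Error: account_id must be a non-empty string."
    await asyncio.sleep(0.1)
    return f"Account {account_id} has a balance of $12,450.75"

print(asyncio.run(get_account_balance("")))       # the validation message
print(asyncio.run(get_account_balance("12345")))  # the normal result
```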
Next Steps
- Add structured outputs with Pydantic models instead of plain strings.
- Add retries and timeout handling around external API tools.
- Learn how to persist state with LangGraph checkpointers for multi-turn workflows.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.