# LangGraph Tutorial (Python): handling async tools for intermediate developers
This tutorial shows you how to build a LangGraph agent that calls async tools correctly, without blocking the event loop or mixing sync and async execution paths. You need this when your tools hit databases, HTTP APIs, or internal services that already expose `async def` functions.
## What You'll Need

- Python 3.10+
- `langgraph`
- `langchain-core`
- `langchain-openai`
- An OpenAI API key in `OPENAI_API_KEY`
- Basic familiarity with `StateGraph`, `MessagesState`, and tool calling
- Optional: `python-dotenv` if you want to load environment variables from a `.env` file
Install the packages:

```bash
pip install langgraph langchain-core langchain-openai python-dotenv
```
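If you go the `python-dotenv` route, load the key before constructing the model. A minimal sketch, assuming a `.env` file next to your script containing an `OPENAI_API_KEY=...` line:

```python
from dotenv import load_dotenv

# Reads .env from the working directory and exports its entries,
# including OPENAI_API_KEY, into os.environ.
load_dotenv()
```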
## Step-by-Step

- Start with a minimal graph state and an async tool. The important part is that the tool itself is declared with `async def`, because LangGraph will await it when invoked through the tool node. Note that `@tool` requires a docstring (or an explicit description), so the examples below include one.
```python
import asyncio

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition


@tool
async def fetch_account_balance(account_id: str) -> str:
    """Look up the current balance for a customer account."""
    await asyncio.sleep(0.2)  # stand-in for a real async DB or API call
    return f"Account {account_id} balance is $12,450.32"


tools = [fetch_account_balance]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
```
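Optionally, sanity-check the tool on its own before wiring the graph. Tools created with `@tool` from an async function are awaited through `ainvoke` with a dict of arguments; this smoke test is my own addition to the tutorial flow:

```python
async def smoke_test():
    # ainvoke takes the tool's arguments as a dict keyed by parameter name.
    print(await fetch_account_balance.ainvoke({"account_id": "12345"}))


asyncio.run(smoke_test())
```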
- Build the agent node and the tool node separately. The model node decides whether to call a tool, and the tool node executes it asynchronously before returning control to the model. Declaring the agent node with `async def` and calling `llm.ainvoke` keeps the model call non-blocking as well.
```python
async def agent_node(state: MessagesState):
    # async def + ainvoke keeps the model call off the blocking path,
    # matching the async tools.
    messages = [
        SystemMessage(content="You are a bank support assistant."),
        *state["messages"],
    ]
    response = await llm.ainvoke(messages)
    return {"messages": [response]}


tool_node = ToolNode(tools)

graph = StateGraph(MessagesState)
graph.add_node("agent", agent_node)
graph.add_node("tools", tool_node)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", tools_condition)
graph.add_edge("tools", "agent")
app = graph.compile()
```
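`tools_condition` routes to the `tools` node when the last AI message contains tool calls and ends the run otherwise. If you later need custom routing, a hand-rolled equivalent looks roughly like this; it's a sketch of the behavior, not the library's exact source:

```python
from langgraph.graph import END


def route_after_agent(state: MessagesState) -> str:
    last = state["messages"][-1]
    # AIMessage.tool_calls is non-empty when the model asked for a tool.
    if getattr(last, "tool_calls", None):
        return "tools"
    return END


# Drop-in replacement:
# graph.add_conditional_edges("agent", route_after_agent)
```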
- Run the graph from an async entrypoint if you want full async behavior end-to-end. This is the cleanest path when your app already uses FastAPI, asyncio workers, or any other event-loop driven runtime.
```python
async def main():
    result = await app.ainvoke(
        {
            "messages": [
                HumanMessage(content="What's the balance for account 12345?")
            ]
        }
    )
    for message in result["messages"]:
        print(f"{type(message).__name__}: {message.content}")


if __name__ == "__main__":
    asyncio.run(main())
```
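In a FastAPI app there is no `asyncio.run` at all; the framework already owns the event loop, so you simply `await` the graph inside a route handler. A minimal sketch, assuming the compiled graph above is importable as `app`; the route path and request model are illustrative, not part of the tutorial's code:

```python
from fastapi import FastAPI
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

api = FastAPI()  # named `api` to avoid clashing with the compiled graph `app`


class AskRequest(BaseModel):
    question: str


@api.post("/ask")
async def ask(req: AskRequest):
    # FastAPI awaits this handler on its own event loop; no asyncio.run needed.
    result = await app.ainvoke({"messages": [HumanMessage(content=req.question)]})
    return {"answer": result["messages"][-1].content}
```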
- If you need multiple async tools, add them to the same list and let LangGraph route tool calls automatically. This is where LangGraph becomes useful: one model node can decide among several tools without you writing custom dispatch logic. Remember to rebuild the compiled graph afterwards, as shown after the snippet.
```python
@tool
async def lookup_policy_status(policy_id: str) -> str:
    """Check whether an insurance policy is active and paid up."""
    await asyncio.sleep(0.1)
    return f"Policy {policy_id} is active and paid through 2026-01-31"


@tool
async def get_claim_status(claim_id: str) -> str:
    """Look up the processing status of an insurance claim."""
    await asyncio.sleep(0.1)
    return f"Claim {claim_id} is under review"


tools = [fetch_account_balance, lookup_policy_status, get_claim_status]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
tool_node = ToolNode(tools)
```
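One subtlety if you ran the earlier snippets top to bottom: rebinding `llm` takes effect because `agent_node` reads the global at call time, but the `app` compiled in step 2 still holds the original `ToolNode`. Rebuild and recompile so the `tools` node sees the expanded list:

```python
# Rebuild the graph so the "tools" node uses the expanded ToolNode.
graph = StateGraph(MessagesState)
graph.add_node("agent", agent_node)
graph.add_node("tools", tool_node)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", tools_condition)
graph.add_edge("tools", "agent")
app = graph.compile()
```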
- Use the graph in a production-style wrapper that keeps sync and async boundaries explicit. If your app is asynchronous, call `ainvoke` and keep your tools async too. If your app is synchronous, don't call `invoke` directly: tools defined only with `async def` have no sync implementation to fall back on, so drive the async path through `asyncio.run` instead.
```python
def run_sync(question: str):
    # Async-only tools can't execute through app.invoke(), so spin up a
    # fresh event loop and reuse the async path. Don't call this from
    # inside an already-running event loop.
    return asyncio.run(run_async(question))


async def run_async(question: str):
    return await app.ainvoke({"messages": [HumanMessage(content=question)]})


if __name__ == "__main__":
    # Sync path for scripts
    result = run_sync("Check claim status for claim 7788")
    print(result["messages"][-1].content)
```
## Testing It
Run the script with a question that clearly requires a tool call, like “What’s the balance for account 12345?” or “Check claim status for claim 7788.” You should see at least one AI message with a tool call followed by a final answer that includes the tool result.
If nothing happens after the first model response, inspect whether your model supports tool calling and whether `bind_tools()` was applied to the same list of tools passed into `ToolNode`. Also confirm that you are using `app.ainvoke(...)` when calling from an async context.
For debugging, print every message in the returned state and check for `ToolMessage` entries between AI messages. That tells you LangGraph actually executed the async tool instead of skipping straight to a final response.
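A quick way to do that inspection, reusing the `run_sync` wrapper from step 5 (the output format here is just a suggestion):

```python
result = run_sync("What's the balance for account 12345?")
for message in result["messages"]:
    # AIMessage carries tool_calls when the model requested a tool;
    # ToolMessage carries the string the async tool returned.
    calls = getattr(message, "tool_calls", None) or []
    print(f"{type(message).__name__} | tool_calls={len(calls)} | {message.content!r}")
```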
## Next Steps

- Add typed state with `TypedDict` when you need extra fields beyond messages; see the sketch after this list.
- Learn how to use retries and error handling around external API tools.
- Extend this pattern with memory or checkpoints so long-running conversations survive restarts.
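For the first item, a custom state schema keeps the message-appending reducer and adds your own fields. A minimal sketch, with `account_id` as a hypothetical extra field; `add_messages` is the same reducer `MessagesState` uses internally:

```python
from typing import Annotated

from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages
from typing_extensions import TypedDict


class SupportState(TypedDict):
    # Appends new messages instead of overwriting the list.
    messages: Annotated[list[AnyMessage], add_messages]
    # Hypothetical extra field carried across turns.
    account_id: str


# Then build the graph against the new schema: StateGraph(SupportState)
```

For checkpointing, compiling with `graph.compile(checkpointer=MemorySaver())` from `langgraph.checkpoint.memory` and passing a `thread_id` in the `configurable` config is the usual starting point.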
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.