LangGraph Tutorial (TypeScript): handling async tools for advanced developers
This tutorial shows how to build a LangGraph workflow in TypeScript that can call async tools, wait for their results, and continue the conversation without blocking your agent logic. You need this when your agent has to hit real services like databases, HTTP APIs, queues, or internal SDKs that return promises instead of instant values.
What You'll Need

- Node.js 18+
- TypeScript 5+
- `@langchain/langgraph`
- `@langchain/core`
- `zod`
- An LLM API key for a chat model supported by LangChain
- Optional: `OPENAI_API_KEY` if you use OpenAI models

Install the packages:

```shell
npm install @langchain/langgraph @langchain/core zod @langchain/openai
```
Step-by-Step

- Start with a graph that can route between the model and tools. The important part here is that the tool node can await async work before returning messages back into the graph.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { Annotation, END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage, HumanMessage, ToolMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// An async tool: the 300ms delay simulates a slow backend call.
const getAccountBalance = tool(
  async ({ accountId }) => {
    await new Promise((r) => setTimeout(r, 300));
    return `Account ${accountId} balance is $12,450.33`;
  },
  {
    name: "get_account_balance",
    description: "Fetch a bank account balance by account ID",
    schema: z.object({
      accountId: z.string(),
    }),
  }
);

const State = Annotation.Root({
  messages: Annotation<(HumanMessage | AIMessage | ToolMessage)[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});
```
- Bind the tool to the model and make the agent node decide whether to call it. This pattern keeps the LLM responsible for reasoning while LangGraph handles execution flow.
```typescript
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).bindTools([getAccountBalance]);

// The agent node: the LLM decides whether to answer directly or emit a tool call.
async function agentNode(state: typeof State.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}
```
- Add a tool executor node that can handle async tools safely. Do not assume every tool returns immediately; always await the result and convert it into a `ToolMessage`.
```typescript
async function toolNode(state: typeof State.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  if (!(lastMessage instanceof AIMessage) || !lastMessage.tool_calls?.length) {
    return { messages: [] };
  }
  const toolMessages = [];
  for (const call of lastMessage.tool_calls) {
    // Await the async tool; invoke() resolves once the promise settles.
    const result = await getAccountBalance.invoke(call.args);
    toolMessages.push(
      new ToolMessage({
        content: result,
        // call.id is typed as optional, so fall back to an empty string.
        tool_call_id: call.id ?? "",
      })
    );
  }
  return { messages: toolMessages };
}
```
- Wire the graph with conditional routing so the agent loops back after tools finish. This is the part most people miss: after a tool call, you must send control back to the model so it can use the fresh data.
```typescript
// Route back to the tool node while the model keeps emitting tool calls.
function shouldContinue(state: typeof State.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  if (lastMessage instanceof AIMessage && lastMessage.tool_calls?.length) {
    return "tools";
  }
  return END;
}

const graph = new StateGraph(State)
  .addNode("agent", agentNode)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue, {
    tools: "tools",
    [END]: END,
  })
  .addEdge("tools", "agent")
  .compile();
```
- Run it with a user message and inspect the final response. The graph will call the async tool first, then re-enter the model with a `ToolMessage` in state.
```typescript
const result = await graph.invoke({
  messages: [new HumanMessage("What is the balance for account A-10291?")],
});

const final = result.messages[result.messages.length - 1];
console.log(final.content);
```
Testing It
Run this file with `ts-node`, `tsx`, or your normal TypeScript build pipeline. If everything is wired correctly, you should see the model ask for the account balance tool, wait for the async result, and then answer with the balance in natural language.
To verify routing works, temporarily add a `console.log` inside `toolNode` and confirm it only runs when the model emits a tool call. Also test a prompt that does not require tools; the graph should end after the first agent pass.
If you want stronger validation, mock `getAccountBalance.invoke()` in a unit test and assert that:
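The routing decision itself can also be sanity-checked without spinning up an LLM. The sketch below reimplements the logic of `shouldContinue` against a minimal structural message type; the `Msg` type, `role` field, and `routeAfterAgent` name are illustrative stand-ins, not LangChain's actual API:

```typescript
// Standalone sketch of the routing decision, using a plain structural type
// so it runs without LangChain installed.
type Msg = { role: "human" | "ai"; tool_calls?: { name: string }[] };

const END = "__end__"; // stands in for LangGraph's END sentinel

function routeAfterAgent(messages: Msg[]): string {
  const last = messages[messages.length - 1];
  // Route to tools only when the latest AI message carries tool calls.
  if (last.role === "ai" && last.tool_calls?.length) return "tools";
  return END;
}

// A pending tool call routes to the tool node...
console.log(routeAfterAgent([{ role: "ai", tool_calls: [{ name: "get_account_balance" }] }])); // "tools"
// ...a plain answer ends the graph.
console.log(routeAfterAgent([{ role: "ai" }])); // "__end__"
```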
- `agentNode` returns an AI message with a tool call
- `toolNode` returns one `ToolMessage`
- the second agent pass consumes that message and produces a final answer
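One low-friction way to get that second assertion is to extract the tool loop into a function that takes the tool as a parameter, so a test can stub it with no network or delay. Everything below (`runToolCalls`, the `stub`, the simplified types) is illustrative scaffolding, not the tutorial's exact code:

```typescript
// Hypothetical dependency-injected variant of toolNode's loop, shown so the
// tool can be swapped for a stub in unit tests.
type ToolCall = { id: string; args: { accountId: string } };
type ToolMsg = { content: string; tool_call_id: string };

async function runToolCalls(
  calls: ToolCall[],
  invoke: (args: { accountId: string }) => Promise<string>
): Promise<ToolMsg[]> {
  const out: ToolMsg[] = [];
  for (const call of calls) {
    // Same await-then-wrap shape as toolNode, minus the LangChain types.
    out.push({ content: await invoke(call.args), tool_call_id: call.id });
  }
  return out;
}

// The stub stands in for getAccountBalance.invoke().
const stub = async ({ accountId }: { accountId: string }) =>
  `Account ${accountId} balance is $0.00`;

runToolCalls([{ id: "call_1", args: { accountId: "A-10291" } }], stub).then((msgs) => {
  console.log(msgs.length);          // 1
  console.log(msgs[0].tool_call_id); // call_1
});
```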
Next Steps
- Add parallel async tools and execute them with `Promise.all()` inside the tool node
- Replace single-tool routing with the prebuilt `ToolNode` from LangGraph when you need broader multi-tool support
- Add retry logic and timeout handling around external API calls so your agent fails cleanly under load
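The first and last of these ideas can be sketched together: fan out all pending calls with `Promise.all()` and guard each one with a timeout. The tool registry, delays, and `withTimeout` helper below are illustrative stand-ins, not LangGraph APIs:

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Simulated async tools keyed by name, standing in for real backends.
const tools: Record<string, (args: { id: string }) => Promise<string>> = {
  get_balance: async ({ id }) => { await sleep(80); return `balance:${id}`; },
  get_limit: async ({ id }) => { await sleep(50); return `limit:${id}`; },
};

// Race the tool promise against a timer so a slow API fails fast
// instead of stalling the whole graph.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`tool timed out after ${ms}ms`)), ms)
    ),
  ]);
}

async function runParallel(calls: { name: string; args: { id: string } }[]) {
  // All tools start immediately; Promise.all preserves call order in the results.
  return Promise.all(calls.map((c) => withTimeout(tools[c.name](c.args), 1000)));
}

runParallel([
  { name: "get_balance", args: { id: "A-10291" } },
  { name: "get_limit", args: { id: "A-10291" } },
]).then((results) => console.log(results.join(","))); // balance:A-10291,limit:A-10291
```

Inside a real tool node you would map each entry of `lastMessage.tool_calls` to its matching tool, then wrap each resolved value in a `ToolMessage` as in the sequential version above.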
By Cyprian Aarons, AI Consultant at Topiax.