LangGraph Tutorial (TypeScript): handling async tools for beginners
This tutorial shows you how to wire async tools into a LangGraph agent in TypeScript without blocking the graph or mixing up tool results. You need this when your agent calls APIs, databases, or internal services that return promises and you want the graph to wait correctly, route tool outputs back into the conversation, and keep the code production-safe.
What You'll Need
- Node.js 18+ installed
- A TypeScript project with a `tsconfig.json`
- These packages:
  - `@langchain/langgraph`
  - `@langchain/openai`
  - `@langchain/core`
  - `zod`
  - `dotenv`
- An OpenAI API key in `.env`:
  `OPENAI_API_KEY=...`
- Basic familiarity with:
  - LangGraph nodes and edges
  - Tool calling in chat models
  - Async/await in TypeScript
Step-by-Step
1. Start by defining an async tool. This example simulates a slow external service using a promise, which is exactly the kind of function that needs proper async handling in a graph.
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Simulates a slow external service. The tool function is async and
// returns a promise; LangGraph awaits it before continuing the graph.
export const getPolicyStatus = tool(
  async ({ policyId }: { policyId: string }) => {
    await new Promise((resolve) => setTimeout(resolve, 500));
    // Return a string: tool results become ToolMessage content,
    // and a string is the safest shape for that.
    return JSON.stringify({
      policyId,
      status: "active",
      updatedAt: new Date().toISOString(),
    });
  },
  {
    name: "get_policy_status",
    description: "Fetch the current status of an insurance policy",
    schema: z.object({
      policyId: z.string().min(1),
    }),
  }
);
```
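Before wiring it into a graph, you can sanity-check the tool on its own. Tools created with `tool()` are runnables, so `.invoke()` validates the input against the zod schema and awaits the async function; a quick sketch:

```typescript
import { getPolicyStatus } from "./tools";

async function smokeTest() {
  // Runs the tool directly, outside any graph or model.
  const result = await getPolicyStatus.invoke({ policyId: "P-10293" });
  console.log(result); // JSON string with policyId, status, updatedAt
}

smokeTest().catch(console.error);
```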
2. Create a chat model that can call tools. The important part here is binding the tool to the model so the assistant can decide when to use it.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { getPolicyStatus } from "./tools";
export const model = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
}).bindTools([getPolicyStatus]);
3. Build a LangGraph workflow with two nodes: one for the assistant and one for executing tools. The assistant node generates tool calls, and the tools node runs them asynchronously before sending results back.
```typescript
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { AIMessage } from "@langchain/core/messages";
import { model } from "./model";
import { getPolicyStatus } from "./tools";

// ToolNode executes any tool calls on the last AI message and
// appends the results as ToolMessages.
const tools = new ToolNode([getPolicyStatus]);

async function assistantNode(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

export const graph = new StateGraph(MessagesAnnotation)
  .addNode("assistant", assistantNode)
  .addNode("tools", tools)
  .addEdge("__start__", "assistant")
  .addConditionalEdges("assistant", (state) => {
    // Route to the tools node only when the model asked for a tool.
    const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
    return lastMessage.tool_calls?.length ? "tools" : "__end__";
  })
  .addEdge("tools", "assistant")
  .compile();
```
4. Run the graph with a user message that should trigger the tool. The model will request a policy status lookup, LangGraph will await the async tool, and the assistant will then answer using the tool output.
```typescript
import { HumanMessage } from "@langchain/core/messages";
import { graph } from "./graph";

async function main() {
  const result = await graph.invoke({
    messages: [
      new HumanMessage("Check policy P-10293 and tell me if it's active."),
    ],
  });
  console.log(
    result.messages.map((m) => ({
      type: m._getType(),
      content: m.content,
      tool_calls: (m as any).tool_calls,
    }))
  );
}

main().catch(console.error);
```
5. To handle multiple async tools, add each one to both `bindTools()` and the `ToolNode`. LangGraph will run whichever tool calls the model emits, and each tool can be fully asynchronous as long as it returns a promise. Define the second tool, then register it in both places as sketched after the code below.
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const lookupCustomer = tool(
  async ({ customerId }: { customerId: string }) => {
    await new Promise((resolve) => setTimeout(resolve, 300));
    return JSON.stringify({ customerId, tier: "gold" });
  },
  {
    name: "lookup_customer",
    description: "Look up customer account details",
    schema: z.object({
      customerId: z.string().min(1),
    }),
  }
);
```
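Registering the second tool touches two of the earlier files. A minimal sketch, assuming both tools are exported from `./tools` as above:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { getPolicyStatus, lookupCustomer } from "./tools";

// model.ts: bind both tools so the assistant can request either one.
export const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).bindTools([getPolicyStatus, lookupCustomer]);

// graph.ts: give ToolNode the same list so it can execute the calls.
const tools = new ToolNode([getPolicyStatus, lookupCustomer]);
```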
Testing It
Run the script with `npx tsx src/main.ts` or your preferred TypeScript runtime. If everything is wired correctly, you should see at least one AI message with a `tool_calls` array, followed by a tool result message and then a final assistant response.
If the model answers directly without calling the tool, tighten your system prompt so it must use tools for policy checks; one way to do that is sketched below. If you get an error about missing API keys or unsupported models, confirm `.env` is loaded before constructing `ChatOpenAI`.
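A sketch of that fix: prepend a `SystemMessage` inside `main()` (the exact wording is yours to tune):

```typescript
import { SystemMessage, HumanMessage } from "@langchain/core/messages";
import { graph } from "./graph";

async function main() {
  const result = await graph.invoke({
    messages: [
      // Nudges the model to always use the tool for policy questions.
      new SystemMessage(
        "You are an insurance assistant. Always call get_policy_status " +
          "before answering questions about a policy's status."
      ),
      new HumanMessage("Check policy P-10293 and tell me if it's active."),
    ],
  });
  console.log(result.messages.at(-1)?.content);
}

main().catch(console.error);
```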
For debugging, log every message in the returned state. In production, also log tool latency separately so slow external calls are easy to spot.
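For the latency part, one lightweight option is to time the call inside the tool function itself. A minimal sketch, written as a drop-in variant of the earlier tool, with `console.log` standing in for your real logger:

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const getPolicyStatus = tool(
  async ({ policyId }: { policyId: string }) => {
    const start = Date.now();
    try {
      // A real API call would go here.
      await new Promise((resolve) => setTimeout(resolve, 500));
      return JSON.stringify({ policyId, status: "active" });
    } finally {
      // Logs even when the call throws, so failures are timed too.
      console.log(`get_policy_status took ${Date.now() - start}ms`);
    }
  },
  {
    name: "get_policy_status",
    description: "Fetch the current status of an insurance policy",
    schema: z.object({ policyId: z.string().min(1) }),
  }
);
```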
Next Steps
- Add retries and timeouts around real API-backed tools (a generic wrapper is sketched after this list)
- Learn how to stream LangGraph events for partial responses
- Add human approval before executing sensitive tools
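For the first item, a sketch of a generic wrapper you could reuse inside any tool function; `withRetryAndTimeout` and its parameters are illustrative names, not a LangGraph API:

```typescript
// Retries a promise-returning function, giving up on each attempt
// after a deadline. Note the losing promise is not cancelled; for a
// real API you would thread through an AbortSignal instead.
async function withRetryAndTimeout<T>(
  fn: () => Promise<T>,
  { retries = 2, timeoutMs = 5_000 } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await Promise.race([
        fn(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error("tool call timed out")), timeoutMs)
        ),
      ]);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Usage inside a tool function:
// const data = await withRetryAndTimeout(() => fetch(url).then((r) => r.json()));
```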
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.