LangGraph Tutorial (TypeScript): handling async tools for intermediate developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to wire async tools into a LangGraph agent in TypeScript without blocking the graph or mangling tool state. You need this when your tools call external APIs, hit databases, or do any I/O that returns a Promise.

What You'll Need

  • Node.js 18+
  • A TypeScript project with "type": "module" or compatible ESM setup
  • Packages:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • zod
    • dotenv
  • An OpenAI API key in .env:
    • OPENAI_API_KEY=...
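Assuming npm as the package manager, the dependencies above can be installed in one command (note the scoped package name @langchain/langgraph):

```shell
# Runtime dependencies: scoped LangChain packages plus zod and dotenv.
npm install @langchain/langgraph @langchain/openai @langchain/core zod dotenv

# TypeScript tooling, if the project does not already have it.
npm install -D typescript tsx @types/node
```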

Step-by-Step

  1. Start with a minimal graph that can call an async tool. The important part is that the tool returns a Promise, and the graph routes tool calls through a dedicated node instead of trying to fake sync execution.
import "dotenv/config";
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import {
  Annotation,
  END,
  START,
  StateGraph,
} from "@langchain/langgraph";
import {
  AIMessage,
  BaseMessage,
  HumanMessage,
  ToolMessage,
} from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";

const getAccountBalance = tool(
  async ({ accountId }: { accountId: string }) => {
    await new Promise((r) => setTimeout(r, 300));
    return `Account ${accountId} balance is $12,450.22`;
  },
  {
    name: "get_account_balance",
    description: "Fetch an account balance by account ID.",
    schema: z.object({
      accountId: z.string(),
    }),
  }
);
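Stripped of the LangChain wrapper, the tool body is just an async function, which is worth internalizing before wiring it into a graph. A minimal sketch (plain TypeScript, mock delay and balance mirroring the example above, no LangChain imports):

```typescript
// The tool body on its own: an async function that simulates I/O latency.
// The delay and balance value are mock data, matching the tutorial's example.
async function getAccountBalanceBody(accountId: string): Promise<string> {
  await new Promise((r) => setTimeout(r, 50)); // simulate an API round trip
  return `Account ${accountId} balance is $12,450.22`;
}

// Because it returns a Promise, callers must await it, exactly as the
// graph's tool node will later do via tool.invoke().
getAccountBalanceBody("acct_42").then((r) => console.log(r));
```

Unit-testing this body directly, before it ever touches the graph, is the fastest way to isolate tool bugs from graph bugs.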
  2. Define graph state and the model node. Keep messages in state and let the model decide when to call the tool. This pattern is stable because the LLM produces tool calls, and your graph executes them explicitly.
const State = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
}).bindTools([getAccountBalance]);

async function callModel(state: typeof State.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}
  3. Add a tool execution node that handles async work correctly. The key detail is reading the last AI message, extracting each tool call, and returning ToolMessage objects with matching tool_call_id values.
async function callTools(state: typeof State.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  if (!(lastMessage instanceof AIMessage)) {
    return { messages: [] };
  }

  const toolMessages = await Promise.all(
    (lastMessage.tool_calls ?? []).map(async (call) => {
      const result = await getAccountBalance.invoke(call.args);
      return new ToolMessage({
        content: String(result),
        tool_call_id: call.id ?? "",
      });
    })
  );

  return { messages: toolMessages };
}
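The Promise.all call above is what keeps multiple tool calls from running one after another. A self-contained sketch (plain TypeScript with setTimeout mocks, no LangChain imports; all names are illustrative) shows the latency difference:

```typescript
// Mock async "tool": resolves after a delay, like a real API call would.
async function mockTool(name: string, ms: number): Promise<string> {
  await new Promise((r) => setTimeout(r, ms));
  return `${name} done`;
}

// Sequential execution: total time is roughly the SUM of the delays.
async function runSequential(calls: Array<[string, number]>): Promise<string[]> {
  const results: string[] = [];
  for (const [name, ms] of calls) {
    results.push(await mockTool(name, ms));
  }
  return results;
}

// Concurrent execution with Promise.all: total time is roughly the LONGEST
// single delay, which is why the callTools node above uses this pattern.
async function runConcurrent(calls: Array<[string, number]>): Promise<string[]> {
  return Promise.all(calls.map(([name, ms]) => mockTool(name, ms)));
}

async function demo() {
  const calls: Array<[string, number]> = [["a", 50], ["b", 50], ["c", 50]];

  let start = Date.now();
  await runSequential(calls);
  console.log(`sequential: ~${Date.now() - start}ms`); // roughly 150ms

  start = Date.now();
  await runConcurrent(calls);
  console.log(`concurrent: ~${Date.now() - start}ms`); // roughly 50ms
}

demo();
```

One caveat: Promise.all rejects as soon as any one call rejects, so if you want partial results when a single tool fails, Promise.allSettled is the safer choice.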
  4. Route between the model and tools with conditional edges. If the model requests a tool, execute it; otherwise end the run. This keeps the graph loop explicit and avoids brittle agent logic hidden inside one function.
function shouldContinue(state: typeof State.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  if (lastMessage instanceof AIMessage && (lastMessage.tool_calls?.length ?? 0) > 0) {
    return "tools";
  }
  return END;
}
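The routing decision is pure logic, so it can be exercised without a model in the loop. This sketch mimics shouldContinue with plain objects (MockMessage and MOCK_END are illustrative stand-ins, not LangGraph types):

```typescript
// Minimal stand-ins for messages: only the fields the router actually reads.
type MockMessage = { role: "ai" | "human"; tool_calls?: { name: string }[] };

const MOCK_END = "__end__"; // stand-in for LangGraph's END sentinel

// Same decision as shouldContinue: route to "tools" only when the last
// message came from the model and requests at least one tool call.
function route(messages: MockMessage[]): string {
  const last = messages[messages.length - 1];
  if (last.role === "ai" && (last.tool_calls?.length ?? 0) > 0) {
    return "tools";
  }
  return MOCK_END;
}

console.log(route([{ role: "ai", tool_calls: [{ name: "get_account_balance" }] }])); // "tools"
console.log(route([{ role: "ai" }])); // "__end__"
console.log(route([{ role: "human" }])); // "__end__"
```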

const graph = new StateGraph(State)
  .addNode("model", callModel)
  .addNode("tools", callTools)
  .addEdge(START, "model");
  5. Finish the graph and run it with a real input. Use streaming if you want observability later, but for validation a plain invoke is enough.
const app = graph
  .addConditionalEdges("model", shouldContinue, {
    tools: "tools",
    [END]: END,
  })
  .addEdge("tools", "model")
  .compile();

async function main() {
  const result = await app.invoke({
    messages: [new HumanMessage("What is the balance for account acct_42?")],
  });
  const finalMessage = result.messages[result.messages.length - 1];
  console.log(finalMessage.content);
}

main().catch(console.error);
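Real tools can hang on slow networks. A defensive variant wraps each invocation with a timeout and converts failures into plain strings, so the model still receives a ToolMessage instead of the tools node throwing. This is a plain-TypeScript sketch; withTimeout, safeInvoke, and the 5-second default are illustrative helpers, not LangGraph APIs:

```typescript
// Reject a promise if it has not settled within ms milliseconds, so one
// slow API call cannot stall the whole tools node.
function withTimeout<T>(promise: Promise<T>, ms = 5000): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`tool call timed out after ${ms}ms`)),
      ms
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}

// Usage inside a tools node: wrap each invocation and turn failures into
// strings the model can read and recover from.
async function safeInvoke(run: () => Promise<string>): Promise<string> {
  try {
    return await withTimeout(run());
  } catch (err) {
    return `Tool error: ${err instanceof Error ? err.message : String(err)}`;
  }
}
```

Surfacing the error text as tool output is a deliberate choice: it lets the model apologize, retry, or ask the user for clarification instead of the whole run crashing.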

Keep Learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
