LangChain Tutorial (TypeScript): handling async tools for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build a LangChain TypeScript agent that can call async tools correctly, wait for their results, and return a final answer without race conditions. You need this when your tool does real work like hitting an API, querying a database, or calling internal services that return Promises.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • An OpenAI API key
  • A LangChain-compatible project setup
  • Packages:
    • langchain
    • @langchain/core
    • @langchain/openai
    • zod
    • dotenv

Step-by-Step

  1. Set up your project and install the dependencies. This example uses ESM-style imports, so make sure your TypeScript config supports them.
npm init -y
npm i langchain @langchain/core @langchain/openai zod dotenv
npm i -D typescript tsx @types/node
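Since the examples use ESM-style imports and top-level await, your tsconfig.json needs module settings that support them. This is one reasonable configuration, not the only one:

```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "strict": true,
    "esModuleInterop": true
  }
}
```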
  2. Create a simple async tool. The important part is that the function returns a Promise, because LangChain will await it before giving the result back to the model.
import "dotenv/config";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const getAccountStatus = tool(
  async ({ accountId }: { accountId: string }) => {
    await new Promise((resolve) => setTimeout(resolve, 500));
    return JSON.stringify({
      accountId,
      status: "active",
      balance: 1240.55,
      currency: "USD",
    });
  },
  {
    name: "get_account_status",
    description: "Fetch the current status and balance for an account.",
    schema: z.object({
      accountId: z.string().describe("The account identifier"),
    }),
  }
);
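To see why the Promise matters in isolation, here is a minimal stand-alone sketch (plain TypeScript, no LangChain, with a hypothetical fakeAccountTool) of what the framework does for you: it awaits the tool function before using the result.

```typescript
// A tool is just an async function; the caller must await it
// before the result is usable.
type ToolFn = (args: { accountId: string }) => Promise<string>;

const fakeAccountTool: ToolFn = async ({ accountId }) => {
  await new Promise((resolve) => setTimeout(resolve, 50)); // simulate I/O latency
  return JSON.stringify({ accountId, status: "active" });
};

async function runTool(fn: ToolFn, args: { accountId: string }) {
  const raw = await fn(args); // without await, raw would be a pending Promise
  return JSON.parse(raw) as { accountId: string; status: string };
}
```

If you drop the await inside runTool, JSON.parse receives a Promise object instead of a string and throws — which is exactly the class of bug this tutorial is about.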
  3. Wire the tool into a chat model and bind it. For beginners, this is the cleanest way to let the model decide when to call the async tool.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { getAccountStatus } from "./tool.js";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const modelWithTools = model.bindTools([getAccountStatus]);

const response = await modelWithTools.invoke([
  new HumanMessage("Check account A-100 and tell me if it's healthy."),
]);

console.log(response.tool_calls);
  4. Execute the tool calls and send the results back to the model. This is where async handling matters most: you must await each tool call before asking the model for the final response.
import { HumanMessage, ToolMessage } from "@langchain/core/messages";
import { getAccountStatus } from "./tool.js";
// modelWithTools is the bound model from the previous step.

async function run() {
  const initial = await modelWithTools.invoke([
    new HumanMessage("Check account A-100 and tell me if it's healthy."),
  ]);

  const toolMessages = await Promise.all(
    (initial.tool_calls ?? []).map(async (call) => {
      if (call.name === "get_account_status") {
        const result = await getAccountStatus.invoke(call.args);
        return new ToolMessage({
          content: result,
          tool_call_id: call.id!,
        });
      }

      throw new Error(`Unknown tool: ${call.name}`);
    })
  );

  const final = await modelWithTools.invoke([
    new HumanMessage("Check account A-100 and tell me if it's healthy."),
    initial,
    ...toolMessages,
  ]);

  console.log(final.content);
}

run();
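Note that Promise.all above starts all tool calls at once, while a plain loop would await them one at a time. A small stand-alone sketch of the difference, using a hypothetical slowCall helper instead of a real tool:

```typescript
// slowCall stands in for any async tool that takes ms milliseconds.
async function slowCall(id: number, ms: number): Promise<number> {
  await new Promise((resolve) => setTimeout(resolve, ms));
  return id;
}

// Both calls run in parallel, so total wait is roughly 100 ms.
async function concurrent(): Promise<number[]> {
  return Promise.all([slowCall(1, 100), slowCall(2, 100)]);
}

// Each call is awaited before the next starts, so total wait is roughly 200 ms.
async function sequential(): Promise<number[]> {
  const results: number[] = [];
  for (const task of [() => slowCall(1, 100), () => slowCall(2, 100)]) {
    results.push(await task());
  }
  return results;
}
```

Promise.all is the right default when the tool calls are independent; switch to a sequential loop only when one call's result feeds the next.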
  5. Put it all in one runnable file so you can test it locally. This version uses one async tool and prints both the raw tool call and the final answer.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, ToolMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getAccountStatus = tool(
  async ({ accountId }: { accountId: string }) => {
    await new Promise((resolve) => setTimeout(resolve, 500));
    return JSON.stringify({
      accountId,
      status: "active",
      balance: 1240.55,
      currency: "USD",
    });
  },
  {
    name: "get_account_status",
    description: "Fetch the current status and balance for an account.",
    schema: z.object({ accountId: z.string() }),
  }
);

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const modelWithTools = model.bindTools([getAccountStatus]);

async function main() {
  const first = await modelWithTools.invoke([
    new HumanMessage("Check account A-100 and tell me if it's healthy."),
  ]);

  console.log("Tool calls:", first.tool_calls);

  const toolMessages = await Promise.all(
    (first.tool_calls ?? []).map(async (call) => {
      const result = await getAccountStatus.invoke(call.args);
      return new ToolMessage({
        content: result,
        tool_call_id: call.id!,
      });
    })
  );

  const final = await modelWithTools.invoke([
    new HumanMessage("Check account A-100 and tell me if it's healthy."),
    first,
    ...toolMessages,
  ]);

  console.log("Final:", final.content);
}

main();

Testing It

Run the file with npx tsx your-file.ts. If everything is wired correctly, you should see at least one tool call printed first, then a final natural-language response that uses the JSON returned by the tool.

If you get no tool_calls, your prompt may not be forcing a lookup strongly enough, or your model may be ignoring tools because of configuration issues. If you see errors about missing API keys, confirm OPENAI_API_KEY is set in your environment or .env file.

For debugging, log first.additional_kwargs and verify that your tool name matches exactly between bindTools() and your manual dispatcher. In production code, always handle unknown tools explicitly instead of assuming only one exists.
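One way to handle unknown tools explicitly is a dispatcher keyed by tool name. This is a sketch with a hypothetical handlers map, not LangChain API — the real tool objects from bindTools() would back each entry:

```typescript
// Map tool names to async handlers instead of a single if block.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

const handlers: Record<string, ToolHandler> = {
  get_account_status: async (args) =>
    JSON.stringify({ status: "active", args }),
  // Add more tools here; keys must match the names passed to bindTools().
};

async function dispatch(
  name: string,
  args: Record<string, unknown>
): Promise<string> {
  const handler = handlers[name];
  if (!handler) {
    // Fail loudly rather than silently dropping an unrecognized tool call.
    throw new Error(`Unknown tool: ${name}`);
  }
  return handler(args);
}
```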

Next Steps

  • Add multiple async tools and route them with a dispatcher instead of hardcoding one if block.
  • Wrap each external call with timeout and retry logic so slow APIs do not hang your agent.
  • Move from manual orchestration to LangChain agents or LangGraph once you need multi-step workflows.
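The timeout-and-retry idea from the first two bullets can be sketched with nothing but Promise.race and a loop. These helper names (withTimeout, withRetry) are illustrative, not from any library; the default of 3 attempts is an assumption:

```typescript
// Reject if the wrapped promise takes longer than ms milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Retry an async factory a fixed number of times before giving up.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

You would wrap the tool body, not the model call: withRetry(() => withTimeout(fetchAccount(id), 2000)) keeps a slow API from stalling the whole agent loop.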
