LangChain Tutorial (TypeScript): handling async tools for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to wire async tools into a LangChain TypeScript agent without blocking the event loop or breaking tool execution order. You need this when your agent calls APIs, databases, queues, or internal services that return promises and you want predictable behavior under load.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • An OpenAI API key
  • Installed packages:
    • langchain
    • @langchain/openai
    • zod
    • dotenv
  • A project configured for ESM or a TypeScript runtime that supports modern imports
  • Basic familiarity with LangChain agents and tools

Step-by-Step

  1. Start with a clean TypeScript setup and load your API key from the environment. Keep the config boring and explicit; async tool bugs get harder to debug when environment setup is sloppy.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is missing");
}

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
  2. Define an async tool using tool() and make the implementation an async function. The important part is that the function can await real I/O, which is what you need for fetch calls, database reads, or service lookups.
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const lookupPolicyStatus = tool(
  async ({ policyNumber }) => {
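    // Simulated latency standing in for real I/O; swap in a fetch or DB call here.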
    await new Promise((resolve) => setTimeout(resolve, 300));

    const statusMap: Record<string, string> = {
      P1001: "active",
      P1002: "lapsed",
      P1003: "pending review",
    };

    return statusMap[policyNumber] ?? "not found";
  },
  {
    name: "lookup_policy_status",
    description: "Look up the current status of an insurance policy.",
    schema: z.object({
      policyNumber: z.string().min(1),
    }),
  }
);
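Before wiring the tool into an agent, sanity-check it in isolation. Tools built with tool() can be invoked directly via invoke(), so a quick standalone check looks like this (the expected values come from the statusMap above):
// Quick standalone check of the tool, outside any agent loop.
const active = await lookupPolicyStatus.invoke({ policyNumber: "P1001" });
console.log(active); // "active"

const missing = await lookupPolicyStatus.invoke({ policyNumber: "P9999" });
console.log(missing); // "not found"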
  3. Build an agent that can call the tool asynchronously and then execute it through AgentExecutor. This is the part where LangChain handles the loop between model reasoning and tool invocation.
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a support agent. Use tools when needed."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const tools = [lookupPolicyStatus];

const agent = await createOpenAIToolsAgent({
  llm: model,
  tools,
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools,
});
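While debugging, it helps to see each tool call the agent makes. AgentExecutor accepts options such as returnIntermediateSteps and maxIterations for this; a variant you might use during development:
// Optional: surface intermediate tool calls while debugging.
const debugExecutor = new AgentExecutor({
  agent,
  tools,
  returnIntermediateSteps: true, // include each tool call and its result in the output
  maxIterations: 5,              // stop runaway reasoning loops
});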
  4. Run a query that forces the agent to use the async tool, then inspect the response. Use await all the way through; if you forget it at any layer, you will get half-finished behavior or unhandled promise rejections.
const result = await executor.invoke({
  input: "Check policy P1002 and tell me its status.",
});

console.log(result.output);
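Because every layer is promise-based, independent queries can run concurrently instead of queueing behind each other. A minimal sketch using the sample policy numbers from step 2:
// Independent requests resolve concurrently; the 300 ms tool delay overlaps
// instead of stacking up serially.
const [a, b] = await Promise.all([
  executor.invoke({ input: "Check policy P1001 and tell me its status." }),
  executor.invoke({ input: "Check policy P1003 and tell me its status." }),
]);

console.log(a.output);
console.log(b.output);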
  5. If your real tool does network I/O, make it resilient before you ship it. Add timeouts, explicit error handling, and deterministic outputs so the agent can recover cleanly instead of hallucinating around failed calls.
const fetchClaimSummary = tool(
  async ({ claimId }) => {
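    // Abort the upstream request if it takes longer than 2 seconds.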
    const controller = new AbortController();
    const timeout = setTimeout(() => controller.abort(), 2000);

    try {
      const response = await fetch(`https://example.com/claims/${claimId}`, {
        signal: controller.signal,
      });

      if (!response.ok) {
        return `claim lookup failed with status ${response.status}`;
      }

      const data = (await response.json()) as { summary?: string };
      return data.summary ?? "no summary available";
    } catch (error) {
      return `claim lookup error: ${(error as Error).message}`;
    } finally {
      clearTimeout(timeout);
    }
  },
  {
    name: "fetch_claim_summary",
    description: "Fetch a claim summary by claim ID.",
    schema: z.object({
      claimId: z.string().min(1),
    }),
  }
);
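If several tools need the same timeout pattern, consider factoring it into a shared helper. fetchWithTimeout below is a hypothetical utility for this tutorial, not a LangChain API; a minimal sketch:
// Hypothetical helper, not a LangChain API: wraps fetch with an AbortController
// timeout so every tool gets the same bounded-latency behavior.
async function fetchWithTimeout(
  url: string,
  timeoutMs: number
): Promise<Response> {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timeout);
  }
}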

Testing It

Run the script with ts-node, tsx, or your normal build pipeline and confirm that the final output includes the policy status returned by the tool. Then change the input to a policy number that does not exist and verify that the agent handles the "not found" case instead of crashing.

Next, swap in a real async call such as fetch() against an internal service or mock API and confirm that latency does not block other work in your app. If you want stronger confidence, add a unit test around the tool function itself and a separate integration test around executor.invoke().
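For the tool-level unit test, Node 18+ ships a built-in test runner, so no extra framework is strictly required. A minimal sketch using node:test and the statusMap values from step 2:
import { test } from "node:test";
import assert from "node:assert/strict";

// Unit test the tool function directly, without an LLM in the loop.
// Assumes lookupPolicyStatus is exported from your tools module.
test("lookup_policy_status returns known statuses", async () => {
  assert.equal(
    await lookupPolicyStatus.invoke({ policyNumber: "P1002" }),
    "lapsed"
  );
  assert.equal(
    await lookupPolicyStatus.invoke({ policyNumber: "P9999" }),
    "not found"
  );
});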

Next Steps

  • Add retries and circuit breaking around external async tools (a starter sketch follows this list)
  • Learn how to stream agent responses while tools are running
  • Move from single-tool agents to multi-tool routing with structured outputs
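For the first item, a simple retry wrapper is a reasonable starting point. withRetry below is a hypothetical helper with naive exponential backoff, not a LangChain API; a minimal sketch:
// Hypothetical helper, not a LangChain API: retries a promise-returning
// function with simple exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Back off: 250 ms, 500 ms, 1000 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}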

By Cyprian Aarons, AI Consultant at Topiax.