LangChain Tutorial (TypeScript): handling async tools for advanced developers
This tutorial shows how to build a LangChain TypeScript agent that can call async tools correctly, wait for their results, and keep the tool contract clean under real-world latency. You need this when your tools hit databases, internal APIs, queues, or file systems and you cannot afford race conditions or broken tool outputs.
What You'll Need
- Node.js 18+
- TypeScript 5+
- An OpenAI API key
- A LangChain-compatible chat model package: `langchain` and `@langchain/openai`
- A project initialized with ESM support
- Basic familiarity with:
  - `Runnable` concepts
  - tool calling in LangChain
  - async/await in TypeScript
Install the packages:

```bash
npm install langchain @langchain/openai zod
npm install -D typescript tsx @types/node
```

Set your API key:

```bash
export OPENAI_API_KEY="your_key_here"
```
Step-by-Step
1. Start with a typed async tool. The important part is that the tool returns a Promise and validates input with Zod, because agent tooling fails in ugly ways when arguments drift.

```ts
import { DynamicStructuredTool } from "langchain/tools";
import { z } from "zod";

export const fetchPolicyStatus = new DynamicStructuredTool({
  name: "fetch_policy_status",
  description: "Fetch the current policy status by policy number.",
  schema: z.object({
    policyNumber: z.string().min(6),
  }),
  func: async ({ policyNumber }) => {
    // Simulate a slow downstream call (database, API, queue).
    await new Promise((r) => setTimeout(r, 300));
    return JSON.stringify({
      policyNumber,
      status: "active",
      updatedAt: new Date().toISOString(),
    });
  },
});
```
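The contract above boils down to three things: validate the input, await the async work, return a string. Here is a minimal pure-TypeScript sketch of that same contract with no LangChain or Zod dependency, useful for seeing exactly what the tool promises its caller. The names `fetchPolicyStatusPlain` and `PolicyArgs` are illustrative, not part of any library.

```typescript
// Minimal sketch of the async-tool contract: validate, await, return JSON.
// No framework involved; names here are illustrative only.
type PolicyArgs = { policyNumber: string };

function validatePolicyArgs(input: unknown): PolicyArgs {
  if (typeof input !== "object" || input === null) {
    throw new Error("expected an object");
  }
  const { policyNumber } = input as Record<string, unknown>;
  if (typeof policyNumber !== "string" || policyNumber.length < 6) {
    throw new Error("policyNumber must be a string with at least 6 characters");
  }
  return { policyNumber };
}

async function fetchPolicyStatusPlain(input: unknown): Promise<string> {
  const { policyNumber } = validatePolicyArgs(input);
  await new Promise((r) => setTimeout(r, 10)); // simulated downstream latency
  return JSON.stringify({ policyNumber, status: "active" });
}
```

Whether the validation comes from Zod or hand-rolled guards, the key property is the same: bad arguments fail loudly at the tool boundary instead of deep inside a downstream call.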
2. Wire the tool into a chat model that supports tool calling. With LangChain JS, you bind tools to the model first, then let the model decide when to call them.

```ts
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { fetchPolicyStatus } from "./fetchPolicyStatus.js";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const llmWithTools = model.bindTools([fetchPolicyStatus]);

const result = await llmWithTools.invoke([
  new HumanMessage("Check policy ABC123 and tell me if it is active."),
]);

// The model does not run the tool itself; it emits a tool_calls array.
console.log(result.tool_calls);
```
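When you log `result.tool_calls`, each entry carries a tool name, parsed arguments, and a call id. The sketch below approximates that shape and adds a narrowing guard for custom loops that receive untyped data; treat the field list as an assumption and check the types shipped with your installed LangChain version.

```typescript
// Approximate shape of one entry in result.tool_calls (a sketch, not the
// authoritative LangChain type; verify against your installed version).
type SketchToolCall = {
  name: string;                  // e.g. "fetch_policy_status"
  args: Record<string, unknown>; // already-parsed arguments
  id?: string;                   // provider call id, echoed back in a ToolMessage
};

// Narrowing guard: useful before handing args to a tool in custom orchestration.
function isToolCall(value: unknown): value is SketchToolCall {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.name === "string" && typeof v.args === "object" && v.args !== null;
}
```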
3. Execute the tool call manually so you can see the async boundary clearly. This is the pattern you want when debugging advanced flows or building custom orchestration around LangChain.

```ts
import { ToolMessage } from "@langchain/core/messages";
import { fetchPolicyStatus } from "./fetchPolicyStatus.js";

const toolCall = result.tool_calls?.[0];
if (!toolCall) {
  throw new Error("Model did not request a tool call.");
}

// Await the tool here; the model only sees the result once you send it back.
const toolResult = await fetchPolicyStatus.invoke(toolCall.args);

const followUp = await llmWithTools.invoke([
  new HumanMessage("Check policy ABC123 and tell me if it is active."),
  result,
  new ToolMessage({
    content: toolResult,
    tool_call_id: toolCall.id!,
  }),
]);
console.log(followUp.content);
```
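Once you have more than one tool, the manual loop above wants a dispatcher that routes each tool call by name and awaits all of them. The sketch below shows that pattern in plain TypeScript; `executeToolCalls`, the `handlers` map, and the `ToolCall` type are hypothetical helpers for illustration, not LangChain APIs.

```typescript
// Hypothetical dispatcher: route tool calls by name to async handlers,
// so the manual execution loop scales past a single tool.
type ToolCall = { name: string; args: unknown; id: string };
type ToolHandler = (args: unknown) => Promise<string>;

const handlers: Record<string, ToolHandler> = {
  fetch_policy_status: async (args) => {
    const { policyNumber } = args as { policyNumber: string };
    return JSON.stringify({ policyNumber, status: "active" });
  },
};

async function executeToolCalls(
  calls: ToolCall[],
): Promise<{ id: string; content: string }[]> {
  return Promise.all(
    calls.map(async (call) => {
      const handler = handlers[call.name];
      if (!handler) {
        throw new Error(`No handler registered for tool "${call.name}"`);
      }
      // Each result keeps its call id so it can be paired with a ToolMessage.
      return { id: call.id, content: await handler(call.args) };
    }),
  );
}
```

Keeping the call id alongside each result matters: the follow-up `ToolMessage` must echo the id of the call it answers, or the model cannot match results to requests.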
4. Move to an agent executor when you want LangChain to manage the loop for you. This is the production-friendly version once your tools are stable and you no longer want to manually route every call. Note that `createToolCallingAgent` expects a chat prompt containing an `agent_scratchpad` messages placeholder, not a plain string template.

```ts
import { createToolCallingAgent, AgentExecutor } from "langchain/agents";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { fetchPolicyStatus } from "./fetchPolicyStatus.js";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a support assistant. Use tools when needed."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createToolCallingAgent({
  llm: model,
  tools: [fetchPolicyStatus],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [fetchPolicyStatus],
});

const response = await executor.invoke({
  input: "Is policy ABC123 active?",
});
console.log(response.output);
```
5. Handle multiple async lookups with deterministic behavior. If one request needs several remote lookups, run them in parallel inside the tool layer instead of making the model guess about timing.

```ts
import { DynamicStructuredTool } from "langchain/tools";
import { z } from "zod";

export const enrichClaim = new DynamicStructuredTool({
  name: "enrich_claim",
  description: "Fetch claim and customer data in parallel.",
  schema: z.object({
    claimId: z.string(),
    customerId: z.string(),
  }),
  func: async ({ claimId, customerId }) => {
    // Both lookups start immediately; the tool resolves when both settle.
    const [claim, customer] = await Promise.all([
      Promise.resolve({ claimId, status: "open" }),
      Promise.resolve({ customerId, segment: "premium" }),
    ]);
    return JSON.stringify({ claim, customer });
  },
});
```
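`Promise.all` rejects the whole tool if any lookup fails. When partial data is still useful to the model, `Promise.allSettled` lets one failed upstream call degrade the payload instead. The sketch below is a plain-TypeScript variant of the same tool body; `enrichClaimSettled` is an illustrative name, and the failing customer lookup is simulated.

```typescript
// Variant of the parallel lookup using Promise.allSettled: a failed upstream
// call becomes an error field in the payload rather than a rejected tool.
async function enrichClaimSettled(
  claimId: string,
  customerId: string,
): Promise<string> {
  const [claim, customer] = await Promise.allSettled([
    Promise.resolve({ claimId, status: "open" }),
    // Simulated failure standing in for a real customer-service call.
    Promise.reject(new Error(`customer service unavailable for ${customerId}`)),
  ]);
  return JSON.stringify({
    claim: claim.status === "fulfilled" ? claim.value : { error: String(claim.reason) },
    customer: customer.status === "fulfilled" ? customer.value : { error: String(customer.reason) },
  });
}
```

Returning structured error fields also gives the model something honest to say ("customer data was unavailable") instead of an opaque tool failure.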
Testing It
Run each file with tsx so you get native TypeScript execution without a build step:

```bash
npx tsx src/main.ts
```
Verify three things:
- The model emits a `tool_calls` array for requests that need external data.
- The async tool resolves before the final answer is generated.
- Your final response uses `ToolMessage` context rather than hallucinating a status.
If you see empty tool calls, check that your model supports tool calling and that your prompt does not over-constrain the output format. If you see malformed arguments, tighten your Zod schema and keep your tool descriptions specific.
Next Steps
- Add timeout handling and retries around slow tools using `Promise.race` and backoff.
- Wrap internal APIs with circuit breakers before exposing them as agent tools.
- Learn LangGraph next if you need branching workflows, human approval steps, or durable execution across multiple async actions.
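The timeout idea above can be sketched in a few lines: race the tool's promise against a timer, and clear the timer either way so the process can exit cleanly. `withTimeout` is an illustrative helper name, not a LangChain API.

```typescript
// Sketch of a Promise.race timeout wrapper for slow tools.
// Whichever promise settles first wins; the timer is always cleaned up.
async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`tool timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([work, timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```

Wrap the slow call inside the tool body, e.g. `await withTimeout(lookupPolicy(policyNumber), 5000)` (with `lookupPolicy` standing in for your real downstream call), and layer retries with backoff on top for transient failures.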
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.