LangChain Tutorial (TypeScript): Adding Tool Use for Advanced Developers
This tutorial shows you how to add tool use to a LangChain TypeScript agent so the model can call real functions instead of guessing. You need this when your app has to fetch live data, query internal systems, or perform deterministic actions like calculations and lookups.
What You'll Need
- Node.js 18+
- A TypeScript project with `ts-node` or a build step
- `langchain`
- `@langchain/openai`
- An OpenAI API key in `OPENAI_API_KEY`
- A terminal and basic familiarity with async/await
- Optional: `dotenv` if you want to load env vars from a `.env` file
Step-by-Step
- Start by installing the packages and setting up your environment. I’m using OpenAI models here because the tool-calling API is stable and well supported in LangChain.
```bash
npm install langchain @langchain/openai @langchain/core zod
npm install -D typescript ts-node @types/node
```

Note that `@langchain/core` and `zod` are installed explicitly because the tool definitions below import from both.
- Define a couple of tools with real behavior. Keep them deterministic and small: one for weather lookup, one for math. In production, these would often wrap internal APIs, database queries, or service calls.
```ts
import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const getWeather = tool(
  async ({ city }) => {
    const data: Record<string, string> = {
      London: "Cloudy, 14°C",
      Nairobi: "Sunny, 26°C",
      Boston: "Rainy, 11°C",
    };
    return data[city] ?? `No weather data for ${city}`;
  },
  {
    name: "get_weather",
    description: "Get the current weather for a city.",
    schema: z.object({
      city: z.string().describe("The city name"),
    }),
  }
);

export const calculateTotal = tool(
  async ({ subtotal, taxRate }) => {
    const total = subtotal * (1 + taxRate);
    return total.toFixed(2);
  },
  {
    name: "calculate_total",
    description: "Calculate a total price including tax.",
    schema: z.object({
      subtotal: z.number(),
      taxRate: z.number().min(0).max(1),
    }),
  }
);
```
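Because both handlers are deterministic, you can unit test the underlying logic without an API key or network access. Here is a minimal sketch using plain functions that mirror the tool bodies above (the names `weatherFor` and `totalWithTax` are illustrative helpers, not LangChain APIs):

```typescript
// Plain functions mirroring the tool handlers, so the logic
// can be tested in isolation before wiring up the agent.
const weatherData: Record<string, string> = {
  London: "Cloudy, 14°C",
  Nairobi: "Sunny, 26°C",
  Boston: "Rainy, 11°C",
};

function weatherFor(city: string): string {
  // Same fallback behavior as the get_weather tool
  return weatherData[city] ?? `No weather data for ${city}`;
}

function totalWithTax(subtotal: number, taxRate: number): string {
  // Same rounding behavior as the calculate_total tool
  return (subtotal * (1 + taxRate)).toFixed(2);
}

console.log(weatherFor("Nairobi"));   // Sunny, 26°C
console.log(totalWithTax(120, 0.16)); // 139.20
```

Testing the raw logic first makes it easy to tell later whether a bad answer came from the tool or from the model's tool call.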
- Create the model and bind the tools to it. This is the key step: once bound, the model can emit tool calls instead of only text. Use a model that supports function calling.
```ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const llmWithTools = llm.bindTools([getWeather, calculateTotal]);
```
- Build a small agent loop that keeps executing tool calls until the model returns a final answer. This pattern gives you full control over logging, retries, guardrails, and message persistence.
```ts
import { BaseMessage, HumanMessage, ToolMessage } from "@langchain/core/messages";

async function run() {
  // Type as BaseMessage[] so the array can hold human, AI, and tool messages
  const messages: BaseMessage[] = [
    new HumanMessage(
      "What's the weather in Nairobi and what's the total on $120 with 16% tax?"
    ),
  ];

  while (true) {
    const response = await llmWithTools.invoke(messages);
    messages.push(response);

    const toolCalls = response.tool_calls ?? [];
    if (toolCalls.length === 0) {
      // No tool calls means the model produced its final answer
      console.log(response.content);
      break;
    }

    for (const call of toolCalls) {
      const toolResult =
        call.name === "get_weather"
          ? await getWeather.invoke(call.args)
          : await calculateTotal.invoke(call.args);
      messages.push(
        new ToolMessage({
          content: String(toolResult),
          tool_call_id: call.id ?? "", // call.id is optional in the type
        })
      );
    }
  }
}

run().catch(console.error);
```
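One hazard of an unbounded `while (true)` loop is a model that keeps emitting tool calls and never produces a final answer. A hard turn cap is cheap insurance. Here is a self-contained sketch with a stubbed model; the `fakeModel` function, `ModelReply` type, and `MAX_TURNS` constant are illustrative stand-ins, not LangChain APIs:

```typescript
// Illustrative stand-ins for the LangChain types: a "model" that
// returns tool calls for two turns, then a final text answer.
type ToolCall = { name: string; args: Record<string, unknown>; id: string };
type ModelReply = { content: string; tool_calls: ToolCall[] };

function fakeModel(turn: number): ModelReply {
  if (turn < 2) {
    return {
      content: "",
      tool_calls: [{ name: "get_weather", args: { city: "Nairobi" }, id: `call_${turn}` }],
    };
  }
  return { content: "Sunny, 26°C in Nairobi.", tool_calls: [] };
}

const MAX_TURNS = 5; // hard cap so a misbehaving model cannot loop forever

function runBounded(): string {
  for (let turn = 0; turn < MAX_TURNS; turn++) {
    const reply = fakeModel(turn);
    if (reply.tool_calls.length === 0) return reply.content; // final answer
    // ...execute tool calls and append ToolMessages here...
  }
  throw new Error(`No final answer after ${MAX_TURNS} turns`);
}

console.log(runBounded()); // Sunny, 26°C in Nairobi.
```

In the real loop above, the same guard means replacing `while (true)` with a bounded `for` loop and deciding what to surface to the user when the cap is hit.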
- If you want cleaner routing across more tools, switch to a lookup map instead of an if/else chain. That matters once you have more than a couple of tools and want predictable dispatch.
```ts
const toolsByName = {
  get_weather: getWeather,
  calculate_total: calculateTotal,
} as const;

async function dispatchTool(name: keyof typeof toolsByName, args: unknown) {
  return toolsByName[name].invoke(args as never);
}
```
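To see how the lookup-map pattern behaves, including the unknown-tool case, here is a self-contained sketch with mock tool objects. The `MockTool` shape and `dispatch` helper are illustrative; real LangChain tools expose a compatible `invoke` method:

```typescript
// Mock tools with the same `invoke` surface as LangChain tools,
// so the dispatch pattern can be exercised in isolation.
type MockTool = { invoke: (args: Record<string, unknown>) => Promise<string> };

const registry: Record<string, MockTool> = {
  get_weather: {
    invoke: async (a) => `weather for ${a.city}`,
  },
  calculate_total: {
    invoke: async (a) =>
      ((a.subtotal as number) * (1 + (a.taxRate as number))).toFixed(2),
  },
};

async function dispatch(name: string, args: Record<string, unknown>): Promise<string> {
  const tool = registry[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`); // fail loudly on bad names
  return tool.invoke(args);
}

dispatch("calculate_total", { subtotal: 120, taxRate: 0.16 }).then(console.log); // 139.20
```

The explicit unknown-tool check matters in practice: models occasionally hallucinate tool names, and a clear error is easier to debug than a silent `undefined`.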
Testing It
Run the script with `npx ts-node your-file.ts` after exporting `OPENAI_API_KEY`. If everything is wired correctly, the model should request both tools, your code should execute them, and the final output should combine the results into one answer.
Test edge cases too:
- Ask for an unsupported city and confirm the fallback message is returned.
- Change the prompt so it only needs one tool and verify the model doesn’t call both.
- Pass malformed inputs in development to make sure Zod rejects bad arguments before they hit your business logic.
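Zod enforces the `min(0).max(1)` bound on `taxRate` before your handler ever runs. To illustrate what that guard buys you without pulling in the agent, here is a minimal hand-rolled equivalent (a sketch of the checks the schema performs for you, not the Zod API itself):

```typescript
// A minimal stand-in for the Zod schema on calculate_total:
// validates the arguments and rejects bad input early.
function parseTotalArgs(input: unknown): { subtotal: number; taxRate: number } {
  const obj = (input ?? {}) as Record<string, unknown>;
  const { subtotal, taxRate } = obj;
  if (typeof subtotal !== "number") {
    throw new Error("subtotal must be a number");
  }
  if (typeof taxRate !== "number" || taxRate < 0 || taxRate > 1) {
    throw new Error("taxRate must be a number between 0 and 1");
  }
  return { subtotal, taxRate };
}

console.log(parseTotalArgs({ subtotal: 120, taxRate: 0.16 }));
// { subtotal: 120, taxRate: 0.16 }
```

The point of the edge-case test is to confirm that a tool call like `{ taxRate: 16 }` (a model passing a percentage instead of a fraction) is rejected at the schema boundary rather than producing a silently wrong total.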
If you see plain text answers with no tool calls, check that:
- You’re using a model that supports tool calling
- The tools are actually bound with `bindTools`
- Your loop appends `ToolMessage` responses back into the message history
Next Steps
- Add persistence with Redis or Postgres so multi-turn agents keep state across requests
- Wrap internal APIs as tools with auth checks and timeout handling
- Add structured output parsing after tool use so downstream services get typed JSON instead of free-form text
Keep Learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.