LangChain Tutorial (TypeScript): adding tool use for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add tool use to a LangChain TypeScript agent so it can call real functions instead of guessing. You need this when your app has tasks like fetching live data, doing calculations, or looking up internal systems where the model should not invent answers.

What You'll Need

  • Node.js 18+
  • A TypeScript project
  • langchain
  • @langchain/openai
  • An OpenAI API key in OPENAI_API_KEY
  • A .env file or another way to load environment variables
  • Basic familiarity with async/await and LangChain chat models

Step-by-Step

  1. Install the packages and set up your environment.
    This example uses OpenAI chat models plus LangChain’s built-in tool support.
npm install langchain @langchain/openai dotenv zod
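
If you load secrets with dotenv, a minimal .env file in the project root looks like this (the value shown is a placeholder, not a real key):

OPENAI_API_KEY=sk-your-key-here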
  2. Create a simple tool that the model can call.
    Keep the tool focused and deterministic. For beginners, a calculator-style tool is the easiest place to start.
import "dotenv/config";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

export const multiplyTool = tool(
  async ({ a, b }) => {
    return String(a * b);
  },
  {
    name: "multiply",
    description: "Multiply two numbers together.",
    schema: z.object({
      a: z.number().describe("First number"),
      b: z.number().describe("Second number"),
    }),
  }
);
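
Before wiring the tool to a model, you can sanity-check it on its own. Here is a minimal sketch; the ./multiplyTool path assumes you saved the tool in multiplyTool.ts, matching the imports used in later steps.

import { multiplyTool } from "./multiplyTool";

async function testTool() {
  // Call the tool directly with arguments that match its zod schema.
  const output = await multiplyTool.invoke({ a: 12, b: 9 });
  console.log(output); // "108"
}

testTool();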
  3. Load a chat model and bind the tool to it.
    Binding tells the model which tools exist and lets LangChain handle the function-calling format for you.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { multiplyTool } from "./multiplyTool";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const modelWithTools = model.bindTools([multiplyTool]);

async function main() {
  const response = await modelWithTools.invoke([
    new HumanMessage("What is 12 times 9?"),
  ]);

  console.log(response);
}

main();
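
The logged response should be an AI message containing a tool_calls array rather than a final answer. The exact shape varies by LangChain version, but it looks roughly like this:

// Approximate contents of response.tool_calls (the id differs per run):
[
  {
    name: "multiply",
    args: { a: 12, b: 9 },
    id: "call_abc123",
  },
]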
  4. Execute the tool call returned by the model.
    The first model response usually contains a tool request, not the final answer. You inspect tool_calls, run the matching tool, then send the result back to the model.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, ToolMessage } from "@langchain/core/messages";
import { multiplyTool } from "./multiplyTool";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const modelWithTools = model.bindTools([multiplyTool]);

async function main() {
  const firstResponse = await modelWithTools.invoke([
    new HumanMessage("What is 12 times 9?"),
  ]);

  const toolCall = firstResponse.tool_calls?.[0];
  if (!toolCall?.id) {
    console.log(firstResponse.content);
    return;
  }

  const result = await multiplyTool.invoke(toolCall.args);

  const finalResponse = await modelWithTools.invoke([
    new HumanMessage("What is 12 times 9?"),
    firstResponse,
    new ToolMessage({
      content: result,
      tool_call_id: toolCall.id,
    }),
  ]);

  console.log(finalResponse.content);
}

main();
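
Depending on your LangChain version, you may be able to skip building the ToolMessage by hand: recent releases return a ToolMessage directly when you pass the whole tool call object to the tool. Treat this as an optional shortcut and confirm it against your installed version:

// If supported, invoking with the full tool call yields a ready-made ToolMessage.
const toolMessage = await multiplyTool.invoke(toolCall);

const finalResponse = await modelWithTools.invoke([
  new HumanMessage("What is 12 times 9?"),
  firstResponse,
  toolMessage,
]);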
  5. Wrap it in a small reusable pattern for multiple tools.
    Once this works with one tool, you can add more by keeping the same flow: bind tools, inspect calls, execute them, and feed results back.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { BaseMessage, HumanMessage, ToolMessage } from "@langchain/core/messages";
import { multiplyTool } from "./multiplyTool";

const tools = [multiplyTool];
const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const modelWithTools = model.bindTools(tools);

async function main() {
  const messages: BaseMessage[] = [new HumanMessage("What is 7 times 8?")];
  const firstResponse = await modelWithTools.invoke(messages);

  if (!firstResponse.tool_calls?.length) {
    console.log(firstResponse.content);
    return;
  }

  messages.push(firstResponse);

  for (const call of firstResponse.tool_calls) {
    const selectedTool = tools.find((t) => t.name === call.name);
    if (!selectedTool || !call.id) continue;

    const output = await selectedTool.invoke(call.args);
    messages.push(
      new ToolMessage({
        content: output,
        tool_call_id: call.id,
      })
    );
  }

  const finalResponse = await modelWithTools.invoke(messages);
  console.log(finalResponse.content);
}

main();
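
To see the multi-tool flow do more than multiplication, here is a second tool you could add to the tools array. The name addTool is purely illustrative; any small, deterministic function follows the same pattern.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A second example tool; add it alongside multiplyTool in the tools array.
export const addTool = tool(
  async ({ a, b }) => String(a + b),
  {
    name: "add",
    description: "Add two numbers together.",
    schema: z.object({
      a: z.number().describe("First number"),
      b: z.number().describe("Second number"),
    }),
  }
);

// const tools = [multiplyTool, addTool];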

Testing It

Run the script with npx tsx your-file.ts, or compile it with tsc and run the output with Node. If everything is wired correctly, the assistant should answer using the tool's output instead of doing the arithmetic itself.
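
Assuming your entry file is named agent.ts (a placeholder; substitute your actual file name), the two options look like this. The dist/ path depends on the outDir in your tsconfig.

npx tsx agent.ts

npx tsc
node dist/agent.js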

Test two cases:

  • A prompt that clearly needs the tool, like “What is 12 times 9?”
  • A prompt that does not need a tool, like “Explain what multiplication means.”

If you get no tool_calls, check that:

  • Your API key is loaded
  • The model supports tools
  • You are calling bindTools(...) on the model before invoking it

Next Steps

  • Add a second tool, like a date lookup or weather API wrapper
  • Learn how to loop over multiple tool calls until the assistant returns a final answer (see the sketch after this list)
  • Move from manual orchestration to LangChain agents when you need more complex decision-making
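
As a starting point for that loop, here is a minimal sketch that keeps invoking tools until the model stops requesting them. It reuses tools and modelWithTools from Step 5 and caps the number of rounds, since an uncapped loop could run forever on a confused model.

import { BaseMessage, HumanMessage, ToolMessage } from "@langchain/core/messages";

async function runAgentLoop(question: string): Promise<string> {
  const messages: BaseMessage[] = [new HumanMessage(question)];

  // Cap the number of tool-calling rounds as a safety net.
  for (let round = 0; round < 5; round++) {
    const response = await modelWithTools.invoke(messages);
    messages.push(response);

    // No tool calls means the model has produced its final answer.
    if (!response.tool_calls?.length) {
      return String(response.content);
    }

    for (const call of response.tool_calls) {
      const selectedTool = tools.find((t) => t.name === call.name);
      if (!selectedTool || !call.id) continue;

      const output = await selectedTool.invoke(call.args);
      messages.push(new ToolMessage({ content: output, tool_call_id: call.id }));
    }
  }

  return "Stopped after too many tool-call rounds.";
}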

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
