LangChain Tutorial (TypeScript): adding tool use for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add tool use to a LangChain TypeScript agent so it can call real functions instead of guessing. You need this when your assistant must fetch data, run business logic, or interact with internal systems before answering.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build step
  • langchain
  • @langchain/openai
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with async/await and LangChain chat models

Step-by-Step

  1. Install the packages and set up your environment.
    Use a clean TypeScript project so you can run the example without extra wiring.
npm install langchain @langchain/openai zod dotenv
npm install -D typescript ts-node @types/node
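If the project doesn't have a TypeScript config yet, a minimal tsconfig.json along these lines is enough for this tutorial (the exact settings are a suggestion, not a requirement):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true
  }
}
```

Because the agent-creation code later in this tutorial uses top-level await, either run the file as an ES module (set "type": "module" in package.json) or wrap the example in an async main() function.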
  2. Create a chat model and define a tool with a strict schema.
    Tools work best when the model gets clear input boundaries, so use zod for validation.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getPolicyStatus = tool(
  async ({ policyId }) => {
    const policies: Record<string, string> = {
      "POL-1001": "active",
      "POL-2002": "pending renewal",
      "POL-3003": "cancelled",
    };

    return policies[policyId] ?? "policy not found";
  },
  {
    name: "get_policy_status",
    description: "Look up the current status of an insurance policy by policy ID.",
    schema: z.object({
      policyId: z.string().describe("The insurance policy ID, like POL-1001"),
    }),
  }
);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
  3. Bind the tool to the model and create an agent executor.
    This is the important part: without binding, the model can mention tools but cannot actually call them.
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

const tools = [getPolicyStatus];

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a support assistant. Use tools when needed."],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools,
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools,
});
  4. Send a question that requires the tool and print the result.
    The agent should decide to call get_policy_status, then use the returned value in its final answer.
const result = await executor.invoke({
  input: "What is the status of policy POL-1001?",
});

console.log(result.output);
  5. Add a second tool so you can see routing behavior clearly.
    Real agents usually need more than one tool, and this is where you start seeing practical value.
const calculatePremium = tool(
  async ({ age, coverageAmount }) => {
    const baseRate = age < 30 ? 0.02 : age < 50 ? 0.03 : 0.05;
    const premium = Math.round(coverageAmount * baseRate);
    return `Estimated monthly premium: $${premium}`;
  },
  {
    name: "calculate_premium",
    description: "Estimate an insurance premium from age and coverage amount.",
    schema: z.object({
      age: z.number().int().min(18).max(100),
      coverageAmount: z.number().positive(),
    }),
  }
);

tools.push(calculatePremium);

// Recreate the agent and executor: the original agent was bound to the tool
// list at creation time, so pushing to the array alone is not enough.
const agentWithBothTools = await createOpenAIFunctionsAgent({
  llm: model,
  tools,
  prompt,
});

const executorWithBothTools = new AgentExecutor({
  agent: agentWithBothTools,
  tools,
});

Testing It

Run the script with npx ts-node your-file.ts and ask for something that clearly maps to one of the tools, like a policy lookup or premium estimate. If the setup is correct, you’ll see a natural language answer that reflects real tool output instead of a generic guess.

Try inputs that should not hit a tool, such as “Explain what policy status means,” and confirm the model answers directly. Then try malformed inputs like “policy abc” or missing numbers for premium calculation to verify your schema is enforcing structure.

If you want extra confidence, log inside each tool function so you can see exactly when it gets called. In production, that same pattern helps with audit trails and debugging agent behavior.
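One lightweight way to do that is a generic wrapper around the tool's implementation function before you hand it to tool(...). Note that withCallLogging below is a hypothetical helper sketched for this tutorial, not a LangChain API:

```typescript
// Hypothetical helper: wraps an async tool implementation and logs each call.
function withCallLogging<T, R>(
  name: string,
  fn: (input: T) => Promise<R>
): (input: T) => Promise<R> {
  return async (input: T): Promise<R> => {
    console.log(`[tool:${name}] called with ${JSON.stringify(input)}`);
    const result = await fn(input);
    console.log(`[tool:${name}] returned ${JSON.stringify(result)}`);
    return result;
  };
}

// Wrap the implementation, then pass loggedLookup to tool(...) as before.
const loggedLookup = withCallLogging(
  "get_policy_status",
  async ({ policyId }: { policyId: string }) => {
    const policies: Record<string, string> = { "POL-1001": "active" };
    return policies[policyId] ?? "policy not found";
  }
);

// loggedLookup({ policyId: "POL-1001" }) resolves to "active" and logs both lines.
```

Because the wrapper only touches the implementation function, the tool's name, description, and schema stay exactly as the model sees them.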

Next Steps

  • Add memory only after you understand tool routing; otherwise debugging gets messy fast.
  • Replace toy functions with real APIs from your billing, CRM, or claims systems.
  • Learn structured output next so you can combine tool use with strict JSON responses for downstream services.

By Cyprian Aarons, AI Consultant at Topiax.
