LangChain Tutorial (TypeScript): building custom tools for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build a custom LangChain tool in TypeScript, wire it into an agent, and make it safe enough for real workflows. You need this when the built-in tools stop being enough and you want the model to call your own business logic, internal APIs, or guarded data access layer.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • npm or pnpm
  • An OpenAI API key
  • Packages:
    • langchain
    • @langchain/openai
    • zod
    • dotenv
  • A .env file with:
    • OPENAI_API_KEY=...

Step-by-Step

  1. Start with a clean project and install the dependencies. I’m using ESM-style imports because that’s where LangChain TypeScript is most predictable right now. Since the later steps use top-level await and .js-suffixed imports, also set "type": "module" in your package.json after npm init.
mkdir langchain-custom-tools
cd langchain-custom-tools
npm init -y
npm install langchain @langchain/openai zod dotenv
npm install -D typescript tsx @types/node
npx tsc --init
  2. Create a typed custom tool using DynamicStructuredTool. This is the right choice when your tool needs input validation and you want the model to pass structured arguments instead of a raw string.
import "dotenv/config";
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

export const customerLookupTool = new DynamicStructuredTool({
  name: "customer_lookup",
  description: "Look up a customer by account id and return their risk tier.",
  schema: z.object({
    accountId: z.string().min(3),
  }),
  func: async ({ accountId }) => {
    const mockDb = {
      ACCT1001: { name: "Amina", riskTier: "low" },
      ACCT2002: { name: "Jon", riskTier: "medium" },
    };

    const record = mockDb[accountId as keyof typeof mockDb];
    return record
      ? JSON.stringify({ accountId, ...record })
      : JSON.stringify({ error: "Customer not found" });
  },
});
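Because func is plain async TypeScript, the lookup logic is easy to unit-test before any agent or API key is involved. A minimal sketch with a standalone copy of the mock data (lookupCustomer is a hypothetical helper extracted for testing, not part of the tool above):

```typescript
// Hypothetical standalone version of the tool's lookup logic, so it can be
// unit-tested without an agent, an LLM, or any network access.
type CustomerRecord = { name: string; riskTier: "low" | "medium" | "high" };

const mockDb: Record<string, CustomerRecord> = {
  ACCT1001: { name: "Amina", riskTier: "low" },
  ACCT2002: { name: "Jon", riskTier: "medium" },
};

export function lookupCustomer(accountId: string): string {
  const record = mockDb[accountId];
  // Mirror the tool: always return a JSON string, including for misses.
  return record
    ? JSON.stringify({ accountId, ...record })
    : JSON.stringify({ error: "Customer not found" });
}

console.log(lookupCustomer("ACCT1001")); // {"accountId":"ACCT1001","name":"Amina","riskTier":"low"}
```

Keeping the business logic in a plain function like this and calling it from func means your test suite never has to spin up the agent at all.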
  3. Add a second tool that does something operational, not just lookup. Advanced agents are much more useful when they can call tools that mutate state, but you should keep the interface narrow and deterministic.
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

export const caseNoteTool = new DynamicStructuredTool({
  name: "add_case_note",
  description: "Append a note to a support case.",
  schema: z.object({
    caseId: z.string().min(3),
    note: z.string().min(10),
  }),
  func: async ({ caseId, note }) => {
    const timestamp = new Date().toISOString();
    return JSON.stringify({
      caseId,
      status: "saved",
      timestamp,
      note,
    });
  },
});
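Keeping the interface narrow also means bounding what the model can write before it hits storage. One way to do that, as a sketch (normalizeNote and the 500-character cap are assumptions, not a LangChain API):

```typescript
// Hypothetical helper: normalize and bound a note before persisting it, so the
// agent cannot write arbitrarily large or whitespace-padded payloads.
export function normalizeNote(note: string, maxLength = 500): string {
  // Collapse internal whitespace and trim the edges.
  const trimmed = note.trim().replace(/\s+/g, " ");
  // Hard cap the length; truncation is deterministic and easy to reason about.
  return trimmed.length > maxLength ? trimmed.slice(0, maxLength) : trimmed;
}
```

You would call this inside func before building the saved record, so the tool's output always reflects exactly what was stored.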
  4. Wire both tools into a chat model and create an agent executor. This is where LangChain starts doing real work: the model decides when to call your tools and how to use their outputs in the final answer.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { customerLookupTool } from "./customerLookupTool.js";
import { caseNoteTool } from "./caseNoteTool.js";

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a support operations assistant. Use tools when needed."],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools: [customerLookupTool, caseNoteTool],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [customerLookupTool, caseNoteTool],
});

const result = await executor.invoke({
  input: "Look up account ACCT1001 and add a note saying the customer requested a callback tomorrow morning.",
});

console.log(result.output);
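Both tools return JSON strings rather than objects, so anything that consumes their output outside the agent loop, including your tests, should parse defensively. A sketch (parseToolOutput is a hypothetical helper, not part of LangChain):

```typescript
// Hypothetical helper: parse a tool's JSON string output defensively, since
// the tools in this tutorial return JSON.stringify(...) rather than objects.
export function parseToolOutput<T>(raw: string): T | { error: string } {
  try {
    return JSON.parse(raw) as T;
  } catch {
    // Keep a truncated copy of the raw output for debugging.
    return { error: `Unparseable tool output: ${raw.slice(0, 80)}` };
  }
}
```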
  5. Run it as an executable script and keep the tool contract explicit. If you let the tool accept vague inputs, your agent will eventually produce garbage arguments and you’ll spend time debugging prompt behavior instead of business logic.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { customerLookupTool } from "./customerLookupTool.js";
import { caseNoteTool } from "./caseNoteTool.js";

async function main() {
  const llm = new ChatOpenAI({ modelName: "gpt-4o-mini", temperature: 0 });

  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a support operations assistant. Use tools when needed."],
    ["human", "{input}"],
    new MessagesPlaceholder("agent_scratchpad"),
  ]);

  const agent = await createOpenAIFunctionsAgent({
    llm,
    tools: [customerLookupTool, caseNoteTool],
    prompt,
  });

  const executor = new AgentExecutor({ agent, tools: [customerLookupTool, caseNoteTool] });

  const result = await executor.invoke({
    input: "Look up account ACCT2002 and save a short follow-up note.",
  });

  console.log(result.output);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

Testing It

Run the script with npx tsx src/index.ts (adjust the path to wherever you saved the entry file) after setting OPENAI_API_KEY in your environment. You should see the agent call one or both tools and return a natural-language response based on their outputs.

Test three cases:

  • A valid lookup like ACCT1001
  • An unknown account id to verify error handling
  • A request that only needs one tool so you can confirm the agent doesn’t over-call

If the model returns malformed tool arguments, tighten the Zod schema and make the description more specific. If it ignores the tool entirely, increase prompt clarity and keep temperature at 0.

Next Steps

  • Add auth checks inside each tool before touching internal APIs or databases.
  • Replace mock data with real service calls through a repository layer.
  • Learn how to build multi-step agents with memory and structured outputs.
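The first bullet can be sketched as a guard that runs at the top of func before any data access; the role name and error shape here are assumptions, not a LangChain convention:

```typescript
// Hypothetical auth guard: verify the caller's role before the tool touches
// any real data. The "support:read" role name is an illustrative assumption.
type Caller = { id: string; roles: string[] };

export function assertCanLookup(caller: Caller): void {
  if (!caller.roles.includes("support:read")) {
    throw new Error(`Caller ${caller.id} is not allowed to look up customers`);
  }
}
```

Throwing early keeps the failure inside the tool, where the agent sees a clean error instead of partial data.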

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
