LangGraph Tutorial (TypeScript): adding tool use for intermediate developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to add tool use to a LangGraph agent in TypeScript, end to end. You need this when your graph should do more than chat: fetch live data, query internal systems, or call deterministic functions before deciding the next step.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build step
  • These packages:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • zod
    • dotenv
  • An OpenAI API key in .env:
    • OPENAI_API_KEY=...

Step-by-Step

  1. Install the dependencies and set up your environment. Keep this boring and explicit; most tool-use bugs come from bad package versions or missing env vars.
npm install @langchain/langgraph @langchain/openai @langchain/core zod dotenv
npm install -D typescript ts-node @types/node
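Since missing env vars are such a common failure mode, a small dependency-free guard can run before anything else. This is a sketch; `requireEnv` is a hypothetical helper, not part of LangGraph:

```typescript
// Hypothetical helper: fail fast when a required env var is missing,
// instead of getting an opaque auth error from the model call later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set; add it to .env`);
  }
  return value;
}

// Usage, before constructing the model:
// const apiKey = requireEnv("OPENAI_API_KEY");
```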
  2. Define a real tool and the agent state. The tool should be deterministic and typed, because LangGraph will route model output into it repeatedly.
import "dotenv/config";
import { z } from "zod";
import { tool } from "@langchain/core/tools";

export const getWeather = tool(
  async ({ city }) => {
    return `Weather in ${city}: 22°C, partly cloudy`;
  },
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    schema: z.object({
      city: z.string().describe("The city name"),
    }),
  }
);

import type { BaseMessage } from "@langchain/core/messages";

export type AgentState = {
  // Use BaseMessage[] rather than any[] so tool calls and tool results
  // stay typed as they flow through the graph.
  messages: BaseMessage[];
};
  3. Build the model node and bind tools to it. In LangGraph, the model does not call tools by magic; you bind them explicitly so the assistant can emit tool calls in the right format.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).bindTools([getWeather]);

export async function callModel(state: AgentState) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const initialState: AgentState = {
  messages: [new HumanMessage("What's the weather in Nairobi?")],
};
  4. Add a tool execution node and wire the graph with conditional routing. The key pattern is simple: if the model requests a tool, route to a tool node; otherwise, finish.
import { StateGraph, START, END, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { AIMessage } from "@langchain/core/messages";

const toolNode = new ToolNode([getWeather]);

function shouldContinue(state: typeof MessagesAnnotation.State) {
  // Only AI messages carry tool_calls, so narrow the BaseMessage type first.
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
  return lastMessage.tool_calls?.length ? "tools" : END;
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent");
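To see the routing rule in isolation, here is a dependency-free sketch of the same predicate over mock message shapes. The real check runs on AIMessage objects, which carry many more fields, and `"__end__"` merely stands in for LangGraph's END sentinel:

```typescript
// Mock of the one field the router cares about.
type MockMessage = {
  tool_calls?: { name: string; args: Record<string, unknown> }[];
};

// Stand-in for LangGraph's END sentinel.
const MOCK_END = "__end__";

function route(messages: MockMessage[]): string {
  const last = messages[messages.length - 1];
  return last?.tool_calls?.length ? "tools" : MOCK_END;
}

// A tool request routes to the tool node...
console.log(route([{ tool_calls: [{ name: "get_weather", args: { city: "Nairobi" } }] }])); // "tools"
// ...a plain reply ends the run.
console.log(route([{}])); // "__end__"
```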
  5. Compile the graph and run it. This loop is what gives you intermediate-agent behavior: the model can think, ask for a tool, read the result, then respond again.
const app = workflow.compile();

const result = await app.invoke(initialState);

for (const message of result.messages) {
  if (message._getType?.() === "human") console.log(`Human: ${message.content}`);
  if (message._getType?.() === "ai") console.log(`AI: ${message.content}`);
}
  6. Put it all together in one file and run it with Node/ts-node. This version is complete and executable as written.
import "dotenv/config";
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { StateGraph, START, END, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";

const getWeather = tool(
  async ({ city }) => `Weather in ${city}: 22°C, partly cloudy`,
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    schema: z.object({ city: z.string() }),
  }
);

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 }).bindTools([getWeather]);
const toolNode = new ToolNode([getWeather]);

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

function shouldContinue(state: typeof MessagesAnnotation.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  // Only AI messages carry tool_calls; check the field structurally so
  // the base message type doesn't cause a compile error.
  const toolCalls = (lastMessage as { tool_calls?: unknown[] } | undefined)?.tool_calls;
  return toolCalls?.length ? "tools" : END;
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent");

const app = workflow.compile();

// Wrap the run in an async main so this works without top-level await,
// which requires ESM configuration under ts-node.
async function main() {
  const result = await app.invoke({
    messages: [new HumanMessage("What's the weather in Nairobi?")],
  });

  console.log(
    result.messages.map((m) => `${m._getType?.()}: ${m.content}`).join("\n")
  );
}

main().catch(console.error);

Testing It

Run the script and watch for two things in the output: first an AI message with a tool call, then a second AI message that uses the weather result. If you only get one response with no tool usage, your model binding or routing condition is wrong.
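That shape can also be checked programmatically. Below is a hedged sketch over simplified message shapes; `role` and `tool_calls` mirror LangChain's fields, but these types and the `usedTools` helper are mocks for illustration:

```typescript
type TranscriptMessage = {
  role: "human" | "ai" | "tool";
  tool_calls?: unknown[];
};

// True when the transcript shows the full tool-use loop: an AI message
// that requests a tool, a tool result, and a final AI answer with no
// pending tool calls.
function usedTools(messages: TranscriptMessage[]): boolean {
  const requested = messages.some(
    (m) => m.role === "ai" && (m.tool_calls?.length ?? 0) > 0
  );
  const ranTool = messages.some((m) => m.role === "tool");
  const last = messages[messages.length - 1];
  const answered = last?.role === "ai" && (last.tool_calls?.length ?? 0) === 0;
  return requested && ranTool && answered;
}
```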

Test with prompts that clearly require external data:

  • “What’s the weather in Nairobi?”
  • “Look up stock price for X” if you swap in a finance tool
  • “Calculate tax on $12,500” if you add a calculator-style tool

If it fails at runtime:

  • Check OPENAI_API_KEY
  • Confirm your installed package versions support bindTools
  • Make sure your conditional edge checks tool_calls, not plain text

Next Steps

  • Add multiple tools and let the model choose between them.
  • Replace the mock weather function with a real HTTP client.
  • Add state fields for user ID, request ID, and audit logging before deploying this in production.
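For the first of those steps, the core of a second tool is just another deterministic, typed function. A dependency-free sketch of a hypothetical add_numbers handler; in the graph you would wrap it with tool(...) and a zod schema exactly like get_weather, then pass both tools to bindTools([...]) and new ToolNode([...]):

```typescript
// Hypothetical handler for an add_numbers tool. Tool handlers return
// strings, since results are fed back to the model as text.
async function addNumbers({ a, b }: { a: number; b: number }): Promise<string> {
  return String(a + b);
}
```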

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
