LangGraph Tutorial (TypeScript): adding tool use for beginners

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to add tool use to a LangGraph agent in TypeScript so the model can decide when to call a function, inspect the result, and continue the conversation. You need this when you want your agent to do more than chat — for example, look up weather, query a database, or fetch account data before answering.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or tsx
  • These packages:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • zod
  • An OpenAI API key set as OPENAI_API_KEY
  • A basic LangGraph setup already working
  • A terminal that can run TypeScript files

Step-by-Step

  1. Start with a minimal graph that can call tools. The key pieces are a model that supports tool binding and a tool node that executes whatever the model requests.
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { StateGraph, START, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { AIMessage, HumanMessage, BaseMessage } from "@langchain/core/messages";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `The weather in ${city} is sunny and 24°C.`;
  },
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    schema: z.object({
      city: z.string().describe("The city name"),
    }),
  }
);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).bindTools([getWeather]);

type GraphState = {
  messages: BaseMessage[];
};
  2. Build the agent node that sends messages to the model. The node reads the conversation state and returns the model's response; the messages reducer defined on the graph appends it to the history.
async function callModel(state: GraphState): Promise<Partial<GraphState>> {
  const response = await model.invoke(state.messages);
  // Return only the new message. The messages reducer concatenates it onto
  // the existing state, so spreading state.messages here would duplicate
  // the entire history on every turn.
  return {
    messages: [response],
  };
}
  3. Add a router that checks whether the model asked for a tool. If there is a tool call, send execution to the tool node; otherwise end the graph.
function shouldContinue(state: GraphState): "tools" | typeof END {
  const lastMessage = state.messages[state.messages.length - 1];

  if (lastMessage instanceof AIMessage && lastMessage.tool_calls?.length) {
    return "tools";
  }

  return END;
}

const toolNode = new ToolNode([getWeather]);
  4. Wire the nodes together in a loop. The graph runs the model first, then optionally runs tools, then sends the updated messages back to the model until no more tool calls remain.
const graph = new StateGraph<GraphState>({
  channels: {
    messages: {
      value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
      default: () => [],
    },
  },
})
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue, {
    tools: "tools",
    [END]: END,
  })
  .addEdge("tools", "agent")
  .compile();
  5. Run it with a user question that should trigger the tool. The important part is that you pass an initial human message and print every message so you can see the full loop.
async function main() {
  const result = await graph.invoke({
    messages: [new HumanMessage("What's the weather in Nairobi?")],
  });

  for (const message of result.messages) {
    console.log(`${message._getType()}: ${message.content}`);
    if (message instanceof AIMessage && message.tool_calls?.length) {
      console.log("tool calls:", JSON.stringify(message.tool_calls, null, 2));
    }
  }
}

main().catch(console.error);

Testing It

Run the file with your preferred TypeScript runner:

npx tsx src/index.ts

If everything is wired correctly, you should see at least one assistant message with a tool_calls array, followed by a tool result message, then a final assistant answer using that result. If you ask something that does not require a tool, the graph should stop after one assistant response.

A good test is to ask both kinds of questions:

  • “What’s the weather in Nairobi?”
  • “Explain what LangGraph is in one sentence.”

The first should trigger get_weather. The second should return directly without calling tools.

Next Steps

  • Add multiple tools and let the model choose between them
  • Replace the fake weather function with a real HTTP API call
  • Add memory or persistence so conversations survive across requests

By Cyprian Aarons, AI Consultant at Topiax.