LangGraph Tutorial (TypeScript): adding tool use for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to wire tool use into a LangGraph agent in TypeScript so the model can decide when to call external functions, execute them safely, and continue the conversation with the result. You need this when your agent has to do more than chat: fetch live data, query internal services, or perform deterministic actions before responding.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • @langchain/langgraph
  • @langchain/openai
  • @langchain/core
  • An OpenAI API key in OPENAI_API_KEY
  • A project set up with ESM or a build pipeline that supports modern imports
  • Basic familiarity with LangGraph nodes, edges, and state

Install the packages:

npm install @langchain/langgraph @langchain/openai @langchain/core

Step-by-Step

  1. Start by defining a typed state that carries messages through the graph. For tool use, you want the model’s messages plus any tool outputs returned by your executor.
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

export const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});
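To make the reducer's role concrete, here is a dependency-free sketch of how LangGraph merges a node's partial return into channel state. Plain strings stand in for BaseMessage objects, and the ChannelSpec type is illustrative, not LangGraph's internal representation:

```typescript
// Sketch of reducer-based state merging. Plain strings stand in for
// BaseMessage objects; this mirrors the concat reducer declared on
// AgentState.messages, not LangGraph's actual internals.
type Msg = string;

interface ChannelSpec<T> {
  reducer: (left: T, right: T) => T;
  default: () => T;
}

const messagesChannel: ChannelSpec<Msg[]> = {
  reducer: (left, right) => left.concat(right),
  default: () => [],
};

// Apply a sequence of partial node returns the way the graph would.
function applyUpdates(updates: Msg[][]): Msg[] {
  let state = messagesChannel.default();
  for (const update of updates) {
    state = messagesChannel.reducer(state, update);
  }
  return state;
}

const history = applyUpdates([
  ["human: hi"],
  ["ai: hello"],
  ["human: weather?"],
]);
console.log(history); // messages accumulate across turns instead of being overwritten
```

This is why each node can return only its new messages: the reducer concatenates rather than replaces.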
  2. Create a real tool and bind it to the chat model. The model needs access to the tool schema so it can emit tool calls instead of guessing.
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `Weather for ${city}: 22°C, clear skies`;
  },
  {
    name: "get_weather",
    description: "Get current weather for a city",
    schema: z.object({
      city: z.string().describe("The city name"),
    }),
  }
);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).bindTools([getWeather]);
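When the bound model decides to use a tool, the returned AIMessage carries a tool_calls array rather than a plain text answer. The sketch below shows the shape your routing code will inspect; the id value is illustrative, as real ids are assigned by the provider:

```typescript
// Shape of a tool-call entry on an AIMessage from a tool-bound model.
// The values here are illustrative, not real API output.
interface ToolCall {
  name: string;                   // must match the tool's registered name
  args: Record<string, unknown>;  // arguments parsed against the zod schema
  id?: string;                    // provider-assigned call id
}

const exampleToolCall: ToolCall = {
  name: "get_weather",
  args: { city: "Nairobi" },
  id: "call_abc123",
};

console.log(exampleToolCall.name, exampleToolCall.args);
```

The ToolNode you add later matches each entry's name against its registered tools and invokes the handler with args.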
  3. Add an agent node that sends the conversation to the model. The important detail is that the node returns only the new AI message; LangGraph will append it using your reducer.
import { AIMessage } from "@langchain/core/messages";

async function agentNode(state: typeof AgentState.State) {
  const response = await model.invoke(state.messages);
  return {
    messages: [response as AIMessage],
  };
}
  4. Add a tool node and route between the agent and tools based on whether the model requested a tool call. This is where LangGraph stops being a plain chat loop and becomes an execution graph.
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { END, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";

const tools = new ToolNode([getWeather]);

function shouldContinue(state: typeof AgentState.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  if (lastMessage instanceof AIMessage && lastMessage.tool_calls?.length) {
    return "tools";
  }
  return END;
}

const graph = new StateGraph(AgentState)
  .addNode("agent", agentNode)
  .addNode("tools", tools)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue, {
    tools: "tools",
    [END]: END,
  })
  .addEdge("tools", "agent");
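You can sanity-check the routing logic without any model calls. This dependency-free simulation of shouldContinue uses fake message objects in place of AIMessage and the string "__end__" in place of the END constant:

```typescript
// Dependency-free simulation of the shouldContinue router. Fake message
// objects stand in for AIMessage; "__end__" stands in for END.
const END_ROUTE = "__end__";

interface FakeMessage {
  role: "ai" | "tool" | "human";
  tool_calls?: { name: string }[];
}

function route(messages: FakeMessage[]): string {
  const last = messages[messages.length - 1];
  if (last.role === "ai" && last.tool_calls?.length) {
    return "tools";
  }
  return END_ROUTE;
}

// One turn that requests a tool, one that answers directly.
const wantsTool = route([{ role: "ai", tool_calls: [{ name: "get_weather" }] }]);
const done = route([{ role: "ai" }]);
console.log(wantsTool, done); // "tools" "__end__"
```

The same two branches drive the conditional edge map: "tools" loops through the executor and back to the agent, while END terminates the run.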
  5. Compile the graph and run it with a user question that should trigger tool use. If everything is wired correctly, you’ll see one model turn request the weather tool and a later turn answer with the tool result.
import { HumanMessage } from "@langchain/core/messages";

const app = graph.compile();

async function main() {
  const result = await app.invoke({
    messages: [new HumanMessage("What's the weather in Nairobi?")],
  });

  console.log(result.messages.map((m) => ({
    type: m._getType(),
    content: m.content,
    tool_calls: (m as any).tool_calls ?? [],
  })));
}

main().catch(console.error);

Testing It

Run the file with your TypeScript runtime or compile it first with tsc. The key signal is that the first assistant message contains a tool_calls entry for get_weather, followed by a tool message and then a final assistant answer.

Test one prompt that clearly requires the tool and one prompt that does not. For example, “What’s the weather in Nairobi?” should call the tool, while “Say hello” should usually return directly without any tool execution.

If you want to verify routing behavior more aggressively, log each node’s input and output before shipping to production. In real systems, this is where you catch malformed tool arguments, missing API keys, or models that are not actually bound to tools.
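One lightweight way to get that logging is to wrap each node function before registering it with addNode. This is a sketch under loose state types; adapt NodeFn to your actual AgentState, and note that toyNode is a stand-in, not part of the graph above:

```typescript
// Wrap a node function so its input state and output update are logged.
// The state type is deliberately loose here; adapt it to your AgentState.
type NodeFn<S> = (state: S) => Promise<Partial<S>>;

function withLogging<S>(name: string, node: NodeFn<S>): NodeFn<S> {
  return async (state: S) => {
    console.log(`[${name}] input:`, JSON.stringify(state));
    const update = await node(state);
    console.log(`[${name}] output:`, JSON.stringify(update));
    return update;
  };
}

// Usage with a toy node; in the real graph you would register
//   .addNode("agent", withLogging("agent", agentNode))
interface ToyState { messages: string[] }

const toyNode: NodeFn<ToyState> = async (s) => ({
  messages: [...s.messages, "ai: hi"],
});

withLogging("toy", toyNode)({ messages: ["human: hello"] }).then((update) => {
  console.log(update.messages?.length);
});
```

Because the wrapper has the same signature as the node, the graph wiring does not change; only the registration line does.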

Next Steps

  • Add multiple tools and use stronger routing rules for each domain action
  • Persist thread state with checkpointers so conversations survive restarts
  • Add guardrails around tools that touch internal APIs or customer data
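The checkpointer idea from the list above boils down to keying saved state by a thread id so separate conversations never mix. LangGraph ships real checkpointer implementations with a richer interface; this dependency-free sketch only illustrates the keying concept:

```typescript
// Conceptual sketch of thread-scoped checkpointing: state is keyed by a
// thread id so each conversation resumes independently. LangGraph's real
// checkpointer interface is richer; this shows only the keying idea.
interface ThreadState { messages: string[] }

class InMemoryCheckpointer {
  private store = new Map<string, ThreadState>();

  load(threadId: string): ThreadState {
    return this.store.get(threadId) ?? { messages: [] };
  }

  save(threadId: string, state: ThreadState): void {
    this.store.set(threadId, state);
  }
}

const saver = new InMemoryCheckpointer();
saver.save("thread-1", { messages: ["human: hi", "ai: hello"] });
saver.save("thread-2", { messages: ["human: weather?"] });

console.log(saver.load("thread-1").messages.length); // 2
console.log(saver.load("thread-2").messages.length); // 1
```

An unknown thread id starts from the channel default, which is exactly the restart-survival behavior the checkpointer bullet describes.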

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
