LangGraph Tutorial (TypeScript): building custom tools for intermediate developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to build custom tools in LangGraph with TypeScript, wire them into an agent loop, and keep the tool layer clean enough for production. You need this when your agent must do more than chat: call internal services, validate inputs, and return structured results your app can trust.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • A LangGraph TypeScript project
  • @langchain/langgraph
  • @langchain/openai
  • @langchain/core
  • An OpenAI API key in OPENAI_API_KEY
  • Optional: zod if you want stricter runtime validation for tool inputs
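If you would rather not pull in zod yet, a hand-rolled type guard can enforce the same tool-input shape at runtime. A minimal sketch (the function name is my own, not part of LangGraph):

```typescript
type CustomerLookupInput = {
  customerId: string;
};

// Runtime guard: returns true only when the value matches CustomerLookupInput
// and satisfies the minimum-length rule used later in this tutorial.
function isCustomerLookupInput(value: unknown): value is CustomerLookupInput {
  if (typeof value !== "object" || value === null) return false;
  const candidate = value as Record<string, unknown>;
  return (
    typeof candidate.customerId === "string" &&
    candidate.customerId.length >= 3
  );
}
```

zod's schema parsing gives you the same check plus detailed error messages, so prefer it once your input shapes grow beyond a field or two.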

Step-by-Step

  1. Start with a minimal LangGraph setup that uses a chat model and a typed state. The key idea is that tools are just functions with a schema and a predictable return shape.
import { ChatOpenAI } from "@langchain/openai";
import { Annotation, END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage, HumanMessage, ToolMessage } from "@langchain/core/messages";

const State = Annotation.Root({
  messages: Annotation<(HumanMessage | AIMessage | ToolMessage)[]>({
    default: () => [],
    reducer: (left, right) => left.concat(right),
  }),
});

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
  2. Define a custom tool as a plain async function with explicit input validation. For intermediate work, keep the tool deterministic and return JSON-friendly data so the model can consume it cleanly.
type CustomerLookupInput = {
  customerId: string;
};

async function lookupCustomer(input: CustomerLookupInput) {
  if (!input.customerId || input.customerId.length < 3) {
    throw new Error("customerId must be at least 3 characters");
  }

  return {
    customerId: input.customerId,
    status: "active",
    tier: "gold",
    openClaims: 2,
  };
}
  3. Wrap the tool in a node that reads the latest user message, decides whether to call the tool, and appends the result back into state. This keeps your graph explicit instead of hiding business logic inside prompts.
async function toolNode(state: typeof State.State) {
  const last = state.messages[state.messages.length - 1];

  if (!(last instanceof HumanMessage)) {
    return { messages: [] };
  }

  const text = last.content.toString();
  const match = text.match(/customer\s+(\w+)/i);

  if (!match) {
    return {
      messages: [
        new AIMessage("I need a customer ID. Try: 'lookup customer C123'"),
      ],
    };
  }

  const result = await lookupCustomer({ customerId: match[1] });

  return {
    messages: [
      new AIMessage(
        `Customer ${result.customerId} is ${result.status}, tier ${result.tier}, open claims ${result.openClaims}.`
      ),
    ],
  };
}
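The regex step inside toolNode is the piece most worth unit-testing in isolation. One way to do that is to pull it out as a pure helper (parseCustomerId is a name I'm introducing, not a LangGraph API):

```typescript
// Extracts a customer ID like "C123" from free text, or null if none is found.
// Mirrors the pattern used in toolNode: "customer" followed by a word token.
function parseCustomerId(text: string): string | null {
  const match = text.match(/customer\s+(\w+)/i);
  return match ? match[1] : null;
}
```

toolNode can then call parseCustomerId(text) and keep its branching logic unchanged, while your tests hit the parser directly without touching the model or the graph.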
  4. Build the graph and connect start to the tool node, then end the flow after the response is produced. For simple custom-tool workflows, this is enough; for multi-step agents, you can add conditional routing later.
const graph = new StateGraph(State)
  .addNode("toolNode", toolNode)
  .addEdge(START, "toolNode")
  .addEdge("toolNode", END)
  .compile();
  5. Invoke the graph with a real input message and print the final response. In production code, this is where you would attach request IDs, tracing, and auth context before calling downstream systems.
async function main() {
  const result = await graph.invoke({
    messages: [new HumanMessage("lookup customer C123")],
  });

  const last = result.messages[result.messages.length - 1];
  console.log(last.content.toString());
}

main().catch(console.error);

Testing It

Run the file with tsx, or compile it with tsc and run the output with Node. If everything is wired correctly, sending 'lookup customer C123' should print 'Customer C123 is active, tier gold, open claims 2.'

Test two failure cases next:

  • Send a message without a customer ID and confirm you get the fallback prompt.
  • Send an invalid ID like 'lookup customer x' and confirm your validation path rejects it; with the wiring above, the thrown error propagates out of graph.invoke and lands in main's catch handler.

If you want to verify behavior at the graph level, log state.messages inside each node and confirm the reducer appends new messages after existing ones, preserving order within each invocation.
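Because the reducer defined in State is plain array concatenation, its ordering guarantee can be checked without running the model at all. A standalone sketch:

```typescript
// Mirrors the messages reducer from the State definition: append-only concat.
const reducer = <T>(left: T[], right: T[]): T[] => left.concat(right);

// Simulate two node updates landing in sequence.
const afterFirst = reducer(
  ["human: lookup customer C123"],
  ["ai: Customer C123 is active"]
);
const afterSecond = reducer(afterFirst, ["ai: anything else?"]);
// Earlier messages always stay ahead of later ones.
```

If the ordering ever looks wrong in your real graph, the bug is almost certainly in a node returning stale state rather than in the reducer itself.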

Next Steps

  • Replace the regex parser with real tool schemas using zod or LangChain tool definitions.
  • Add conditional edges so the graph can decide when to call tools versus answer directly.
  • Wrap your custom tool with retry logic and observability before connecting it to internal APIs.
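For the last bullet, a generic retry wrapper is enough to start with. A sketch under the assumption that your tool errors are safe to retry (withRetry is my name, not a LangGraph utility):

```typescript
// Retries an async function up to `attempts` times with a fixed delay between tries.
// Rethrows the last error if every attempt fails.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```

Wrap the call site in toolNode as `await withRetry(() => lookupCustomer({ customerId }))`. Before shipping this, gate the retry on the error type: validation errors like the customerId length check are deterministic and should fail fast, while network errors are the retryable case.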

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
