LangGraph Tutorial (TypeScript): building custom tools for beginners
This tutorial shows you how to build a LangGraph agent in TypeScript that calls a custom tool, returns structured results, and keeps the tool logic isolated from the graph. You need this when the model's answer alone is not enough and you want deterministic behavior for things like lookups, validation, formatting, or API calls.
What You'll Need
- Node.js 18+
- TypeScript 5+
- @langchain/langgraph
- @langchain/openai
- @langchain/core
- An OpenAI API key set as OPENAI_API_KEY
- A project initialized with ESM support
- Basic familiarity with async/await and TypeScript types
Install the packages:
npm install @langchain/langgraph @langchain/openai @langchain/core zod
Step-by-Step
- Create a small project setup with a typed state and a simple custom tool.
The tool should do one thing well. For beginners, a good example is a policy lookup helper that turns a policy number into a deterministic result.
import { z } from "zod";
import { tool } from "@langchain/core/tools";

export const lookupPolicyTool = tool(
  async ({ policyNumber }: { policyNumber: string }) => {
    const mockDatabase: Record<string, { status: string; plan: string }> = {
      "POL-1001": { status: "active", plan: "premium" },
      "POL-1002": { status: "lapsed", plan: "basic" },
    };
    // Return a string so the result travels back to the model as tool-message text.
    return JSON.stringify(
      mockDatabase[policyNumber] ?? { status: "not_found", plan: "unknown" }
    );
  },
  {
    name: "lookup_policy",
    description: "Look up an insurance policy by policy number.",
    schema: z.object({
      policyNumber: z.string().describe("The customer's policy number"),
    }),
  }
);
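The heart of this tool is a deterministic lookup with a safe fallback via the nullish-coalescing operator. Stripped of the LangChain wrapper, the same logic looks like this (the names here are illustrative, not part of the library):

```typescript
// Plain-TypeScript sketch of the tool's lookup logic, without the LangChain wrapper.
type PolicyRecord = { status: string; plan: string };

const mockDatabase: Record<string, PolicyRecord> = {
  "POL-1001": { status: "active", plan: "premium" },
  "POL-1002": { status: "lapsed", plan: "basic" },
};

// Unknown policy numbers fall back to a well-defined "not_found" record
// instead of returning undefined.
function lookupPolicy(policyNumber: string): PolicyRecord {
  return mockDatabase[policyNumber] ?? { status: "not_found", plan: "unknown" };
}
```

Because the fallback lives inside the function, callers never handle undefined; the agent always gets a well-formed record to reason about, whether the policy exists or not.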
- Build the model node and bind the tool to the chat model.
LangGraph works best when the model can decide whether to call the tool or answer directly. Binding tools gives the model access to your custom function in a controlled way.
import { ChatOpenAI } from "@langchain/openai";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const modelWithTools = model.bindTools([lookupPolicyTool]);

const response = await modelWithTools.invoke([
  new HumanMessage("Check policy POL-1001 and tell me the status."),
]);
console.log(response);
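When the model decides to use the tool, the logged response carries a tool_calls array rather than a final text answer. A simplified, dependency-free model of that shape (the real AIMessage type lives in @langchain/core; these interfaces are illustrative stand-ins) shows what the routing logic later inspects:

```typescript
// Simplified stand-in for the tool-calling fields on an AIMessage.
interface ToolCall {
  name: string;                  // which tool the model wants to run
  args: Record<string, unknown>; // arguments matching the tool's zod schema
  id?: string;                   // correlates the call with its tool result
}

interface AIMessageLike {
  content: string;
  tool_calls?: ToolCall[];
}

// A tool-calling turn typically has empty content and populated tool_calls.
const toolCallTurn: AIMessageLike = {
  content: "",
  tool_calls: [
    { name: "lookup_policy", args: { policyNumber: "POL-1001" }, id: "call_1" },
  ],
};

const wantsTool = (toolCallTurn.tool_calls?.length ?? 0) > 0;
```

If the model answers directly instead, content holds the text and tool_calls is empty or absent; that distinction is exactly what drives the graph's branching in the next step.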
- Define graph state and wire the agent loop with a tool node.
The graph needs to store messages so the model can see prior context after each tool call. The ToolNode executes any requested tools and feeds results back into the conversation.
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";

const toolNode = new ToolNode([lookupPolicyTool]);

async function callModel(state: typeof MessagesAnnotation.State) {
  const result = await modelWithTools.invoke(state.messages);
  return { messages: [result] };
}

function shouldContinue(state: typeof MessagesAnnotation.State) {
  // Only AI messages carry tool calls, so treat the last message as one.
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
  if (lastMessage.tool_calls?.length) {
    return "tools";
  }
  return "__end__";
}
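That conditional is the entire control flow: if the last message requests tools, route to the tool node; otherwise stop. The same decision, written against plain objects so you can reason about it without the library (the message shape here is an assumption, not the real LangChain type):

```typescript
// Dependency-free sketch of the routing decision in shouldContinue.
type MessageLike = { tool_calls?: { name: string }[] };

function routeAfterModel(messages: MessageLike[]): "tools" | "__end__" {
  const last = messages[messages.length - 1];
  // Route to the tool node only when the latest message requests tool calls.
  return last?.tool_calls?.length ? "tools" : "__end__";
}
```

An empty conversation, a plain text answer, and a tool-calling turn all fall out of the same two-branch check, which is why the graph needs no extra bookkeeping to know when it is done.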
- Add nodes, edges, and compile the graph.
This is the part beginners usually miss: you need a loop from model to tools and back to model until no more tool calls are requested.
const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue, {
    tools: "tools",
    __end__: "__end__",
  })
  .addEdge("tools", "agent");

export const app = workflow.compile();
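The compiled graph's control flow can be simulated with a plain loop, which makes the agent-to-tools-and-back cycle explicit. This is a hedged sketch of the behavior, not LangGraph internals:

```typescript
// Simulated trace of the agent loop: the agent requests a tool once,
// the tool node runs it, then the agent produces a final answer.
type Turn = { from: "agent" | "tools"; wantsTool: boolean };

function simulateGraph(maxSteps = 10): Turn[] {
  const trace: Turn[] = [];
  let toolAlreadyUsed = false;
  for (let step = 0; step < maxSteps; step++) {
    // "agent" node: decide whether a tool call is still needed.
    const wantsTool = !toolAlreadyUsed;
    trace.push({ from: "agent", wantsTool });
    if (!wantsTool) break; // shouldContinue returned "__end__"
    // "tools" node: execute the requested tool, then loop back to the agent.
    trace.push({ from: "tools", wantsTool: false });
    toolAlreadyUsed = true;
  }
  return trace;
}
```

The maxSteps guard mirrors a practice worth adopting in real graphs too: without a recursion limit, a model that keeps requesting tools would loop forever.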
- Run the graph with an input message and inspect the final output.
The graph will first let the model decide whether it needs the tool, then execute it, then let the model respond using the returned data.
const finalState = await app.invoke({
  messages: [new HumanMessage("What is the status of policy POL-1002?")],
});

const lastMessage = finalState.messages[finalState.messages.length - 1];
console.log(lastMessage.content);
Testing It
Run the file with npx tsx your-file.ts or compile it with tsc if your project is already set up for TypeScript execution. Test at least two inputs: one valid policy number like POL-1001 and one missing value like POL-9999. The first should trigger a useful lookup result, while the second should produce a not-found response through the same tool path.
If you want to confirm tool execution, log inside lookupPolicyTool or inspect intermediate messages after each node run. In production, I also recommend testing malformed inputs so you can verify your schema rejects bad requests before they hit downstream systems.
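For the malformed-input case, a format check can reject bad policy numbers before the lookup ever runs. The pattern below assumes a hypothetical format ("POL-" plus four digits, matching the tutorial's examples); adjust it to your real scheme:

```typescript
// Hypothetical policy-number format: "POL-" followed by exactly four digits.
const POLICY_NUMBER_PATTERN = /^POL-\d{4}$/;

function isValidPolicyNumber(input: string): boolean {
  // Trim whitespace so copy-pasted values from support tickets still match.
  return POLICY_NUMBER_PATTERN.test(input.trim());
}
```

In the actual tool you would express the same constraint inside the zod schema, for example with z.string().regex(POLICY_NUMBER_PATTERN), so invalid tool calls are rejected before your handler runs at all.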
Next Steps
- Add a second tool, such as create_claim_tool, and let the agent choose between them.
- Replace the mock database with a real internal API call and add retry handling.
- Learn how to persist graph state so conversations survive across requests.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.