LangGraph Tutorial (TypeScript): Building Custom Tools for Advanced Developers
This tutorial shows how to build a LangGraph workflow in TypeScript that calls custom tools, routes between them with real control flow, and returns structured output. You need this when the built-in agent loop is too loose and you want deterministic tool execution, typed state, and predictable behavior in production.
What You'll Need
- Node.js 18+
- TypeScript 5+
- `@langchain/langgraph`
- `@langchain/openai`
- `@langchain/core`
- An OpenAI API key set as `OPENAI_API_KEY`
- A basic TypeScript project with `"type": "module"` or an ESM-compatible config
Install the packages:
```bash
npm install @langchain/langgraph @langchain/openai @langchain/core zod
```
Step-by-Step
- Start by defining a typed graph state and two custom tools. One tool does a simple lookup, the other formats a response you can return to the caller. In production, keep tools small and deterministic.
```ts
import { z } from "zod";
import { tool } from "@langchain/core/tools";

export const lookupCustomer = tool(
  async ({ customerId }: { customerId: string }) => {
    // Record<string, ...> lets us index the mock store with an arbitrary ID.
    const db: Record<string, { name: string; tier: string }> = {
      c_1001: { name: "Amina", tier: "gold" },
      c_1002: { name: "Jon", tier: "silver" },
    };
    return db[customerId] ?? { error: "Customer not found" };
  },
  {
    name: "lookup_customer",
    description: "Fetch a customer record by ID",
    schema: z.object({
      customerId: z.string(),
    }),
  }
);

export const formatSummary = tool(
  async ({ name, tier }: { name: string; tier: string }) => {
    return `Customer ${name} is on the ${tier} tier.`;
  },
  {
    name: "format_summary",
    description: "Format a customer summary string",
    schema: z.object({
      name: z.string(),
      tier: z.string(),
    }),
  }
);
```
- Create the graph state and the model node. The state holds messages only, which is enough for a clean tool-calling loop without extra baggage.
```ts
import { Annotation, START, END, StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { BaseMessage, HumanMessage } from "@langchain/core/messages";

const GraphState = Annotation.Root({
  // Append-only message channel: each node's returned messages are
  // concatenated onto the existing history by the reducer.
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
```
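The reducer above is plain array concatenation. To see why that matters, here is a framework-free sketch of how a reducer merges node outputs into channel state; the `applyUpdate` helper is hypothetical, for illustration only, and stands in for what LangGraph does internally:

```typescript
// Hypothetical stand-in for LangGraph's state merge: each node returns a
// partial update, and the channel's reducer folds it into existing state.
type State = { messages: string[] };

const reducer = (left: string[], right: string[]) => left.concat(right);

function applyUpdate(state: State, update: Partial<State>): State {
  return {
    messages: update.messages
      ? reducer(state.messages, update.messages)
      : state.messages,
  };
}

let state: State = { messages: [] };
state = applyUpdate(state, { messages: ["user: hi"] });  // first node's output
state = applyUpdate(state, { messages: ["ai: hello"] }); // second node's output
console.log(state.messages); // both updates accumulated, none overwritten
```

Without a reducer, the second node's return value would replace the message list instead of extending it, which is why the reducer is the right default for conversation history.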
- Bind the tools to the model and add a node that decides whether to call them. The key pattern here is that the model sees the full conversation and returns either normal text or tool calls.
```ts
const tools = [lookupCustomer, formatSummary];
const modelWithTools = model.bindTools(tools);

async function assistantNode(state: typeof GraphState.State) {
  // The model sees the full history and replies with text or tool calls.
  const response = await modelWithTools.invoke(state.messages);
  return { messages: [response] };
}
```
- Add a tool execution node that runs whatever tool the model requested. This keeps execution explicit instead of hiding it behind an opaque agent wrapper.
```ts
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { AIMessage } from "@langchain/core/messages";

const toolNode = new ToolNode(tools);

function routeNext(state: typeof GraphState.State) {
  // Only AI messages carry tool_calls; route to the tools node when present.
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
  if (lastMessage?.tool_calls?.length) {
    return "tools";
  }
  return END;
}
```
- Wire everything together into a graph with conditional routing. This gives you a controlled loop: model -> tools -> model -> end.
```ts
const graph = new StateGraph(GraphState)
  .addNode("assistant", assistantNode)
  .addNode("tools", toolNode)
  .addEdge(START, "assistant")
  .addConditionalEdges("assistant", routeNext, {
    tools: "tools",
    [END]: END,
  })
  .addEdge("tools", "assistant")
  .compile();
```
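Conceptually, the compiled graph runs a loop you could write by hand. Here is a framework-free sketch of that control flow; `FakeMessage`, `fakeModel`, and `runLoop` are illustrative stand-ins, not LangGraph APIs, and a real run replaces `fakeModel` with the bound chat model:

```typescript
// Illustrative only: simulates the model -> tools -> model loop the graph
// encodes, with a scripted model that calls one tool and then answers.
type FakeMessage = { role: string; content: string; tool_calls?: { name: string }[] };

let turn = 0;
function fakeModel(_messages: FakeMessage[]): FakeMessage {
  // First turn: request a tool. Second turn: answer with plain text.
  turn += 1;
  return turn === 1
    ? { role: "ai", content: "", tool_calls: [{ name: "lookup_customer" }] }
    : { role: "ai", content: "Amina is on the gold tier." };
}

function runLoop(input: FakeMessage): FakeMessage[] {
  const messages: FakeMessage[] = [input];
  for (;;) {
    const response = fakeModel(messages);    // the "assistant" node
    messages.push(response);
    if (!response.tool_calls?.length) break; // routeNext returning END
    // The "tools" node: execute the requested tool, append its result.
    messages.push({ role: "tool", content: '{"name":"Amina","tier":"gold"}' });
  }
  return messages;
}

const trace = runLoop({ role: "user", content: "Look up c_1001" });
console.log(trace.length); // user, tool-call, tool result, final answer
```

The value of the graph over this hand-rolled loop is that the routing is declared, inspectable, and extensible without rewriting the control flow.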
- Run the graph with a user request that forces tool usage. After execution, inspect the final messages so you can see both the tool call and the final answer.
```ts
const result = await graph.invoke({
  messages: [
    new HumanMessage("Look up customer c_1001 and summarize their tier."),
  ],
});

const finalMessage = result.messages[result.messages.length - 1];
console.log(finalMessage.content);
```
Testing It
Run the file with `tsx`, or compile it with `tsc` and execute it with Node. You should see the assistant first call `lookup_customer`, then pass that result into `format_summary`, then return a final natural-language response.
If it stops after one assistant message, your routing function is wrong or your model did not emit tool calls. If you get an auth error, confirm `OPENAI_API_KEY` is set in your environment before running the script.
For a stronger test, change `c_1001` to an unknown ID and confirm your tool returns an error object instead of throwing. That tells you your graph handles failure as data, which is what you want in banking or insurance workflows.
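The failure-as-data pattern can be unit-tested without calling the model at all. A standalone sketch of the lookup logic (mirroring, not importing, the `lookup_customer` tool above; `lookupRecord` is a hypothetical helper name):

```typescript
// Mirrors the tool body so the error path can be asserted without an API key.
type Customer = { name: string; tier: string };
type LookupResult = Customer | { error: string };

const db: Record<string, Customer> = {
  c_1001: { name: "Amina", tier: "gold" },
  c_1002: { name: "Jon", tier: "silver" },
};

function lookupRecord(customerId: string): LookupResult {
  // Return an error object rather than throwing: the graph treats it as data
  // and lets the model decide how to respond to the failure.
  return db[customerId] ?? { error: "Customer not found" };
}

const missing = lookupRecord("c_9999");
console.log(missing); // { error: "Customer not found" }
```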
Next Steps
- Add message trimming so long-running conversations do not blow up token usage.
- Replace mock data with real service adapters behind each tool.
- Add structured output validation for downstream systems using Zod schemas.
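For the first bullet, a minimal trimming sketch: keep the first (system-style) message plus the last N messages. The `trimHistory` helper is illustrative only; in production a token-count-based strategy is usually better than a message-count one:

```typescript
// Illustrative helper: cap history to the first message plus the last `keep`
// messages, so long conversations don't grow token usage unboundedly.
type Msg = { role: string; content: string };

function trimHistory(messages: Msg[], keep: number): Msg[] {
  if (messages.length <= keep + 1) return messages;
  return [messages[0], ...messages.slice(-keep)];
}

const history: Msg[] = [
  { role: "system", content: "You are a support assistant." },
  ...Array.from({ length: 20 }, (_, i) => ({ role: "user", content: `turn ${i}` })),
];

const trimmed = trimHistory(history, 6);
console.log(trimmed.length); // 7: system message + last 6 turns
```

In the graph, this would run inside `assistantNode` before invoking the model, so the persisted state stays complete while the model sees a bounded window.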
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.