LangGraph Tutorial (TypeScript): testing agents locally for beginners
This tutorial shows you how to build a small LangGraph agent in TypeScript and test it locally with a real harness. You need this when you want fast feedback on graph behavior, tool calls, and message flow without deploying anything or wiring up a UI.
What You'll Need
- Node.js 18+ installed
- A TypeScript project with `ts-node` or `tsx`
- Packages: `@langchain/langgraph`, `@langchain/core`, `@langchain/openai`, `dotenv`
- An OpenAI API key in `.env`
- Basic familiarity with LangGraph nodes, edges, and state
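The `.env` file only needs one entry. `OPENAI_API_KEY` is the variable name `@langchain/openai` reads by default; the value below is a placeholder:

```shell
# .env — loaded by `import "dotenv/config"` at the top of your entrypoint
OPENAI_API_KEY=sk-your-key-here
```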
Step-by-Step
- Start with a minimal project setup and install the dependencies, including `zod`, which LangChain tools use for schemas. Keep the environment simple so you can run the graph from the terminal and inspect output directly.

```shell
npm init -y
npm install @langchain/langgraph @langchain/core @langchain/openai dotenv zod
npm install -D typescript tsx @types/node
npx tsc --init
```
- Create a small agent graph with one model node and one tool node. This example uses a calculator tool so you can verify that the agent is actually calling tools, not just returning text.

```ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { StateGraph, START, END, Annotation } from "@langchain/langgraph";

const add = tool(
  async ({ a, b }: { a: number; b: number }) => `${a + b}`,
  {
    name: "add",
    description: "Add two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).bindTools([add]);
```
- Define the graph state and the routing logic. The router checks whether the last assistant message requested a tool call; if it did, execution goes to the tools node, otherwise the run ends.

```ts
const MessagesState = Annotation.Root({
  messages: Annotation<any[]>({
    default: () => [],
    // Append new messages to the running history
    reducer: (left, right) => left.concat(right),
  }),
});

const graph = new StateGraph(MessagesState)
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addNode("tools", async (state) => {
    const last = state.messages[state.messages.length - 1] as AIMessage;
    // This simple executor only handles the first tool call
    const toolCall = last.tool_calls?.[0];
    if (!toolCall) return { messages: [] };
    const result = await add.invoke(toolCall.args);
    return {
      messages: [
        new ToolMessage({
          content: result,
          tool_call_id: toolCall.id ?? "",
        }),
      ],
    };
  })
  .addEdge(START, "agent")
  .addConditionalEdges("agent", (state) => {
    const last = state.messages[state.messages.length - 1] as AIMessage;
    return last.tool_calls?.length ? "tools" : END;
  })
  .addEdge("tools", "agent");
```
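The reducer and router above are plain functions, so you can sanity-check their behavior without any LangChain dependencies or an API key. The sketch below mimics them with a minimal message shape; the `Msg` type and the `"__end__"` string are stand-ins for LangChain's message classes and the `END` constant:

```ts
// Minimal stand-in for a chat message; the real code uses LangChain message classes.
type Msg = { type: string; tool_calls?: { name: string; args: unknown }[] };

// Same shape as the state reducer: append new messages to the history.
const reducer = (left: Msg[], right: Msg[]): Msg[] => left.concat(right);

// Same shape as the conditional edge: go to "tools" if the last AI message
// requested a tool call; "__end__" stands in for LangGraph's END constant.
const route = (messages: Msg[]): string => {
  const last = messages[messages.length - 1];
  return last.tool_calls?.length ? "tools" : "__end__";
};

let state: Msg[] = [];
state = reducer(state, [{ type: "human" }]);
state = reducer(state, [{ type: "ai", tool_calls: [{ name: "add", args: { a: 12, b: 30 } }] }]);
console.log(route(state)); // "tools"
state = reducer(state, [{ type: "tool" }, { type: "ai" }]);
console.log(route(state)); // "__end__"
console.log(state.length); // 4
```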
- Compile the graph and run it locally with a fixed prompt. This is the part that makes local testing useful: you can print every message and see exactly how the agent moves through the graph.

```ts
const app = graph.compile();

async function main() {
  const result = await app.invoke({
    messages: [new HumanMessage("What is 12 + 30? Use the tool.")],
  });
  for (const msg of result.messages) {
    console.log(`\n${msg._getType().toUpperCase()}: ${msg.content}`);
  }
}

main().catch(console.error);
```
- Add a repeatable test file so you can run checks during development. This is better than manually reading console output every time because it gives you a deterministic assertion on behavior.

```ts
import assert from "node:assert/strict";
import { HumanMessage } from "@langchain/core/messages";

async function testAgent(app: any) {
  const result = await app.invoke({
    messages: [new HumanMessage("What is 2 + 3? Use the tool.")],
  });
  const final = result.messages[result.messages.length - 1];
  assert.equal(final._getType(), "ai");
  assert.match(String(final.content), /5|2 \+ 3/i);
}

export { testAgent };
```
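Because `testAgent` only depends on the `app.invoke` contract, you can also exercise the same assertion logic without an API key by passing a fake app. This is a dependency-free sketch; the `FakeMsg` class and the canned message trace are stand-ins invented for illustration:

```ts
// Stand-in for a LangChain message: just enough surface for the assertions.
class FakeMsg {
  constructor(private type: string, public content: string) {}
  _getType() { return this.type; }
}

// Fake app that returns a canned trace instead of calling the model.
const fakeApp = {
  async invoke(_input: { messages: FakeMsg[] }) {
    return {
      messages: [
        new FakeMsg("human", "What is 2 + 3? Use the tool."),
        new FakeMsg("ai", ""), // tool-call turn
        new FakeMsg("tool", "5"),
        new FakeMsg("ai", "2 + 3 is 5."),
      ],
    };
  },
};

async function main() {
  const result = await fakeApp.invoke({ messages: [] });
  const final = result.messages[result.messages.length - 1];
  if (final._getType() !== "ai") throw new Error("expected final AI message");
  if (!/5/.test(final.content)) throw new Error("expected tool result in answer");
  console.log("fake-app test passed");
}

main().catch((err) => { console.error(err); });
```

This keeps the assertion logic fast and deterministic while you iterate on the graph; swap the real compiled app back in for end-to-end runs.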
- Wire everything together in one file or split it into modules once it works. For beginners, keeping one executable entrypoint reduces friction while you validate the graph locally.

```ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { StateGraph, START, END, Annotation } from "@langchain/langgraph";

const add = tool(async ({ a, b }: { a: number; b: number }) => `${a + b}`, {
  name: "add",
  description: "Add two numbers",
  schema: z.object({ a: z.number(), b: z.number() }),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 }).bindTools([add]);

const MessagesState = Annotation.Root({
  messages: Annotation<any[]>({ default: () => [], reducer: (l, r) => l.concat(r) }),
});

const app = new StateGraph(MessagesState)
  .addNode("agent", async (state) => ({ messages: [await model.invoke(state.messages)] }))
  .addNode("tools", async (state) => {
    const last = state.messages[state.messages.length - 1] as AIMessage;
    const toolCall = last.tool_calls?.[0];
    if (!toolCall) return { messages: [] };
    const result = await add.invoke(toolCall.args);
    return { messages: [new ToolMessage({ content: result, tool_call_id: toolCall.id ?? "" })] };
  })
  .addEdge(START, "agent")
  .addConditionalEdges("agent", (state) => {
    const last = state.messages[state.messages.length - 1] as AIMessage;
    return last.tool_calls?.length ? "tools" : END;
  })
  .addEdge("tools", "agent")
  .compile();

// ...then reuse the main() runner and testAgent from the earlier steps.
```
Testing It
Run your script with `npx tsx index.ts` after setting `OPENAI_API_KEY` in `.env`. If everything is wired correctly, you should see an assistant message, then a tool message containing `42`, then a final assistant response that uses that result.
If the agent never calls the tool, check that `bindTools([add])` is present on the model and that your prompt explicitly asks to use the tool. If you get type errors around messages or tools, make sure your package versions are aligned and that `zod` is installed, because LangChain tools use it for schemas.
For local debugging, print each message type in order and confirm the loop behaves like this:
| Message | Expected behavior |
|---|---|
| Human message | User asks for arithmetic |
| AI message | Model emits a tool call |
| Tool message | Tool returns computed value |
| AI message | Model answers using tool output |
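A quick way to assert this shape locally is to collect the message types from a run and compare them against the expected pattern. The sketch below is dependency-free; in the real harness you would build `observed` with `result.messages.map((m) => m._getType())`:

```ts
// Expected trace from the table above: human -> ai (tool call) -> tool -> ai (answer).
function checkTrace(observed: string[]): boolean {
  const expected = ["human", "ai", "tool", "ai"];
  return observed.length === expected.length &&
    expected.every((type, i) => type === observed[i]);
}

console.log(checkTrace(["human", "ai", "tool", "ai"])); // true
console.log(checkTrace(["human", "ai"]));               // false: tool was never called
```

If `checkTrace` fails with a short trace, the model answered directly without calling the tool, which usually points back to the `bindTools` or prompt issues described above.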
Next Steps
- Add more tools and route based on different `tool_calls`
- Replace console assertions with Jest or Vitest tests
- Learn LangGraph checkpointing so you can resume agent runs during debugging
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.