LangGraph Tutorial (TypeScript): testing agents locally for advanced developers
This tutorial shows how to run and test a LangGraph agent locally in TypeScript without wiring it into a web app first. You need this when you want fast feedback on graph behavior, tool calls, state transitions, and retries before you ship the agent behind an API.
What You'll Need
- Node.js 20+
- A TypeScript project with ESM enabled
- `@langchain/langgraph`
- `@langchain/openai`
- `@langchain/core`
- `zod`
- An OpenAI API key in `OPENAI_API_KEY`
- Optional but useful:
  - `tsx` for running TypeScript directly
  - `vitest` or `node:test` for assertions
Install the dependencies:
npm install @langchain/langgraph @langchain/openai @langchain/core zod
npm install -D typescript tsx @types/node
Step-by-Step
- Create a small project that keeps the agent logic isolated from your app code. For local testing, treat the graph as a pure function over state and make sure your model config comes from environment variables.
// src/agent.ts
import { ChatOpenAI } from "@langchain/openai";
import { Annotation, END, START, StateGraph } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const State = Annotation.Root({
  // Append-only message history: each node returns new messages and the
  // reducer concatenates them onto the running transcript.
  messages: Annotation<any[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});
const getPolicyStatus = tool(
async ({ policyId }) => {
return `Policy ${policyId} is active`;
},
{
name: "get_policy_status",
description: "Look up the status of an insurance policy",
schema: z.object({
policyId: z.string(),
}),
}
);
const model = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
}).bindTools([getPolicyStatus]);
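As the step above suggests, keep model configuration in environment variables. `ChatOpenAI` already reads `OPENAI_API_KEY` from the environment; a small sketch for also making the model name overridable (`AGENT_MODEL` is a hypothetical variable name, not a LangChain convention):

```typescript
// ChatOpenAI picks up OPENAI_API_KEY from the environment automatically.
// AGENT_MODEL is a hypothetical variable for overriding the model in tests.
const modelName: string = process.env.AGENT_MODEL ?? "gpt-4o-mini";
console.log(modelName);
```

Passing `modelName` into the `ChatOpenAI` constructor lets CI pin an exact model while local runs use the default.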
- Add a node that calls the model and a node that executes tools. This gives you a minimal production-style loop: the model decides, tools run, then the model continues with the tool results.
async function callModel(state: typeof State.State) {
const response = await model.invoke(state.messages);
return { messages: [response] };
}
async function callTools(state: typeof State.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  const toolCalls = lastMessage.tool_calls ?? [];
  const results = await Promise.all(
    toolCalls.map(async (call: any) => {
      if (call.name === "get_policy_status") {
        const output = await getPolicyStatus.invoke(call.args);
        // Plain role/tool_call_id objects are coerced into tool messages
        // when the next model call reads the transcript.
        return {
          role: "tool",
          content: output,
          tool_call_id: call.id,
        };
      }
      throw new Error(`Unknown tool: ${call.name}`);
    })
  );
  return { messages: results };
}
- Wire the graph with conditional routing so it only hits tools when the model asks for them. This is the part you want to test locally, because most bugs happen in routing, not in prompt text.
function shouldContinue(state: typeof State.State) {
const lastMessage = state.messages[state.messages.length - 1];
return lastMessage.tool_calls?.length ? "tools" : END;
}
const graph = new StateGraph(State)
.addNode("agent", callModel)
.addNode("tools", callTools)
.addEdge(START, "agent")
.addConditionalEdges("agent", shouldContinue, {
tools: "tools",
[END]: END,
})
.addEdge("tools", "agent");
export const app = graph.compile();
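Since most bugs live in `shouldContinue`, it's worth unit-testing the routing predicate in isolation, with no model call at all. A minimal sketch using a standalone copy of the predicate over plain message-like objects (in the real project you would export `shouldContinue` from `src/agent.ts` instead of redeclaring it):

```typescript
// Standalone copy of the routing predicate so it can be tested without a model.
// END_SENTINEL stands in for LangGraph's END constant.
const END_SENTINEL = "__end__";

type MessageLike = { tool_calls?: { name: string; args?: unknown }[] };

function routeAfterModel(state: { messages: MessageLike[] }): string {
  const lastMessage = state.messages[state.messages.length - 1];
  return lastMessage.tool_calls?.length ? "tools" : END_SENTINEL;
}

// A turn that requested a tool routes to the tools node…
console.log(routeAfterModel({ messages: [{ tool_calls: [{ name: "get_policy_status" }] }] })); // "tools"
// …and a plain answer ends the run.
console.log(routeAfterModel({ messages: [{ tool_calls: [] }] })); // "__end__"
```

Tests like this run in milliseconds and survive model upgrades, because they only exercise your routing logic.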
- Create a local runner that prints every message so you can inspect behavior without debugging inside an HTTP handler. Keep inputs deterministic and small; that makes regressions obvious when you change prompts or tools.
// src/run.ts
import { HumanMessage } from "@langchain/core/messages";
import { app } from "./agent.js";
const result = await app.invoke({
messages: [
new HumanMessage("Check policy POL-123 and tell me if it's active."),
],
});
for (const message of result.messages) {
console.log({
role: message._getType?.() ?? message.role,
content: message.content,
tool_calls: message.tool_calls,
});
}
- Add an actual test so you can verify the graph works under CI. For advanced teams, this is where you lock down routing behavior and make sure tool execution still works after prompt edits.
// test/agent.test.ts
import test from "node:test";
import assert from "node:assert/strict";
import { HumanMessage } from "@langchain/core/messages";
import { app } from "../src/agent.js";
test("agent returns a tool-backed answer path", async () => {
const result = await app.invoke({
messages: [new HumanMessage("Check policy POL-123 status.")],
});
assert.ok(result.messages.length >= 2);
});
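To lock routing down harder than a length check, you can also assert that the run actually took the tool path. A sketch of a hypothetical helper (`tookToolPath` is not part of LangGraph) that handles the mixed message shapes this graph produces, LangChain message instances alongside the plain role-tagged objects from `callTools`:

```typescript
// Hypothetical assertion helper: scan the transcript for a tool turn.
// Handles both LangChain message instances (_getType) and plain
// role-tagged objects.
type TranscriptMessage = { role?: string; _getType?: () => string };

function tookToolPath(messages: TranscriptMessage[]): boolean {
  return messages.some((m) => (m._getType?.() ?? m.role) === "tool");
}

const sampleRun: TranscriptMessage[] = [
  { _getType: () => "human" },
  { _getType: () => "ai" },
  { role: "tool" },
  { _getType: () => "ai" },
];

console.log(tookToolPath(sampleRun)); // true
```

Inside the test you would assert `tookToolPath(result.messages)` instead of just counting messages.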
Testing It
Run the local script first:
npx tsx src/run.ts
You should see at least one assistant message and, if the model chooses the tool, one tool message followed by another assistant turn. If you only see one assistant response, inspect whether your prompt actually encourages tool use or whether the model answered directly.
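If the model keeps answering directly, a system turn that nudges tool use usually fixes it. A sketch using OpenAI-style message objects, which `ChatOpenAI` accepts alongside message classes (the exact wording is an assumption; tune it for your domain):

```typescript
// A system turn nudging the model toward the tool, placed before the
// user question in the messages you pass to app.invoke.
const messages = [
  {
    role: "system",
    content:
      "You are an insurance assistant. Always call get_policy_status to verify a policy before answering questions about it.",
  },
  {
    role: "user",
    content: "Check policy POL-123 and tell me if it's active.",
  },
];

console.log(messages.map((m) => m.role)); // roles: system, user
```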
Then run your tests:
node --test test/agent.test.ts
If this fails intermittently, pin your model version and keep temperature at zero during tests. That gives you stable routing behavior and makes failures about code changes instead of sampling noise.
Next Steps
- Add checkpointing with a persistent store so you can resume conversations across process restarts.
- Replace the manual tool dispatcher with more structured multi-tool routing and branch-specific tests.
- Add snapshot tests for `result.messages` so prompt drift shows up immediately in CI.
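The checkpointing step above can be sketched with LangGraph's in-memory checkpointer before you swap in a persistent store (`MemorySaver` keeps state only for the process lifetime, so it is for local testing, not real resumption):

```typescript
// src/agent.ts — compile with a checkpointer. MemorySaver is in-memory;
// swap in a persistent implementation (e.g. a SQLite- or Postgres-backed
// saver) for resumption across restarts.
import { MemorySaver } from "@langchain/langgraph";

export const app = graph.compile({ checkpointer: new MemorySaver() });

// Every invocation now needs a thread_id so runs can be resumed:
// await app.invoke({ messages }, { configurable: { thread_id: "local-1" } });
```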
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.