LangGraph Tutorial (TypeScript): testing agents locally for intermediate developers

By Cyprian Aarons. Updated 2026-04-22

This tutorial shows you how to run and test a LangGraph agent locally in TypeScript, without wiring it into a full app first. The goal is to make your agent observable, deterministic enough for debugging, and easy to iterate on before you ship it behind an API or UI.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project with ts-node or tsx
  • @langchain/langgraph package
  • @langchain/openai package
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic familiarity with:
    • LangGraph nodes and edges
    • StateGraph
    • async/await in TypeScript

Install the packages:

npm install @langchain/langgraph @langchain/core @langchain/openai
npm install -D typescript tsx @types/node

Step-by-Step

  1. Start by defining a small state object and a single model node. For local testing, keep the graph simple: one node, one transition, and a typed state so failures show up early.
import { ChatOpenAI } from "@langchain/openai";
import type { BaseMessageLike } from "@langchain/core/messages";
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

const State = Annotation.Root({
  messages: Annotation<BaseMessageLike[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

async function callModel(state: typeof State.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}
  2. Build the graph and compile it into an executable app. This is the point where LangGraph becomes testable locally because you can invoke it directly from a script or test runner.
const graph = new StateGraph(State)
  .addNode("model", callModel)
  .addEdge(START, "model")
  .addEdge("model", END);

const app = graph.compile();
  3. Create a local runner that feeds in a fixed input and prints the final state. Use a deterministic prompt while testing so you can compare outputs across runs.
async function main() {
  const result = await app.invoke({
    messages: [
      {
        role: "user",
        content: "Write one sentence explaining what LangGraph is.",
      },
    ],
  });

  console.log(JSON.stringify(result, null, 2));
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
  4. Add streaming when you need to inspect intermediate behavior. This is useful when your graph grows beyond one node and you want to see state changes as they happen.
async function streamRun() {
  const stream = await app.stream({
    messages: [
      {
        role: "user",
        content: "Give me a short definition of an agent.",
      },
    ],
  });

  for await (const chunk of stream) {
    console.log("CHUNK:", JSON.stringify(chunk, null, 2));
  }
}

streamRun().catch(console.error);
  5. Wrap the script in an npm command so you can rerun it quickly while editing nodes. Local agent testing gets much easier when you can iterate with one command instead of manually opening files each time.
{
  "name": "langgraph-local-test",
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "tsx src/index.ts"
  }
}

Testing It

Run the script with your API key set in the environment:

OPENAI_API_KEY=your_key_here npm run dev

You should see a JSON object containing the final messages array. If the graph fails, check three things first: your environment variable name, whether the model name is valid for your account, and whether your state reducer is merging messages correctly.
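The third check, the reducer, is easy to verify in isolation because it is a plain function that needs no API key. A minimal sketch, mirroring the append-style reducer from the State definition (the sample messages here are illustrative):

```typescript
// Standalone check of the append-style reducer from the State definition:
// it should concatenate new messages onto the existing list, not replace it.
const reducer = (left: unknown[], right: unknown[]) => left.concat(right);

const existing = [{ role: "user", content: "hi" }];
const incoming = [{ role: "assistant", content: "hello" }];
const merged = reducer(existing, incoming);

if (merged.length !== 2) {
  throw new Error("reducer replaced messages instead of appending");
}
console.log("reducer appends correctly");
```

If this check fails after you edit the State definition, the bug is in your merge logic, not in the graph or the model.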

For intermediate testing, change only one thing at a time: prompt text, node logic, or model settings. That makes it obvious whether a failure came from the graph structure or from the LLM response itself.

If you add more nodes later, use stream() before adding any external tools or persistence. It gives you visibility into each step without needing to attach a debugger.

Next Steps

  • Add a second node for tool calling and test conditional edges locally.
  • Write unit tests around pure node functions before testing full graph execution.
  • Add checkpointing once your local runs are stable and you need resumable conversations.
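For the unit-testing bullet, one approach is to inject the model into the node function instead of closing over a module-level instance, so a fake can stand in for ChatOpenAI. The `ModelLike` interface and `makeCallModel` factory below are illustrative names, not part of LangGraph; this is a sketch of the dependency-injection pattern, not the library's API:

```typescript
// Unit-testing a node function without hitting the API.
type Message = { role: string; content: string };

// Anything with an invoke(messages) method can act as the model.
interface ModelLike {
  invoke(messages: Message[]): Promise<Message>;
}

// Factory so the node's model dependency can be swapped for a fake in tests.
function makeCallModel(model: ModelLike) {
  return async (state: { messages: Message[] }) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  };
}

// A fake model that returns a canned reply, so the test is fast and offline.
const fakeModel: ModelLike = {
  invoke: async () => ({ role: "assistant", content: "stub reply" }),
};

const callModel = makeCallModel(fakeModel);

callModel({ messages: [{ role: "user", content: "hi" }] }).then((update) => {
  if (update.messages[0].content !== "stub reply") {
    throw new Error("node did not return the model response");
  }
  console.log("node test passed");
});
```

In production code you would pass the real ChatOpenAI instance to the factory; the node's logic stays identical either way, which is what makes it testable as a pure function.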


By Cyprian Aarons, AI Consultant at Topiax.
