LangGraph Tutorial (TypeScript): adding observability for beginners

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows you how to add observability to a LangGraph TypeScript app using LangSmith, so you can trace node execution, inspect inputs and outputs, and debug failures without guessing. If your graph has multiple nodes, conditional branches, or tool calls, observability is the difference between “it works locally” and “I can explain every run in production.”

What You'll Need

  • Node.js 18+
  • A TypeScript project with @langchain/langgraph installed
  • A LangSmith account
  • LANGSMITH_API_KEY
  • LANGSMITH_TRACING=true
  • LANGSMITH_PROJECT set to a project name
  • An LLM API key if your graph calls a model, such as OPENAI_API_KEY
  • Optional but recommended:
    • dotenv for local env loading
    • @langchain/openai for a real model node

Step-by-Step

  1. Start by installing the packages you need. For this example, I’m using a small graph with an OpenAI chat model so you can see real traces in LangSmith.
npm install @langchain/langgraph @langchain/openai @langchain/core dotenv
npm install -D typescript tsx @types/node
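The example code below uses top-level await, so the project needs to run as an ES module: set "type": "module" in package.json. Here is a minimal tsconfig.json sketch that works with tsx (adjust it to your own setup):

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}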
  2. Set up your environment variables. LangSmith tracing is enabled through env vars, so you do not need special instrumentation code for the basics.
# .env
OPENAI_API_KEY=your_openai_key
LANGSMITH_API_KEY=your_langsmith_key
LANGSMITH_TRACING=true
LANGSMITH_PROJECT=langgraph-observability-demo
  3. Create a simple graph with two nodes: one generates a draft answer, the other formats it. This is enough to show how each node appears as a separate step within a single trace in LangSmith.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { Annotation, StateGraph, START, END } from "@langchain/langgraph";

const State = Annotation.Root({
  question: Annotation<string>(),
  draft: Annotation<string>(),
  answer: Annotation<string>(),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function draftNode(state: typeof State.State) {
  const res = await model.invoke(
    `Answer briefly: ${state.question}`
  );
  return { draft: res.content.toString() };
}
  4. Add a second node and connect the graph. The point here is not the prompt itself; it is making the execution path visible so you can inspect state at each step.
async function formatNode(state: typeof State.State) {
  return {
    answer: `Final answer:\n${state.draft}`,
  };
}

const graph = new StateGraph(State)
  .addNode("draft", draftNode)
  .addNode("format", formatNode)
  .addEdge(START, "draft")
  .addEdge("draft", "format")
  .addEdge("format", END)
  .compile();
  5. Run the graph with a sample input and print the result locally. When tracing is enabled, this single call will also show up in LangSmith as a run with nested node executions.
const result = await graph.invoke({
  question: "What is LangGraph observability?",
});

console.log(result.answer);
  6. If you want better debugging later, add metadata to your runs and keep your node names stable. Stable names make it easier to search traces when graphs grow beyond a few nodes.
const tracedResult = await graph.invoke(
  { question: "Why use tracing?" },
  {
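    // configurable.thread_id is only read once you add a checkpointer;
    // without one it has no effect, but a stable ID helps if you add
    // persistence later.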
    configurable: {
      thread_id: "demo-thread-1",
    },
    metadata: {
      app: "langgraph-observability-demo",
      environment: "local",
    },
  }
);

console.log(tracedResult.answer);

Testing It

Run your file with tsx, or compile it with tsc and execute the output with node. If everything is wired correctly, you should see the final answer in your terminal and a matching trace in LangSmith under the project name from LANGSMITH_PROJECT.
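For example, with tsx (assuming your entry file is named index.ts):

npx tsx index.ts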

In LangSmith, open the run and check that each node appears separately. You should be able to inspect the input state going into draft, the LLM response coming out of it, and the transformed output from format.

If you do not see traces, verify these first:

  • LANGSMITH_TRACING=true
  • LANGSMITH_API_KEY is valid
  • Your .env file is loaded before importing or invoking the graph
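A quick way to rule these out is a tiny script (a hypothetical check.ts, not part of any API) that reports whether the variables are visible to the process without printing the key itself:

import "dotenv/config";

for (const key of ["LANGSMITH_TRACING", "LANGSMITH_API_KEY", "LANGSMITH_PROJECT"]) {
  // Report presence only; never log the API key value.
  console.log(`${key}: ${process.env[key] ? "set" : "MISSING"}`);
}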

A good sanity check is to intentionally break one node and confirm that the error shows up in both your terminal and the trace viewer.
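For example, you could temporarily swap draftNode for a version that always throws (draftNodeBroken is a hypothetical name for this sketch):

async function draftNodeBroken(state: typeof State.State): Promise<{ draft: string }> {
  // Simulate a failure so you can see how errors surface in the trace.
  throw new Error(`Simulated failure while answering: "${state.question}"`);
}

Wire it in with .addNode("draft", draftNodeBroken), rerun, and the failed run should appear in LangSmith with the error attached to the draft step.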

Next Steps

  • Add custom tags and metadata per request so you can filter traces by customer, tenant, or workflow type (see the sketch after this list).
  • Learn how to trace tool calls and branching graphs so you can debug multi-step agent behavior.
  • Move from local env vars to production secret management with AWS Secrets Manager, Vault, or your platform’s secret store.
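As a sketch of the first item: tags and metadata are both standard fields on the invoke config, and the values below are placeholders, not conventions you have to follow.

const taggedResult = await graph.invoke(
  { question: "How do I filter traces?" },
  {
    tags: ["customer:acme", "workflow:qa"],
    metadata: { tenant: "acme", requestId: "req-123" },
  }
);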

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
