LangGraph Tutorial (TypeScript): adding observability for intermediate developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to add observability to a LangGraph TypeScript app using real tracing hooks, structured state, and a simple event stream. You need this when your graph works locally but you cannot answer basic production questions like which node failed, where latency is spent, or what the model actually saw.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with ts-node or tsx
  • Packages:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • zod
    • dotenv
  • An OpenAI API key in .env
  • Optional but useful:
    • LangSmith account for tracing
    • A terminal that can show streamed logs clearly

Step-by-Step

  1. Start with a graph that has explicit state fields you want to inspect later. Observability is much easier when your state is typed and carries metadata like request IDs, node names, and intermediate outputs.
import "dotenv/config";
import { z } from "zod";
import { StateGraph, START, END } from "@langchain/langgraph";

const GraphState = z.object({
  requestId: z.string(),
  input: z.string(),
  draft: z.string().optional(),
  final: z.string().optional(),
});

type GraphStateType = z.infer<typeof GraphState>;

const graph = new StateGraph(GraphState)
  .addNode("draft", async (state: GraphStateType) => ({
    draft: `Draft for ${state.requestId}: ${state.input}`,
  }))
  .addNode("finalize", async (state: GraphStateType) => ({
    final: `${state.draft} -> approved`,
  }))
  .addEdge(START, "draft")
  .addEdge("draft", "finalize")
  .addEdge("finalize", END);
  2. Add a callback handler so every model or chain event is visible while the graph runs. For intermediate developers, this is the fastest way to see what happened without waiting for full tracing infrastructure.
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class ConsoleObsHandler extends BaseCallbackHandler {
  name = "console_obs_handler";

  async handleChainStart(chain: any, inputs: any) {
    // Serialized runnables carry their name as the last segment of the id array.
    console.log("[chain:start]", chain?.id?.at(-1) ?? "unknown", inputs);
  }
  }

  async handleChainEnd(outputs: any) {
    console.log("[chain:end]", outputs);
  }

  async handleChainError(err: Error) {
    console.error("[chain:error]", err.message);
  }
}
  3. Compile the graph with a checkpointer and run it with a config object that carries tags and metadata. Tags make filtering easy later, and the checkpointer lets you inspect state between nodes instead of only seeing the final answer.
import { MemorySaver } from "@langchain/langgraph";

const app = graph.compile({
  checkpointer: new MemorySaver(),
});

const result = await app.invoke(
  { requestId: "req-123", input: "Summarize policy changes" },
  {
    configurable: { thread_id: "req-123" },
    tags: ["observability-demo", "typescript"],
    metadata: { service: "claims-assistant", env: "dev" },
    callbacks: [new ConsoleObsHandler()],
  }
);

console.log("FINAL RESULT:", result);
  4. Stream events when you need node-level visibility during execution. This is the part that helps most in debugging because you can see each step as it happens instead of guessing where time went.
// streamEvents returns an async iterable directly, so no await is needed here
const stream = app.streamEvents(
  { requestId: "req-456", input: "Explain coverage exclusions" },
  {
    configurable: { thread_id: "req-456" },
    version: "v2",
    tags: ["streaming-observability"],
    metadata: { service: "claims-assistant", env: "dev" },
    callbacks: [new ConsoleObsHandler()],
  }
);

for await (const event of stream) {
  if (event.event === "on_chain_start" || event.event === "on_chain_end") {
    console.log(JSON.stringify(event, null, 2));
  }
}
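If you also want rough per-node timing from the same stream, correlate start and end events by run_id. A minimal sketch, assuming the v2 event schema's run_id and name fields, run on a fresh illustrative thread (req-789):
const startedAt = new Map<string, number>();

for await (const event of app.streamEvents(
  { requestId: "req-789", input: "Check claim status" },
  { configurable: { thread_id: "req-789" }, version: "v2" }
)) {
  if (event.event === "on_chain_start") {
    startedAt.set(event.run_id, Date.now());
  } else if (event.event === "on_chain_end") {
    const start = startedAt.get(event.run_id);
    // Log how long each chain-level step (including graph nodes) took.
    if (start !== undefined) {
      console.log(`[timing] ${event.name}: ${Date.now() - start}ms`);
    }
  }
}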
  5. If you use an LLM inside the graph, attach the same observability pattern to the model call itself. This gives you token usage, latency boundaries, and prompt visibility at the exact point where production issues usually happen.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const response = await llm.invoke(
  [new HumanMessage("Write one sentence about claims triage.")],
  {
    tags: ["llm-observability"],
    metadata: { service: "claims-assistant", env: "dev" },
    callbacks: [new ConsoleObsHandler()],
  }
);

console.log(response.content);
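To surface the token usage mentioned above, the same handler pattern can implement the LLM hooks. A minimal sketch; note that tokenUsage on llmOutput is provider-specific (the OpenAI integration reports it, other providers may not):
import type { LLMResult } from "@langchain/core/outputs";

class LlmObsHandler extends BaseCallbackHandler {
  name = "llm_obs_handler";

  async handleLLMStart(llm: any, prompts: string[]) {
    console.log("[llm:start]", prompts);
  }

  async handleLLMEnd(output: LLMResult) {
    // tokenUsage is populated by the OpenAI integration; other
    // providers may leave llmOutput empty.
    const usage = output.llmOutput?.tokenUsage;
    console.log("[llm:end]", usage ?? "no token usage reported");
  }
}
Pass new LlmObsHandler() in the callbacks array of the llm.invoke call above and you get prompt and usage logs scoped to the model call, separate from the graph-level events.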

Testing It

Run the script and confirm you see three things in the terminal:

  • chain start/end logs from your callback handler
  • streamed events for each node transition
  • a final output object containing both intermediate and final fields (an example shape follows this list)
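
Given the state schema from step 1, the final result should look roughly like this (the exact strings depend on your input):
{
  requestId: "req-123",
  input: "Summarize policy changes",
  draft: "Draft for req-123: Summarize policy changes",
  final: "Draft for req-123: Summarize policy changes -> approved"
}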

If you added an LLM node, verify that its callback output appears separately from the graph node output. That separation matters because graph failures and model failures are not the same problem in production.

If nothing shows up, check these first:

  • your .env file is loaded (a quick check is sketched after this list)
  • thread_id is present in configurable
  • your callback class methods are spelled correctly
  • you are using streamEvents with version: "v2"
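
For the first item, a quick sanity check near the top of your script catches a missing key before any graph code runs:
// Fails fast if dotenv did not load the key (the variable name
// assumes the setup from this tutorial).
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY missing: is .env present and loaded?");
}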

Next Steps

  • Wire this into LangSmith so traces persist across environments.
  • Add per-node timing metrics and export them to Prometheus or OpenTelemetry.
  • Store sanitized state snapshots for audit trails in regulated workflows (a redaction sketch follows below).
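
For the last item, a hypothetical sanitizeSnapshot helper sketches the idea; SENSITIVE_FIELDS is an assumption to replace with your own schema's sensitive fields:
// Mask sensitive fields before persisting a snapshot for audit.
const SENSITIVE_FIELDS = ["input"] as const;

function sanitizeSnapshot(state: GraphStateType): Record<string, unknown> {
  const copy: Record<string, unknown> = { ...state };
  for (const field of SENSITIVE_FIELDS) {
    if (field in copy) copy[field] = "[redacted]";
  }
  return copy;
}

// e.g. persist sanitizeSnapshot(snapshot.values) rather than the raw state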

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
