LangGraph Tutorial (TypeScript): adding observability for advanced developers

By Cyprian Aarons · Updated 2026-04-22

This tutorial shows how to add real observability to a LangGraph TypeScript agent using LangSmith tracing, structured metadata, and node-level instrumentation. You need this when your graph is doing more than a single LLM call and you want to debug routing, latency, failures, and state transitions without guessing.

What You'll Need

  • Node.js 18+
  • TypeScript 5+
  • A LangGraph TypeScript project
  • Packages:
    • @langchain/langgraph
    • @langchain/core
    • @langchain/openai
    • dotenv
    • zod
  • An OpenAI API key
  • A LangSmith API key
  • These environment variables (a startup check sketch follows this list):
    • OPENAI_API_KEY
    • LANGCHAIN_API_KEY
    • LANGCHAIN_TRACING_V2=true
    • LANGCHAIN_PROJECT=langgraph-observability-ts
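
Note that tracing is opt-in: if LANGCHAIN_TRACING_V2 is unset, runs are simply not sent to LangSmith, with no error. A small fail-fast check at startup avoids silent gaps in your traces. This is a plain sketch using the variable names above; the check itself is not part of LangGraph or LangSmith:
// Fail fast if tracing or model configuration is missing. This helper is
// illustrative, not part of any SDK.
const requiredEnv = [
  "OPENAI_API_KEY",
  "LANGCHAIN_API_KEY",
  "LANGCHAIN_TRACING_V2",
  "LANGCHAIN_PROJECT",
];

for (const name of requiredEnv) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}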

Step-by-Step

  1. Start with a graph that already has clear state boundaries. Observability is only useful if your state tells you what changed at each node, so keep the schema explicit.
import "dotenv/config";
import { z } from "zod";
import { StateGraph, START, END } from "@langchain/langgraph";

const GraphState = z.object({
  input: z.string(),
  route: z.string().optional(),
  answer: z.string().optional(),
});

type GraphStateType = z.infer<typeof GraphState>;
  2. Add a model call with metadata and tags so traces are searchable in LangSmith. The trick is to attach context at the runnable level, not just the graph level.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).withConfig({
  runName: "answer-generator",
  tags: ["obs-demo", "typescript", "langgraph"],
  metadata: {
    service: "claims-assistant",
    team: "platform-ai",
    environment: process.env.NODE_ENV ?? "dev",
  },
});
  3. Build nodes that return partial state updates and keep them small. Each node should represent one observable unit of work so traces map cleanly to business logic.
async function routeNode(state: GraphStateType): Promise<Partial<GraphStateType>> {
  const route = state.input.toLowerCase().includes("refund") ? "billing" : "general";
  return { route };
}

async function answerNode(state: GraphStateType): Promise<Partial<GraphStateType>> {
  const prompt = [
    { role: "system" as const, content: "You are a concise support assistant." },
    { role: "user" as const, content: `Route=${state.route}; Question=${state.input}` },
  ];

  const response = await model.invoke(prompt);
  return { answer: response.content.toString() };
}
  4. Wire the graph with node names that will show up in traces exactly as you expect. Good names matter because observability tools are only as useful as your ability to search them later.
const graph = new StateGraph(GraphState)
  .addNode("route", routeNode)
  .addNode("answer", answerNode)
  .addEdge(START, "route")
  .addEdge("route", "answer")
  .addEdge("answer", END)
  .compile()
  .withConfig({
    runName: "claims-support-graph",
    tags: ["obs-demo", "graph"],
    metadata: {
      app: "support-agent",
      version: "1.0.0",
    },
  });
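The tutorial's graph is linear, but if your routing actually branches, addConditionalEdges makes the decision its own observable step in the trace. Here is a sketch assuming two hypothetical downstream nodes, billingNode and generalNode, which are not defined in this tutorial:
// Branching variant: the router's return value picks the next node, and the
// chosen branch shows up in the trace. billingNode and generalNode are
// assumed to exist; only routeNode is defined above.
const branchingGraph = new StateGraph(GraphState)
  .addNode("route", routeNode)
  .addNode("billing", billingNode)
  .addNode("general", generalNode)
  .addEdge(START, "route")
  .addConditionalEdges("route", (state) => state.route ?? "general", {
    billing: "billing",
    general: "general",
  })
  .addEdge("billing", END)
  .addEdge("general", END)
  .compile();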
  5. Invoke the graph with a traceable input payload and inspect the returned state. For production debugging, this is where you’d also pass request IDs or tenant IDs into metadata.
async function main() {
  const result = await graph.invoke({
    input: "I need a refund for my last payment",
  });

  console.log(JSON.stringify(result, null, 2));
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
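Concretely, the second argument to graph.invoke is standard runnable config, which is where request-scoped context belongs. The requestId and tenantId keys below are illustrative examples, not a required schema:
// Inside main(): per-request metadata rides along on the whole run and is
// searchable in LangSmith. requestId and tenantId are example keys; use
// whatever identifiers your HTTP handlers actually carry.
const tracedResult = await graph.invoke(
  { input: "I need a refund for my last payment" },
  {
    metadata: { requestId: "req-123", tenantId: "tenant-42" },
    tags: ["request-scoped"],
  }
);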
  6. If you want deeper visibility on individual calls, wrap critical functions with explicit config. This gives you node-level breadcrumbs when one branch becomes expensive or flaky.
async function instrumentedAnswerNode(
  state: GraphStateType
): Promise<Partial<GraphStateType>> {
  const response = await model
    .withConfig({
      runName: "llm-answer-call",
      tags: ["llm", "critical-path"],
      metadata: { node: "answer" },
    })
    .invoke([{ role: "user" as const, content: state.input }]);

  return { answer: response.content.toString() };
}
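Swapping the instrumented node into the graph is then a one-line change; keeping the node name "answer" means existing trace searches still match:
// Same topology as before, with the instrumented node on the critical path.
// The trace now shows an "llm-answer-call" child span under the answer node.
const instrumentedGraph = new StateGraph(GraphState)
  .addNode("route", routeNode)
  .addNode("answer", instrumentedAnswerNode)
  .addEdge(START, "route")
  .addEdge("route", "answer")
  .addEdge("answer", END)
  .compile();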

Testing It

Run the script with your environment variables set and confirm it returns a final JSON object containing route and answer. Then open LangSmith and look for the project name langgraph-observability-ts; you should see the graph run, each node span, and the nested model call.

Check that tags like obs-demo and metadata like service=claims-assistant are searchable in the trace UI. If traces do not appear, verify that LANGCHAIN_TRACING_V2=true is exported in the same shell session before running Node.

If you want to test failure visibility, temporarily break the OpenAI key or force an exception inside one node. You should see the failing span highlighted in LangSmith instead of just a generic process crash.
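
A throwaway failing node is an easy way to run that experiment; failingNode here is a test helper, not part of the tutorial's graph:
// Wire this in place of answerNode temporarily: the error should appear as a
// failed span on the node in LangSmith rather than an opaque process crash.
async function failingNode(_state: GraphStateType): Promise<Partial<GraphStateType>> {
  throw new Error("Simulated failure for observability testing");
}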

Next Steps

  • Add custom callbacks for metrics export to OpenTelemetry or Datadog (see the latency sketch after this list).
  • Pass request-scoped metadata through HTTP handlers into graph.invoke(...).
  • Split your graph into subgraphs so traces mirror domain boundaries like underwriting, claims intake, or fraud review.
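
As a starting point for the first bullet, a custom callback handler can time model calls before you forward the numbers to a metrics backend. This sketch assumes only @langchain/core; LatencyHandler is a made-up name, and console.log stands in for a real exporter:
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";
import type { Serialized } from "@langchain/core/load/serializable";
import type { LLMResult } from "@langchain/core/outputs";

// Hypothetical handler: times each LLM call by run ID. Replace console.log
// with an OpenTelemetry or Datadog exporter in a real setup.
class LatencyHandler extends BaseCallbackHandler {
  name = "latency-handler";
  private startTimes = new Map<string, number>();

  async handleLLMStart(_llm: Serialized, _prompts: string[], runId: string) {
    this.startTimes.set(runId, Date.now());
  }

  async handleLLMEnd(_output: LLMResult, runId: string) {
    const startedAt = this.startTimes.get(runId);
    if (startedAt !== undefined) {
      console.log(`llm run ${runId} took ${Date.now() - startedAt}ms`);
      this.startTimes.delete(runId);
    }
  }
}

Attach it per invocation by passing { callbacks: [new LatencyHandler()] } in the config argument to graph.invoke(...), alongside metadata and tags.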


By Cyprian Aarons, AI Consultant at Topiax.
