LangChain Tutorial (TypeScript): adding observability for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add observability to a LangChain TypeScript app so you can trace prompts, model calls, tool usage, and failures end to end. You need this when your chain works in local tests but becomes hard to debug once it runs inside a real app with retries, branching logic, and multiple LLM calls.

What You'll Need

  • Node.js 18+ installed
  • A TypeScript project with ts-node or a build step already working
  • These packages:
    • langchain
    • @langchain/openai
    • @langchain/core
    • zod
    • dotenv
  • An OpenAI API key
  • A LangSmith account and API key for tracing
  • Environment variables set for:
    • OPENAI_API_KEY
    • LANGCHAIN_TRACING_V2=true
    • LANGCHAIN_API_KEY
    • LANGCHAIN_PROJECT

Step-by-Step

  1. Start with a clean TypeScript project and install the packages you need. LangSmith tracing is enabled through environment variables, so observability starts before you write any chain code.
npm init -y
npm install langchain @langchain/openai @langchain/core zod dotenv
npm install -D typescript ts-node @types/node
  2. Add your environment variables in a .env file. The important part is turning on tracing and giving your project a name so traces are grouped correctly in LangSmith. The optional check shown after this block is one way to catch a missing variable early.
OPENAI_API_KEY=your_openai_key
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your_langsmith_key
LANGCHAIN_PROJECT=langchain-ts-observability-tutorial
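This guard is optional and not part of LangSmith itself; it is just a convenience so you notice a missing or mistyped variable immediately instead of discovering later that no traces arrived. A minimal sketch, mirroring the .env above:
import "dotenv/config";

// Fail fast when a tracing-related variable is missing, so you notice
// before wondering why no runs show up in LangSmith.
const requiredVars = ["OPENAI_API_KEY", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"];

for (const name of requiredVars) {
  if (!process.env[name]) {
    throw new Error(`Missing environment variable: ${name}`);
  }
}

if (process.env.LANGCHAIN_TRACING_V2 !== "true") {
  console.warn("LANGCHAIN_TRACING_V2 is not 'true'; runs will not be traced to LangSmith.");
}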
  3. Build a small chain that includes a tool call, because that is where observability pays off fast. When something fails, you want to see the prompt, the model decision, and the tool input/output in one trace.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import { RunnableSequence } from "@langchain/core/runnables";

const weatherTool = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get the weather for a city",
  schema: z.object({ city: z.string() }),
  func: async ({ city }) => `The weather in ${city} is sunny and 24C`,
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
  4. Wrap your steps in a runnable sequence and pass tags plus metadata at invocation time. That gives you trace context without hardcoding environment-specific details into the chain itself.
const chain = RunnableSequence.from([
  async (input: { question: string }) => ({
    question: input.question,
    weather: await weatherTool.invoke({ city: "London" }),
  }),
  (input) =>
    llm.invoke([
      {
        role: "system",
        content:
          "Answer using the weather result and keep it short.",
      },
      {
        role: "user",
        content: `Question: ${input.question}\nWeather: ${input.weather}`,
      },
    ]),
]);

const result = await chain.invoke(
  { question: "Should I carry an umbrella?" },
  {
    tags: ["tutorial", "observability"],
    metadata: { userId: "demo-user-123", featureFlag: "weather-helper" },
    runName: "weather-advice-chain",
  }
);

console.log(result.content);
  5. If you want cleaner traces, add explicit names to each step and keep inputs small. This makes it easier to find where latency or bad outputs are coming from when your chain grows beyond one or two nodes.
import { RunnableLambda } from "@langchain/core/runnables";

const fetchWeather = RunnableLambda.from(async (input: { question: string }) => ({
  question: input.question,
  weather: await weatherTool.invoke({ city: "London" }),
})).withConfig({ runName: "fetch-weather" });

const answerUser = RunnableLambda.from(async (input: {
  question: string;
  weather: string;
}) =>
  llm.invoke([
    { role: "system", content: "Answer using the weather result." },
    {
      role: "user",
      content: `${input.question}\n${input.weather}`,
    },
  ])
).withConfig({ runName: "answer-user" });
  6. Put it together in an executable file and run it once. After that, open LangSmith and inspect the trace tree to confirm each step was captured with the tags and metadata you passed in.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const weatherTool = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get the weather for a city",
  schema: z.object({ city: z.string() }),
  func: async ({ city }) => `The weather in ${city} is sunny and 24C`,
});

const fetchWeather = RunnableLambda.from(async (input: { question: string }) => ({
  question: input.question,
  weather: await weatherTool.invoke({ city: "London" }),
})).withConfig({ runName: "fetch-weather" });

const answerUser = RunnableLambda.from(
  async (input: { question: string; weather: string }) =>
    llm.invoke([
      { role: "system", content: "Answer using the weather result and keep it short." },
      { role: "user", content: `Question: ${input.question}\nWeather: ${input.weather}` },
    ])
).withConfig({ runName: "answer-user" });

const chain = RunnableSequence.from([fetchWeather, answerUser]);

const result = await chain.invoke(
  { question: "Should I carry an umbrella?" },
  {
    tags: ["tutorial", "observability"],
    metadata: { userId: "demo-user-123", featureFlag: "weather-helper" },
    runName: "weather-advice-chain",
  }
);

console.log(result.content);

Testing It

Run your script with npx ts-node your-file.ts, then check that it prints an answer without throwing. Next, open LangSmith and confirm you can see a trace for the run name weather-advice-chain, plus child spans for each step.

You should also verify that your tags appear on the run and that metadata like userId is attached. If something breaks, LangSmith should show whether the failure happened in the tool call or in the LLM call instead of leaving you with one opaque stack trace.

A good test is to intentionally change the tool input or set an invalid API key once, then inspect how much easier debugging becomes with traces enabled.
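For example, one quick way to force a failure (the flakyWeatherTool name and the city check below are made up purely for this exercise) is a tool variant that throws for any city other than London; swap it in for weatherTool, change the hardcoded city in fetchWeather, and rerun the script.
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

// Deliberately failing variant of the weather tool, used only to see how a
// tool error is rendered inside the LangSmith trace tree.
const flakyWeatherTool = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get the weather for a city",
  schema: z.object({ city: z.string() }),
  func: async ({ city }) => {
    if (city !== "London") {
      throw new Error(`No weather data for ${city}`);
    }
    return `The weather in ${city} is sunny and 24C`;
  },
});
The run should still show up under your project, with the failing tool span marked as an error while the parent chain shows exactly where execution stopped.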

Next Steps

  • Add custom callbacks for logging token usage and latency into your own metrics stack; a minimal sketch follows this list.
  • Learn how to trace agents with multiple tools so you can inspect planning steps, not just final outputs.
  • Connect LangSmith datasets to regression tests so prompt changes don’t silently break behavior.
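For the first item, here is a rough starting point. The TokenUsageLogger class name and the console.log destination are placeholders for whatever metrics client you actually use, and the sketch assumes the chat model reports token usage on the LLM result, which the OpenAI integration does when available.
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";
import type { LLMResult } from "@langchain/core/outputs";

// Sketch of a custom callback handler that logs token usage per LLM call.
// Replace console.log with your own metrics client.
class TokenUsageLogger extends BaseCallbackHandler {
  name = "token-usage-logger";

  async handleLLMEnd(output: LLMResult, runId: string) {
    // tokenUsage is provider-specific metadata attached by the model integration.
    const usage = output.llmOutput?.tokenUsage;
    if (usage) {
      console.log(
        `[llm ${runId}] prompt=${usage.promptTokens} completion=${usage.completionTokens} total=${usage.totalTokens}`
      );
    }
  }
}

// Handlers ride along in the same config object as tags and metadata.
const tracedResult = await chain.invoke(
  { question: "Should I carry an umbrella?" },
  { callbacks: [new TokenUsageLogger()], runName: "weather-advice-chain" }
);
console.log(tracedResult.content);
Latency can be captured the same way: record a timestamp in handleChatModelStart (or handleLLMStart for non-chat models) and diff it against handleLLMEnd.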

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
