LangChain Tutorial (TypeScript): adding observability for beginners

By Cyprian Aarons · Updated 2026-04-21
Tags: langchain, adding-observability-for-beginners, typescript

This tutorial shows you how to add observability to a LangChain TypeScript app using LangSmith, so you can inspect prompts, outputs, latency, and failures from real runs. If you’re building agents or chains for production, this is how you stop guessing and start seeing what the model actually did.

What You'll Need

  • Node.js 18+
  • A TypeScript project with npm or pnpm
  • These packages:
    • langchain
    • @langchain/openai
    • @langchain/core
    • dotenv
  • An OpenAI API key
  • A LangSmith account and API key
  • These environment variables:
    • OPENAI_API_KEY
    • LANGCHAIN_TRACING_V2=true
    • LANGCHAIN_API_KEY
    • LANGCHAIN_PROJECT

Step-by-Step

  1. First install the dependencies and make sure your project can talk to OpenAI and LangSmith. This gives you the runtime pieces for both the chain itself and the tracing backend.
npm install langchain @langchain/openai @langchain/core dotenv
  2. Add your environment variables in a .env file. LangSmith tracing is enabled by env vars, so you do not need extra instrumentation code for basic tracing.
OPENAI_API_KEY=your_openai_key_here
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your_langsmith_key_here
LANGCHAIN_PROJECT=langchain-ts-observability-demo
  3. Create a simple chain that uses a prompt template and an OpenAI chat model. The important part is that this code runs normally while LangSmith automatically captures the inputs, outputs, and timing.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise assistant."],
  ["human", "Explain observability in one sentence for a developer."],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke({});
console.log(result);
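If you want the trace to show up under a friendlier name than the default, LangChain's invocation config accepts a runName. Here is a minimal sketch building on the chain above; the name itself is just an example label:

// Optional: give the run a readable name in LangSmith.
// "explain-observability" is an arbitrary example label.
const named = await chain.invoke({}, { runName: "explain-observability" });
console.log(named);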
  4. Run the script with TypeScript support. If your project already uses tsx, use that; otherwise compile it however your repo is set up. The key is that the run must happen with the env vars loaded so LangSmith receives traces.
npx tsx src/index.ts
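If your repo has no runner set up yet, a minimal package.json along these lines works with the snippets in this tutorial. This is a sketch; it assumes an ESM project ("type": "module"), which the top-level await in the examples requires:

{
  "type": "module",
  "scripts": {
    "dev": "tsx src/index.ts"
  },
  "devDependencies": {
    "tsx": "^4.0.0"
  }
}

With that in place, npm run dev executes the script, and the import "dotenv/config" line loads your .env file.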
  5. Add a second call with different input so you can compare runs in LangSmith. This helps you verify that tracing is working across multiple executions, not just one happy-path request.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise assistant."],
  ["human", "{question}"],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const first = await chain.invoke({
  question: "What is observability in one sentence?",
});
const second = await chain.invoke({
  question: "Why do teams trace LLM calls?",
});

console.log({ first, second });
  6. If you want cleaner traces for debugging later, attach tags and metadata at invocation time. Tags and metadata make it much easier to filter traces when your app has multiple chains or environments.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise assistant."],
  ["human", "{question}"],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const answer = await chain.invoke(
  { question: "Give me one benefit of tracing LLM apps." },
  {
    tags: ["tutorial", "observability", "beginner"],
    metadata: {
      service: "support-bot",
      environment: "local",
    },
  }
);

console.log(answer);

Testing It

Run the script once, then open LangSmith and check the project name you set in LANGCHAIN_PROJECT. You should see each invoke() call as a trace with the prompt, model response, latency, and any metadata or tags you passed.
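You can also check runs from code with the LangSmith JS SDK. A minimal sketch, assuming you additionally install the langsmith package; exact run field names may differ slightly by SDK version:

import { Client } from "langsmith";

// Assumes `npm install langsmith` and LANGCHAIN_API_KEY in the environment.
const client = new Client();

// Iterate over recent runs in the tutorial project.
for await (const run of client.listRuns({
  projectName: process.env.LANGCHAIN_PROJECT ?? "langchain-ts-observability-demo",
  limit: 5,
})) {
  console.log(run.name, run.start_time);
}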

If nothing appears, check these first:

  • LANGCHAIN_TRACING_V2 is exactly true
  • LANGCHAIN_API_KEY is valid
  • Your app process actually loaded .env
  • The machine has outbound network access
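A quick way to rule out the env-var items is a tiny script that only reports whether the process can see them (plain Node, no LangChain involved):

import "dotenv/config";

// Prints "set" or "MISSING" for each variable the tutorial relies on.
// Values are never printed, so this is safe to run anywhere.
for (const key of [
  "OPENAI_API_KEY",
  "LANGCHAIN_TRACING_V2",
  "LANGCHAIN_API_KEY",
  "LANGCHAIN_PROJECT",
]) {
  console.log(key, process.env[key] ? "set" : "MISSING");
}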

If you see traces but no child spans, that usually means your code is only wrapping a single runnable. Once you add tools, retrievers, or multi-step chains, LangSmith will show those nested operations too.
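To see nested spans without building a full agent, add one more runnable step, for example with RunnableLambda from @langchain/core. A minimal sketch; the post-processing step here is arbitrary:

import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableLambda } from "@langchain/core/runnables";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise assistant."],
  ["human", "{question}"],
]);

// An extra step in the chain shows up as its own child span in the trace.
const shout = RunnableLambda.from((text: string) => text.toUpperCase());

const chain = prompt.pipe(model).pipe(new StringOutputParser()).pipe(shout);

console.log(await chain.invoke({ question: "What is a child span?" }));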

Next Steps

  • Add tracing to an agent with tools so you can inspect tool calls step by step.
  • Learn how to attach custom callbacks for application-specific metrics (a minimal sketch follows this list).
  • Use metadata fields like tenant ID or request ID to trace production traffic by customer or session.
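For the callbacks item, here is a sketch that times a single LLM call with a plain handler object. The handler method names handleLLMStart and handleLLMEnd come from LangChain's callbacks API; the timing label is just an example:

import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

// A plain object with handler methods is accepted in the callbacks array.
const answer = await model.invoke("Name one benefit of tracing LLM calls.", {
  callbacks: [
    {
      handleLLMStart() {
        console.time("llm-call");
      },
      handleLLMEnd() {
        console.timeEnd("llm-call");
      },
    },
  ],
});

console.log(answer.content);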

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit
