LangChain Tutorial (TypeScript): adding observability for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add real observability to a LangChain TypeScript app using LangSmith tracing, custom callbacks, and structured metadata. You need this when your chain works locally but you still can’t answer basic production questions like: where time is spent, which tool calls fail, and what a bad user request looked like end-to-end.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build step
  • Packages:
    • langchain
    • @langchain/openai
    • @langchain/core
    • dotenv
  • An OpenAI API key
  • A LangSmith account
  • A LangSmith API key
  • These environment variables (a sample .env follows this list):
    • OPENAI_API_KEY
    • LANGCHAIN_TRACING_V2=true
    • LANGCHAIN_API_KEY
    • LANGCHAIN_PROJECT=your-project-name
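
A minimal .env for this setup looks like the following. The values are placeholders, so substitute your own keys, and never commit this file:

# .env — placeholder values only
OPENAI_API_KEY=your-openai-api-key
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-langsmith-api-key
LANGCHAIN_PROJECT=support-observability-demo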

Step-by-Step

  1. Start with a minimal chain that already emits useful metadata. The point is not just to run an LLM call, but to make every request traceable by tenant, workflow, and environment.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise support assistant."],
  ["user", "{question}"],
]);

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const chain = RunnableSequence.from([
  prompt,
  model,
  new StringOutputParser(),
]).withConfig({
  tags: ["support-bot", "typescript"],
  metadata: {
    service: "customer-support",
    environment: process.env.NODE_ENV ?? "development",
    tenantId: "tenant_123",
  },
});

// Note: top-level await requires an ESM setup ("type": "module", or ts-node/tsx in ESM mode).
const result = await chain.invoke({
  question: "How do I reset my password?",
});

console.log(result);
  2. Turn on LangSmith tracing before the first invocation. This gives you spans for the prompt, model call, and parser without changing your business logic.
import "dotenv/config";

// This must run before any runnable is created or invoked.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT ??= "support-observability-demo";

if (!process.env.LANGCHAIN_API_KEY) {
  throw new Error("Missing LANGCHAIN_API_KEY");
}
if (!process.env.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY");
}
  3. Add custom callbacks for timing and failure visibility. LangSmith gives you traces, but production debugging gets much easier when you also log latency and errors in your own format.
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class ObservabilityCallback extends BaseCallbackHandler {
  name = "observability-callback";
  private startedAt = new Map<string, number>();

  // handleChainStart receives (chain, inputs, runId, ...).
  async handleChainStart(_chain: unknown, _inputs: unknown, runId: string) {
    this.startedAt.set(runId, Date.now());
    console.log(JSON.stringify({ event: "chain_start", runId }));
  }

  // Unlike handleChainStart, runId is the *second* argument for the end/error hooks.
  async handleChainEnd(_outputs: unknown, runId: string) {
    const started = this.startedAt.get(runId);
    const durationMs = started ? Date.now() - started : undefined;
    console.log(JSON.stringify({ event: "chain_end", runId, durationMs }));
    this.startedAt.delete(runId);
  }

  async handleChainError(err: Error, runId: string) {
    console.log(JSON.stringify({ event: "chain_error", runId, error: err.message }));
    this.startedAt.delete(runId);
  }
}
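
If you also want token counts in the same log stream, you can extend the handler with handleLLMEnd. The tokenUsage shape below is what ChatOpenAI reports on llmOutput; other providers may populate it differently, so treat this as a sketch:

import { BaseCallbackHandler } from "@langchain/core/callbacks/base";
import type { LLMResult } from "@langchain/core/outputs";

class TokenUsageCallback extends BaseCallbackHandler {
  name = "token-usage-callback";

  // As with handleChainEnd, runId is the second argument here.
  async handleLLMEnd(output: LLMResult, runId: string) {
    // llmOutput is provider-specific; ChatOpenAI exposes tokenUsage on it.
    const usage = output.llmOutput?.tokenUsage;
    console.log(JSON.stringify({
      event: "llm_end",
      runId,
      promptTokens: usage?.promptTokens,
      completionTokens: usage?.completionTokens,
      totalTokens: usage?.totalTokens,
    }));
  }
}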
  4. Wire the callback into the runnable config and invoke the chain with request-specific context. This is where observability becomes useful in practice because every request carries enough data to correlate logs with traces.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise support assistant."],
  ["user", "{question}"],
]);

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const chain = RunnableSequence.from([prompt, model, new StringOutputParser()]).withConfig({
  tags: ["support-bot"],
});

// ObservabilityCallback is the handler defined in step 3.
const callbackHandler = new ObservabilityCallback();

const response = await chain.invoke(
  { question: "My invoice is wrong. What should I check?" },
  {
    callbacks: [callbackHandler],
    metadata: {
      requestId: randomUUID(),
      userTier: "enterprise",
      route: "/api/support/chat",
    },
    tags: ["prod"],
  }
);

console.log(response);
  5. Trace tool usage too if your chain calls external systems. In advanced apps, failures usually happen in retrieval or tools, not in the final LLM response.
import { randomUUID } from "node:crypto";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const lookupPolicyStatus = tool(
  async ({ policyNumber }) => {
    return JSON.stringify({
      policyNumber,
      status: "active",
      updatedAt: new Date().toISOString(),
    });
  },
  {
    name: "lookup_policy_status",
    description: "Fetch policy status by policy number.",
    schema: z.object({
      policyNumber: z.string().min(5),
    }),
  }
);

const policyResult = await lookupPolicyStatus.invoke(
  { policyNumber: "POL-10293" },
  {
    callbacks: [callbackHandler],
    metadata: { requestId: randomUUID(), sourceSystem: "policy-core" },
    tags: ["tool-call"],
  }
);

console.log(policyResult);

Testing It

Run the script once with valid API keys and check two places. First, confirm your terminal shows structured JSON logs for start/end events and any errors. Second, open LangSmith and verify that the run contains the prompt, model invocation, tags, metadata, and nested spans for each step.
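
For reference, a successful run logged by the step 3 handler looks roughly like this; the runId and durationMs values are illustrative, and you may see more than one start/end pair because nested runnables (like the prompt and parser) also fire chain callbacks:

{"event":"chain_start","runId":"0b6f6c2e-8f43-4b0e-9a1d-2f4f3a9c7d10"}
{"event":"chain_end","runId":"0b6f6c2e-8f43-4b0e-9a1d-2f4f3a9c7d10","durationMs":812}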

If something is missing in LangSmith, it usually means tracing was enabled too late, i.e. the environment variables were not set before the runnable was imported or created. If the callback logs appear but no trace shows up, check LANGCHAIN_API_KEY and LANGCHAIN_PROJECT, and confirm your network can reach LangSmith.
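
One way to guarantee that ordering is to keep the step 2 setup in its own module and import it for its side effects before anything that builds a chain. ESM evaluates imports in order, so the sketch below (with hypothetical file names) runs the setup first:

// main.ts (hypothetical layout)
import "./tracing-setup";                 // the step 2 snippet as its own module; runs first
import { chain } from "./support-chain";  // hypothetical module exporting the step 1 chain

console.log(await chain.invoke({ question: "How do I reset my password?" }));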

A good test is to intentionally break one input path. For example, pass an invalid tool argument or force a bad OpenAI key; you should see both local error logs and a failed trace in LangSmith with enough context to reproduce it.
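
For instance, the step 5 schema requires at least five characters, so the call below should fail validation and appear in your local error logs. Depending on where validation fails, the run may not reach LangSmith at all; forcing a bad OpenAI key on the main chain is the reliable way to produce a failed trace:

// Deliberately invalid: "POL" fails the z.string().min(5) schema from step 5.
try {
  await lookupPolicyStatus.invoke(
    { policyNumber: "POL" },
    { callbacks: [callbackHandler], tags: ["tool-call", "failure-test"] }
  );
} catch (err) {
  // Expect a tool input validation error here.
  console.error("expected failure:", (err as Error).message);
}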

Next Steps

  • Add span-level metadata for customer IDs, workflow names, and deployment version (see the sketch after this list).
  • Export traces into your incident workflow so support engineers can jump from alert to exact failing request.
  • Learn how to use LangGraph with tracing for multi-step agent workflows and retries.
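
As a sketch of the first item above, the same withConfig call from step 1 can carry those fields; the field names and the GIT_SHA variable are illustrative:

const versionedChain = chain.withConfig({
  metadata: {
    customerId: "cus_42",                        // illustrative value
    workflowName: "password-reset",              // illustrative value
    deployVersion: process.env.GIT_SHA ?? "dev", // hypothetically set by your CI
  },
});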

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
