LangChain Tutorial (TypeScript): adding cost tracking for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add per-request cost tracking to a LangChain TypeScript app using real model usage data. You need this when you want to know what each prompt costs, alert on expensive requests, or tag AI spend back to users, teams, or workflows.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or tsx
  • langchain
  • @langchain/openai
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with ChatOpenAI, RunnableSequence, and async/await

Install the packages:

npm install langchain @langchain/openai
npm install -D typescript tsx @types/node

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start with a normal LangChain chat call. The important part is that LangChain exposes token usage on the response metadata, which we can turn into cost later.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const response = await model.invoke("Write one sentence about risk management.");

console.log(response.content);
console.log(response.response_metadata);
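With @langchain/openai, the metadata printed above typically contains a tokenUsage object. As an illustration only (the exact field names vary by SDK version, so always inspect your own output), the shape is roughly:

```typescript
// Illustrative only: the rough shape of response_metadata.tokenUsage
// from @langchain/openai. Numbers here are made up.
const exampleMetadata = {
  tokenUsage: {
    promptTokens: 14,      // tokens in your prompt (input)
    completionTokens: 21,  // tokens in the model's reply (output)
    totalTokens: 35,
  },
};

const { promptTokens, completionTokens, totalTokens } = exampleMetadata.tokenUsage;
console.log(totalTokens === promptTokens + completionTokens); // true
```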
  2. Define a small pricing table for the models you use. Keep this in code at first so the logic is explicit and easy to test.
type ModelPricing = {
  inputPer1M: number;
  outputPer1M: number;
};

const PRICING: Record<string, ModelPricing> = {
  "gpt-4o-mini": {
    inputPer1M: 0.15,
    outputPer1M: 0.6,
  },
};

function calculateCostUsd(
  modelName: string,
  inputTokens: number,
  outputTokens: number
): number {
  const pricing = PRICING[modelName];
  if (!pricing) throw new Error(`No pricing configured for ${modelName}`);

  const inputCost = (inputTokens / 1_000_000) * pricing.inputPer1M;
  const outputCost = (outputTokens / 1_000_000) * pricing.outputPer1M;

  return inputCost + outputCost;
}
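As a sanity check on the arithmetic, here is the same math inlined for a hypothetical request of 1,200 input tokens and 300 output tokens at the gpt-4o-mini rates above:

```typescript
// Standalone sanity check of the pricing math above.
const inputPer1M = 0.15; // USD per 1M input tokens (gpt-4o-mini)
const outputPer1M = 0.6; // USD per 1M output tokens

const inputTokens = 1_200; // hypothetical request
const outputTokens = 300;

const costUsd =
  (inputTokens / 1_000_000) * inputPer1M +
  (outputTokens / 1_000_000) * outputPer1M;

console.log(costUsd.toFixed(6)); // ≈ 0.000360 USD
```

Both sides happen to contribute $0.00018 here, which is a useful reminder that output tokens cost four times as much per token on this model.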
  3. Wrap your model call so you can extract token counts and compute cost in one place. This keeps cost tracking out of your business logic.
import { ChatOpenAI } from "@langchain/openai";

const modelName = "gpt-4o-mini";
const model = new ChatOpenAI({
  model: modelName,
  temperature: 0,
});

async function trackedInvoke(prompt: string) {
  const result = await model.invoke(prompt);

  const usage = result.response_metadata?.tokenUsage ?? {};
  const inputTokens = usage.promptTokens ?? usage.input_tokens ?? 0;
  const outputTokens = usage.completionTokens ?? usage.output_tokens ?? 0;

  const costUsd = calculateCostUsd(modelName, inputTokens, outputTokens);

  return {
    text: result.content.toString(),
    inputTokens,
    outputTokens,
    costUsd,
  };
}
  4. Use the wrapper in your app and log structured metrics. In production, this is where you would send the numbers to Datadog, CloudWatch, Prometheus, or your internal ledger.
async function main() {
  const result = await trackedInvoke(
    "Summarize why claims triage needs human review."
  );

  console.log(JSON.stringify(result, null, 2));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
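A minimal sketch of what that metrics hand-off might look like. The record shape and metric name here are my own suggestion, not a standard; in production you would swap the console.log for your Datadog, CloudWatch, or Prometheus client:

```typescript
// Sketch of a metrics hook: one structured line per request.
type CostRecord = {
  requestId: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
};

function emitCostMetric(record: CostRecord): string {
  const line = JSON.stringify({ metric: "llm.request.cost", ...record });
  console.log(line); // replace with your observability client
  return line;
}

const line = emitCostMetric({
  requestId: "req-123", // hypothetical ID; generate one per request
  model: "gpt-4o-mini",
  inputTokens: 120,
  outputTokens: 45,
  costUsd: 0.000045,
});
```

Emitting one structured line per request makes the numbers easy to parse downstream, whichever backend you settle on.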
  5. If you use chains, attach tracking at the boundary instead of inside every step. That gives you one cost record per request even if the chain has multiple prompts.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";

const chainModel = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise insurance assistant."],
  ["human", "{question}"],
]);

const chain = RunnableSequence.from([prompt, chainModel]);

async function runTrackedChain(question: string) {
  // Invoke the chain itself; the last step is the model, so the result
  // is an AIMessage that still carries response_metadata.
  const result = await chain.invoke({ question });

  const usage = result.response_metadata?.tokenUsage ?? {};
  const inputTokens = usage.promptTokens ?? usage.input_tokens ?? 0;
  const outputTokens = usage.completionTokens ?? usage.output_tokens ?? 0;

  return {
    answer: result.content.toString(),
    costUsd: calculateCostUsd("gpt-4o-mini", inputTokens, outputTokens),
  };
}
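When a chain makes several model calls per request, one way to keep a single cost record is to sum usage in a per-request aggregator and feed it from LangChain's handleLLMEnd callback. The aggregator class below is my own sketch, and the assumption that output.llmOutput?.tokenUsage carries promptTokens/completionTokens holds for OpenAI models but may differ by SDK version:

```typescript
// A tiny per-request usage aggregator: multiple LLM calls inside one
// chain roll up into a single cost record.
class UsageAggregator {
  inputTokens = 0;
  outputTokens = 0;

  // Shaped like the tokenUsage object LangChain's handleLLMEnd callback
  // receives via output.llmOutput (assumption: OpenAI-style fields).
  record(tokenUsage: { promptTokens?: number; completionTokens?: number }) {
    this.inputTokens += tokenUsage.promptTokens ?? 0;
    this.outputTokens += tokenUsage.completionTokens ?? 0;
  }
}

// Attaching it to a chain call might look like (sketch, not run here):
// const agg = new UsageAggregator();
// await chain.invoke({ question }, {
//   callbacks: [{ handleLLMEnd: (out) => agg.record(out.llmOutput?.tokenUsage ?? {}) }],
// });

// Simulated: two model calls in one request become one record.
const agg = new UsageAggregator();
agg.record({ promptTokens: 120, completionTokens: 40 });
agg.record({ promptTokens: 300, completionTokens: 80 });
console.log(agg.inputTokens, agg.outputTokens); // 420 120
```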
  6. Add a simple guardrail for missing token data. Some providers or configurations may not return usage fields consistently, so fail loudly during development and degrade gracefully in production.
function getTokenCount(value: unknown): number {
  if (typeof value === "number") return value;
  return Number(value ?? 0);
}

function extractUsage(metadata: any) {
  const usage = metadata?.tokenUsage ?? metadata?.usage_metadata ?? {};
  return {
    inputTokens: getTokenCount(usage.promptTokens ?? usage.input_tokens),
    outputTokens: getTokenCount(usage.completionTokens ?? usage.output_tokens),
  };
}
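One way to wire the "fail loudly in development, degrade gracefully in production" behavior is a wrapper that keys off NODE_ENV. This is a sketch under that assumption (the helpers are repeated so it runs standalone, and the environment check is one convention among several):

```typescript
// Helpers repeated from above so this sketch is self-contained.
function getTokenCount(value: unknown): number {
  if (typeof value === "number") return value;
  return Number(value ?? 0);
}

function extractUsage(metadata: any) {
  const usage = metadata?.tokenUsage ?? metadata?.usage_metadata ?? {};
  return {
    inputTokens: getTokenCount(usage.promptTokens ?? usage.input_tokens),
    outputTokens: getTokenCount(usage.completionTokens ?? usage.output_tokens),
  };
}

// Throw during development so missing usage data is caught early;
// warn and record zero cost in production so requests still succeed.
function requireUsage(metadata: any) {
  const usage = extractUsage(metadata);
  if (usage.inputTokens === 0 && usage.outputTokens === 0) {
    if (process.env.NODE_ENV !== "production") {
      throw new Error("No token usage found in response metadata");
    }
    console.warn("Token usage missing; recording zero cost for this request");
  }
  return usage;
}
```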

Testing It

Run the script with a simple prompt and confirm you get three things back: text, token counts, and a dollar amount. If inputTokens and outputTokens are both zero, inspect response_metadata because provider field names can vary by SDK version.

Try two different prompts of different lengths and compare the numbers. The longer prompt should usually produce higher input-token cost, while longer answers increase output-token cost.

If you're using chains, verify that you only record one final cost per user request unless you intentionally want step-level billing. For finance or insurance workflows, request-level accounting is usually the cleaner default.

Next Steps

  • Move pricing into config or a database table so finance can update rates without redeploying code.
  • Add request IDs and tenant IDs to your logs so you can attribute spend by customer or business unit.
  • Send token and cost metrics to your observability stack instead of printing them to stdout.
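For the first bullet, moving pricing out of code can start as small as parsing a JSON document. The validation and file layout below are my own sketch (the file name pricing.json is hypothetical):

```typescript
type ModelPricing = { inputPer1M: number; outputPer1M: number };

// Parse pricing from JSON text, e.g. read from a file or a config service.
// Validates each entry so a typo in the config fails fast.
function parsePricing(json: string): Record<string, ModelPricing> {
  const parsed = JSON.parse(json) as Record<string, ModelPricing>;
  for (const [model, p] of Object.entries(parsed)) {
    if (typeof p.inputPer1M !== "number" || typeof p.outputPer1M !== "number") {
      throw new Error(`Invalid pricing entry for ${model}`);
    }
  }
  return parsed;
}

// Example: the same rates as the in-code table, now as data.
// In practice this string would come from fs.readFileSync("pricing.json", "utf8").
const pricing = parsePricing(
  '{"gpt-4o-mini": {"inputPer1M": 0.15, "outputPer1M": 0.6}}'
);
console.log(pricing["gpt-4o-mini"].inputPer1M); // 0.15
```

With pricing as data, finance can update rates in one place and your calculateCostUsd logic stays untouched.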

By Cyprian Aarons, AI Consultant at Topiax.