How to Fix 'invalid API key when scaling' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

When LangChain throws an "invalid API key" error as you scale, it usually means the SDK is not receiving the key you think it is. In TypeScript projects, this shows up most often after a deploy, during horizontal scaling, or when moving from local .env usage to a serverless or containerized runtime.

The error is almost never about LangChain itself. It’s usually an environment variable problem, an initialization-order issue, or a build/runtime mismatch.

The Most Common Cause

The #1 cause is reading process.env.OPENAI_API_KEY too early, or creating the LangChain client before your environment variables are loaded.

This is common in TypeScript when you import a module that instantiates ChatOpenAI at file load time. In local dev it may work because .env is loaded early enough. Under scale, cold starts and parallel workers expose the bug.

Broken vs fixed pattern

Broken | Fixed
Instantiates client at module scope before env is ready | Loads env first, then creates client inside runtime path
Fails in serverless/container startup | Works across cold starts and scaled instances
// broken.ts
import { ChatOpenAI } from "@langchain/openai";

// Runs at import time: if env isn't loaded yet, apiKey is captured
// as undefined and stays that way for the life of the module.
export const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

export async function run() {
  const res = await llm.invoke("Hello");
  return res;
}
// fixed.ts
import "dotenv/config"; // loads .env before anything reads process.env
import { ChatOpenAI } from "@langchain/openai";

export async function run() {
  // Read and validate the key at call time, not at import time.
  const apiKey = process.env.OPENAI_API_KEY;

  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is missing");
  }

  const llm = new ChatOpenAI({
    apiKey,
    model: "gpt-4o-mini",
  });

  const res = await llm.invoke("Hello");
  return res;
}

If you’re using Next.js, NestJS, worker queues, or Lambda-style handlers, this matters even more. Module-level singletons are fine only when you know the environment is already initialized.
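
If you do want a shared client, make it a lazy singleton so the first caller, not the module loader, decides when the key is read. A minimal sketch; getLlm is an illustrative name, not a LangChain API:

// llm.ts
import { ChatOpenAI } from "@langchain/openai";

// Cached instance, created on first use rather than at import time.
let llm: ChatOpenAI | undefined;

export function getLlm(): ChatOpenAI {
  if (!llm) {
    const apiKey = process.env.OPENAI_API_KEY;
    if (!apiKey) {
      throw new Error("OPENAI_API_KEY is missing");
    }
    llm = new ChatOpenAI({ apiKey, model: "gpt-4o-mini" });
  }
  return llm;
}

The first request pays the construction cost once; every later caller reuses the instance, and import order no longer decides whether auth works.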

Other Possible Causes

1) Wrong environment variable name

LangChain’s OpenAI integration expects OPENAI_API_KEY, not API_KEY or LANGCHAIN_API_KEY.

// wrong
new ChatOpenAI({
  apiKey: process.env.API_KEY,
});

// right
new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

If you’re using LangSmith too, don’t confuse the keys:

  • OPENAI_API_KEY → OpenAI model access
  • LANGSMITH_API_KEY → tracing/observability
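
A small startup check keeps the two from being silently confused. A minimal sketch; validateEnv is an illustrative name, and the check assumes tracing is toggled with the LANGCHAIN_TRACING_V2 flag that LangSmith uses:

// env-check.ts
// Call once at startup, after environment variables are loaded.
export function validateEnv(): void {
  // Model access: required for every ChatOpenAI call.
  if (!process.env.OPENAI_API_KEY) {
    throw new Error("Missing OPENAI_API_KEY (OpenAI model access)");
  }
  // Observability: only required when LangSmith tracing is enabled.
  if (
    process.env.LANGCHAIN_TRACING_V2 === "true" &&
    !process.env.LANGSMITH_API_KEY
  ) {
    throw new Error("Tracing is enabled but LANGSMITH_API_KEY is missing");
  }
}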

2) Build-time env injection instead of runtime env injection

In Docker or CI/CD pipelines, it’s common to bake the key into build args and expect it to exist at runtime. That breaks when newly scaled replicas start without the same build context.

# bad idea for runtime secrets
ARG OPENAI_API_KEY
ENV OPENAI_API_KEY=$OPENAI_API_KEY

Use runtime secret injection instead:

# better: read from platform secret store at runtime
ENV NODE_ENV=production

Then inject via Kubernetes Secret, ECS task env, Vercel env vars, or your platform’s secret manager.
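
As one concrete runtime-injection pattern, you can fetch the key from a secret store when the process boots. A sketch using the AWS Secrets Manager SDK's GetSecretValueCommand; the secret name openai/api-key is hypothetical, and other platforms have equivalent SDK calls:

// load-secrets.ts
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

// Fetch the key at process startup and expose it where the app expects it.
export async function loadOpenAiKey(): Promise<void> {
  const client = new SecretsManagerClient({});
  const result = await client.send(
    // "openai/api-key" is a hypothetical secret name; use your own.
    new GetSecretValueCommand({ SecretId: "openai/api-key" })
  );
  if (!result.SecretString) {
    throw new Error("Secret openai/api-key has no string value");
  }
  process.env.OPENAI_API_KEY = result.SecretString;
}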

3) Import order issues with dotenv

If dotenv/config runs after a module that instantiates ChatOpenAI (or otherwise reads process.env) at import time, your key can be undefined.

// wrong entrypoint order
import "./llm";
import "dotenv/config";

Fix it by loading env first:

// correct entrypoint order
import "dotenv/config";
import "./llm";

Or keep it explicit in your bootstrap file:

import dotenv from "dotenv";
dotenv.config();
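
One subtlety with this pattern: static imports are hoisted, so any app module imported statically in the same file runs before dotenv.config() executes. A dynamic import after the config call sidesteps that entirely. A minimal sketch, with main.ts and ./app as illustrative names:

// main.ts
import dotenv from "dotenv";

dotenv.config();

async function main() {
  // Static imports are hoisted and would run before dotenv.config(),
  // so pull the app in dynamically once env is in place.
  const { run } = await import("./app");
  await run();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});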

4) Multiple processes with inconsistent secrets

This happens with PM2 clusters, Kubernetes rolling deploys, or autoscaling groups. One replica has the updated secret; another still runs with the old one.

Symptoms include:

  • some requests succeed
  • some fail with invalid API key
  • errors appear only under load

Check your deployment config and confirm every replica gets the same secret version.
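
You can verify that without logging secrets by printing a short fingerprint of the key on each instance and comparing them across replicas. A sketch using Node's built-in crypto and os modules; the log format is illustrative:

// key-fingerprint.ts
import { createHash } from "node:crypto";
import { hostname } from "node:os";

// A short, non-reversible fingerprint lets you compare replicas
// without ever logging the secret itself.
const key = process.env.OPENAI_API_KEY ?? "";
const fingerprint = key
  ? createHash("sha256").update(key).digest("hex").slice(0, 8)
  : "missing";

console.log(`[${hostname()}] OPENAI_API_KEY fingerprint: ${fingerprint}`);

If two replicas print different fingerprints, they are running with different secret versions.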

How to Debug It

  1. Log whether the key exists, not the key itself
console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
console.log("OPENAI_API_KEY length:", process.env.OPENAI_API_KEY?.length);

If length is undefined, your app never received the variable.

  2. Confirm where the error originates

Look for these class names and messages:

  • ChatOpenAI
  • OpenAIError
  • 401 Unauthorized
  • invalid_api_key
  • Incorrect API key provided

If you see a raw OpenAI-style auth failure, this is not a LangChain prompt issue.
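
You can make that classification explicit by catching the failure and checking its HTTP status. A defensive sketch, since the exact error class and shape depend on your @langchain/openai and openai package versions:

// classify-error.ts
import { ChatOpenAI } from "@langchain/openai";

export async function ping(): Promise<string> {
  const llm = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
  });

  try {
    const res = await llm.invoke("Ping");
    return String(res.content);
  } catch (err) {
    // Auth failures surface as HTTP 401; the exact error class depends
    // on your @langchain/openai and openai package versions.
    const status = (err as { status?: number }).status;
    if (status === 401) {
      throw new Error("Auth failure: check OPENAI_API_KEY wiring, not prompts");
    }
    throw err;
  }
}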

  3. Move instantiation into a request handler

If this fixes it, your problem is initialization timing.

export async function handler() {
  const llm = new ChatOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
  return llm.invoke("Ping");
}

  4. Test inside the deployed container or function

Don’t trust local .env. Exec into the running pod/container and inspect env presence there.

printenv | grep OPENAI

If nothing shows up in production but works locally, your deployment secret wiring is broken.
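
If you can't exec into the runtime (common on locked-down serverless platforms), a temporary diagnostics endpoint gives the same signal. A sketch using Node's built-in http module; the /env-health path is illustrative, and the endpoint should be removed or access-controlled once you're done:

// env-health.ts
import { createServer } from "node:http";

// Reports only whether variables exist, never their values.
createServer((req, res) => {
  if (req.url === "/env-health") {
    res.setHeader("Content-Type", "application/json");
    res.end(
      JSON.stringify({
        OPENAI_API_KEY: Boolean(process.env.OPENAI_API_KEY),
        LANGSMITH_API_KEY: Boolean(process.env.LANGSMITH_API_KEY),
      })
    );
    return;
  }
  res.statusCode = 404;
  res.end();
}).listen(3000);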

Prevention

  • Instantiate LangChain clients after env validation, not at import time.
  • Use explicit startup checks so missing secrets fail fast:
if (!process.env.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY");
}
  • Keep runtime secrets out of build steps. Inject them per environment through your platform’s secret manager.
  • Separate keys by purpose:
    • OpenAI models: OPENAI_API_KEY
    • LangSmith tracing: LANGSMITH_API_KEY

If you want one rule to remember: in scaled TypeScript systems, don’t let module import order decide whether auth works. Validate config at startup, instantiate clients late, and verify secrets in the actual runtime environment.

