How to Fix 'invalid API key' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
Tags: invalid-api-key, langchain, typescript

When LangChain throws an "invalid API key" error, it usually means the underlying provider client never received a usable key, or received the wrong one. In TypeScript, this often shows up when you instantiate ChatOpenAI, OpenAIEmbeddings, or another model wrapper without wiring up environment variables correctly.

The annoying part: the error often looks like an OpenAI problem, but the root cause is usually your app config, not LangChain itself.

The Most Common Cause

The #1 cause is reading from the wrong environment variable name, or not loading .env before creating the LangChain client.

With LangChain JS/TS, ChatOpenAI expects an OpenAI key via OPENAI_API_KEY unless you explicitly pass apiKey. If your code uses API_KEY, OPENAI_KEY, or loads env vars too late, you’ll get errors like:

  • Error: 401 Incorrect API key provided
  • OpenAIError: The OPENAI_API_KEY environment variable is missing
  • AuthenticationError: invalid_api_key

Broken vs fixed

| Broken pattern | Fixed pattern |
| --- | --- |
| Uses wrong env var name | Uses OPENAI_API_KEY |
| Loads dotenv after client creation | Loads dotenv before client creation |
| Relies on implicit env state | Passes the key explicitly when needed |
// ❌ broken.ts
import { ChatOpenAI } from "@langchain/openai";

// No dotenv import, and the env var name is wrong
const model = new ChatOpenAI({
  apiKey: process.env.API_KEY, // wrong env var
  model: "gpt-4o-mini",
});

const res = await model.invoke("Hello");
console.log(res.content);
// ✅ fixed.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

const res = await model.invoke("Hello");
console.log(res.content);

If you prefer implicit loading, make sure your process has this set before runtime:

OPENAI_API_KEY=sk-...

Other Possible Causes

1. You’re using the wrong provider key for the class

ChatOpenAI needs an OpenAI key. If you copied a key from Anthropic, Azure OpenAI, or another provider and dropped it into OPENAI_API_KEY, LangChain will still pass it through and the provider will reject it.

// ❌ wrong provider key for ChatOpenAI
import { ChatOpenAI } from "@langchain/openai";

new ChatOpenAI({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
// ✅ correct provider-specific client
import { ChatAnthropic } from "@langchain/anthropic";

new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

2. .env is not being loaded in your runtime

This happens a lot in Next.js, Vitest, tsx, Docker, and serverless deployments. Your local shell has the variable, but the Node process running LangChain does not.

// ❌ no dotenv import; process.env may be empty at runtime
const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
// ✅ load env early, before any client is created
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

If you’re on Next.js, use:

# .env.local
OPENAI_API_KEY=sk-...
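Next.js loads .env.local automatically for server-side code, so no dotenv import is needed there. A quick way to confirm the key is actually visible at runtime is a tiny route handler; this is a sketch, and the file path app/api/health/route.ts is just an example:

```typescript
// Hypothetical file: app/api/health/route.ts (Next.js App Router)
// Next.js loads .env.local itself; no dotenv import needed here.
export async function GET() {
  // Report presence only; never return the secret itself.
  return Response.json({
    hasOpenAIKey: Boolean(process.env.OPENAI_API_KEY),
  });
}
```

Hit the route in a browser or with curl; if `hasOpenAIKey` is false, the variable isn't reaching the server process.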

3. You created the client before setting env vars

Order matters. If your app mutates process.env after importing modules that instantiate LangChain clients, those clients may capture undefined.

// ❌ config.ts
export const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
// ❌ main.ts
import "./config";
process.env.OPENAI_API_KEY = "sk-..."; // too late

Fix by setting env vars before startup, not inside app code.
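If you can't control import order, another way out is to defer client creation until first use, so process.env is read at call time rather than at module load. A minimal sketch; `Client` and `getClient` are illustrative names, and in real code the cached value would be a ChatOpenAI instance:

```typescript
// Sketch: read process.env at first call, not at module import time.
// `Client` stands in for ChatOpenAI; swap in `new ChatOpenAI({ apiKey })`.
type Client = { apiKey: string };

let cached: Client | undefined;

export function getClient(): Client {
  if (!cached) {
    const apiKey = process.env.OPENAI_API_KEY;
    if (!apiKey) {
      throw new Error("Missing OPENAI_API_KEY");
    }
    cached = { apiKey };
  }
  return cached;
}
```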

4. You’re mixing old and new packages or APIs

LangChain JS has changed package boundaries over time. Mixing old imports with newer packages can produce confusing auth failures because you think you’re configuring one client while actually calling another.

// ❌ mixed/old style in a modern codebase
import { OpenAIApi } from "openai";
import { OpenAI } from "langchain/llms/openai";
// ✅ modern package usage
import { ChatOpenAI } from "@langchain/openai";

Also verify package versions:

npm ls langchain @langchain/openai openai

How to Debug It

  1. Print the exact value being passed

    • Log whether the key exists, but never log the full secret.
    • You want to know if it’s undefined, empty, or malformed.
    console.log("has key:", Boolean(process.env.OPENAI_API_KEY));
    console.log("key prefix:", process.env.OPENAI_API_KEY?.slice(0, 3));
    
  2. Check which class is throwing

    • If it’s ChatOpenAI, inspect OpenAI auth.
    • If it’s AzureChatOpenAI, check Azure endpoint and deployment settings.
    • If it’s embeddings-related, inspect OpenAIEmbeddings.
  3. Verify runtime environment

    • Run the same command your app uses in production.
    • For Docker or serverless:
      • confirm env injection in deployment config
      • confirm secrets are available at container start
  4. Isolate LangChain from your app

    • Create a minimal script that only loads dotenv and calls one model.
    • If that works, your bug is in app startup order or framework config.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

async function main() {
  const model = new ChatOpenAI({ model: "gpt-4o-mini" });
  const result = await model.invoke("Say hello");
  console.log(result.content);
}

main().catch(console.error);

Prevention

  • Set a single standard env var name per provider and document it in .env.example.
  • Load configuration before any LangChain imports that create clients.
  • Add a startup check that fails fast when required secrets are missing.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY");
}
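If your app needs several secrets, a small helper keeps the boot-time validation in one place. `requireEnv` is an illustrative name, not a LangChain API:

```typescript
// Sketch: fail fast at boot for every required secret.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}

// At startup, before creating any LangChain clients:
// const openaiKey = requireEnv("OPENAI_API_KEY");
```

Calling it once per secret at the top of your entry point turns a confusing mid-request 401 into an obvious startup error.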

For TypeScript projects in production, treat API keys like any other critical dependency: validate them at boot, keep provider classes aligned with their keys, and don’t let module import order decide whether auth works.



By Cyprian Aarons, AI Consultant at Topiax.
