How to Fix 'invalid API key' in LangGraph (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
Tags: invalid-api-key, langgraph, typescript

What the error means

invalid API key usually means the request reached the model provider, but the key LangGraph passed down was empty, malformed, or not accepted by that provider. In TypeScript projects, this often shows up when you wire ChatOpenAI, AzureChatOpenAI, or another model client into a LangGraph graph and the environment variable is missing or loaded too late.

The exact failure often looks like one of these:

  • Error: 401 Unauthorized: Invalid API key
  • OpenAIError: Incorrect API key provided
  • AuthenticationError: invalid_api_key
  • BadRequestError: 401 Invalid API key

The Most Common Cause

The #1 cause is simple: your model client is being created before the environment variable is loaded, or the variable name is wrong.

With LangGraph in TypeScript, the graph itself is usually fine. The failure happens inside the underlying model class, such as ChatOpenAI, because it receives undefined instead of a real key.

Broken pattern vs fixed pattern

Broken                              Fixed
Loads env after client creation     Loads env before client creation
Uses wrong variable name            Uses correct provider variable
Hides missing key until runtime     Fails fast if key is absent
// broken.ts
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph } from "@langchain/langgraph";
import "dotenv/config"; // too late if other imports instantiate clients elsewhere

const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY, // may be undefined here
  model: "gpt-4o-mini",
});

const graph = new StateGraph({ /* ... */ });

// fixed.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph } from "@langchain/langgraph";

const apiKey = process.env.OPENAI_API_KEY;

if (!apiKey) {
  throw new Error("Missing OPENAI_API_KEY");
}

const llm = new ChatOpenAI({
  apiKey,
  model: "gpt-4o-mini",
});

const graph = new StateGraph({ /* ... */ });

If you are using ESM and a config loader, make sure it runs before anything instantiates a model client.

// index.ts
import { config } from "dotenv";
config();

import "./app.js";

Other Possible Causes

1) Wrong environment variable for the provider

OpenAI, Azure OpenAI, Anthropic, and Google all use different auth fields. A common mistake is setting LANGCHAIN_API_KEY and expecting it to authenticate the model call; that variable authenticates LangSmith tracing, not your model provider.

// wrong
new ChatOpenAI({
  apiKey: process.env.LANGCHAIN_API_KEY,
});
// right
new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

For Azure OpenAI, you typically need Azure-specific settings:

import { AzureChatOpenAI } from "@langchain/openai";

const llm = new AzureChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_INSTANCE_NAME,
  azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME,
  azureOpenAIApiVersion: "2024-02-15-preview",
});

2) Key has extra whitespace or quotes

This happens a lot when copying from .env, CI secrets, or shell exports.

OPENAI_API_KEY="sk-proj-abc123 "

That trailing space can break authentication.

Use trimming at load time if your secret source is messy:

const apiKey = process.env.OPENAI_API_KEY?.trim();
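If your secrets sometimes arrive wrapped in literal quotes as well, a small normalizer can handle both cases. This helper is hypothetical (not part of LangChain); it just trims whitespace and strips one pair of accidental surrounding quotes:

```typescript
// Hypothetical helper: normalize a key copied from .env files or CI secrets
// by stripping whitespace and one pair of stray surrounding quotes.
function cleanApiKey(raw: string | undefined): string | undefined {
  if (!raw) return undefined;
  const trimmed = raw.trim();
  // Drop matching surrounding quotes, then trim again.
  const unquoted = trimmed.replace(/^(["'])(.*)\1$/, "$2").trim();
  return unquoted.length > 0 ? unquoted : undefined;
}

const apiKey = cleanApiKey(process.env.OPENAI_API_KEY);
```

Returning undefined for empty or whitespace-only values means a downstream fail-fast check still triggers instead of sending a blank key to the provider.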

3) You are using the wrong package version combo

LangGraph depends on LangChain integrations behaving a certain way. If your versions are mismatched, you may pass config in a shape your installed client no longer accepts.

Check for stale packages:

{
  "dependencies": {
    "@langchain/langgraph": "^0.2.0",
    "@langchain/openai": "^0.5.0",
    "langchain": "^0.3.0"
  }
}

If one package is pinned far behind the others, update them together.

4) Serverless runtime does not expose env vars where you think it does

In Vercel, Cloudflare Workers, AWS Lambda, and Docker, the secret may exist in one environment but not in another. Locally it works; deployed it fails with invalid API key.

Example Docker mistake:

# broken if .env never gets copied or injected at runtime
ENV OPENAI_API_KEY=""

Better:

# inject at runtime via platform secrets
ENV NODE_ENV=production

And verify in deployment logs that OPENAI_API_KEY is actually present.
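One way to do that verification is a small boot-time check that reports which required variables are absent or blank, without ever logging their values. This is a minimal sketch; the variable list is an assumption for a plain OpenAI setup:

```typescript
// A minimal sketch: report which required env vars are missing in the
// current runtime. The names in REQUIRED_VARS are assumptions.
const REQUIRED_VARS = ["OPENAI_API_KEY"];

function missingVars(
  env: Record<string, string | undefined>,
  required: string[],
): string[] {
  // Treat empty or whitespace-only values as missing too.
  return required.filter((name) => !env[name]?.trim());
}

const missing = missingVars(process.env, REQUIRED_VARS);
if (missing.length > 0) {
  // Log names only; never log the secret values themselves.
  console.error("Missing env vars:", missing.join(", "));
}
```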

How to Debug It

  1. Print only presence, not the secret

    console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
    

    If this prints false, stop looking at LangGraph. The issue is config loading.

  2. Instantiate the model outside the graph and test it directly

    import { ChatOpenAI } from "@langchain/openai";
    
    const llm = new ChatOpenAI({ apiKey: process.env.OPENAI_API_KEY!, model: "gpt-4o-mini" });
    const res = await llm.invoke("ping");
    console.log(res.content);
    

    If this fails with AuthenticationError, your graph wiring is not the problem.

  3. Check which class is throwing. Look for stack traces containing:

    • ChatOpenAI
    • AzureChatOpenAI
    • BaseChatModel
    • provider SDK errors like AuthenticationError or BadRequestError

    If the exception originates in one of those classes, LangGraph is just surfacing it.

  4. Validate env loading order. Make sure this happens before any imports that create clients:

    import "dotenv/config";
    import "./graph.js";
    

    If you create singletons at module scope before dotenv runs, you will get an empty key even though .env exists.

Prevention

  • Fail fast on startup if required keys are missing.

    if (!process.env.OPENAI_API_KEY) throw new Error("Missing OPENAI_API_KEY");
    
  • Keep provider credentials explicit. Do not assume LangGraph will infer auth from unrelated variables like LANGCHAIN_API_KEY.

  • Add a startup health check that calls the model once. That catches bad secrets before users hit your graph path.
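The health-check idea can be sketched as a small wrapper around any async model call. The callback shape here is an assumption, not a LangGraph API; with the fixed.ts setup above you would pass something like () => llm.invoke("ping"):

```typescript
// A minimal sketch of a boot-time health check. `invoke` is any async call
// to the model; an invalid key rejects here, before users hit the graph.
async function healthCheck(invoke: () => Promise<unknown>): Promise<boolean> {
  try {
    await invoke(); // one cheap round trip
    return true;
  } catch (err) {
    console.error("Model health check failed at startup:", err);
    return false;
  }
}

// At boot (assuming an `llm` configured as in fixed.ts):
// if (!(await healthCheck(() => llm.invoke("ping")))) process.exit(1);
```

Exiting on failure turns a bad secret into a deploy-time error instead of a runtime error on the first user request.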

If you want fewer production surprises, treat API keys the way you treat schema validation: verify them at boot, trim them, and keep provider-specific config isolated per integration class.


By Cyprian Aarons, AI Consultant at Topiax.
