# How to Fix 'invalid API key during development' in LangGraph (TypeScript)

## What this error actually means
If you’re seeing invalid API key during development in a LangGraph TypeScript app, the runtime is usually telling you that the OpenAI client was created with an empty, undefined, or wrong key. In practice, this shows up when your graph starts executing and the model node tries to call the provider.
Most of the time, this is not a LangGraph bug. It’s an environment/config issue that only becomes visible once the graph invokes a model like ChatOpenAI, ChatAnthropic, or another provider wrapper.
## The Most Common Cause
The #1 cause is initializing your model before environment variables are loaded, or reading the wrong env var name. In TypeScript projects, this often happens when process.env.OPENAI_API_KEY is undefined at import time.
Here’s the broken pattern:
```ts
// broken.ts
import { ChatOpenAI } from "@langchain/openai";

// If nothing has loaded .env yet, this reads undefined at import time
const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});
```
And here’s the fixed pattern:
```ts
// fixed.ts
import "dotenv/config"; // loads .env before anything below reads process.env
import { ChatOpenAI } from "@langchain/openai";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is missing");
}

const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});
```
| Broken | Fixed |
|---|---|
| `apiKey: process.env.OPENAI_API_KEY` with no env loading | `import "dotenv/config"` before client creation |
| No guard when env var is missing | Explicit runtime check |
| Fails later during graph execution | Fails immediately at startup |
In LangGraph, this usually surfaces as a provider error like:

```text
Error: 401 Incorrect API key provided: undefined
```

Or from LangChain/OpenAI wrappers:

```text
AuthenticationError: Incorrect API key provided. You can find your API key at ...
```
If you see `undefined` in the message, that’s your smoking gun.
## Other Possible Causes
### 1. You’re using the wrong environment variable name
A lot of teams accidentally set LANGCHAIN_API_KEY or OPEN_AI_API_KEY and expect OpenAI to pick it up.
```bash
# wrong
OPEN_AI_API_KEY=sk-...

# right
OPENAI_API_KEY=sk-...
```
If you’re using Anthropic or Azure OpenAI, make sure you’re matching the provider-specific variable names and constructor fields.
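As a quick reference, these are the variable names the official LangChain wrappers conventionally read when no explicit key is passed. Verify them against the package versions you actually have installed; the values here are placeholders:

```bash
# OpenAI (@langchain/openai, ChatOpenAI)
OPENAI_API_KEY=sk-...

# Anthropic (@langchain/anthropic, ChatAnthropic)
ANTHROPIC_API_KEY=sk-ant-...

# Azure OpenAI (@langchain/openai, AzureChatOpenAI)
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_API_INSTANCE_NAME=...
AZURE_OPENAI_API_DEPLOYMENT_NAME=...
AZURE_OPENAI_API_VERSION=...
```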
### 2. Your .env file is not loaded in the runtime entrypoint
This happens a lot in monorepos and compiled TypeScript apps. The .env file exists, but your app starts from dist/index.js without loading it.
```ts
// wrong
import { buildGraph } from "./graph";

buildGraph();
```

```ts
// right
import "dotenv/config"; // side-effect imports run in source order, so this executes first
import { buildGraph } from "./graph";

buildGraph();
```
If you use a custom bootstrap file, load env vars there before importing modules that create model clients.
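A low-friction alternative is to preload dotenv from the command line, so the entrypoint itself never has to remember the import. This assumes the `dotenv` package is installed; the path is illustrative:

```bash
# Preload dotenv before any application module is evaluated
node -r dotenv/config dist/index.js
```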
### 3. You created the model at module scope too early
This is common when building LangGraph nodes in separate files. The module imports run before your test runner or server has injected env vars.
```ts
// wrong: graph.ts
import { ChatOpenAI } from "@langchain/openai";

export const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});
```

```ts
// right: graph.ts
import { ChatOpenAI } from "@langchain/openai";

export function createLLM() {
  if (!process.env.OPENAI_API_KEY) {
    throw new Error("OPENAI_API_KEY missing");
  }
  return new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
  });
}
```
Then call `createLLM()` inside your graph factory instead of exporting a singleton.
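The difference between the two patterns is purely about *when* the value is read. Here is a minimal, self-contained sketch of the same failure mode — a plain object stands in for `process.env`, and no LangChain code is involved:

```typescript
// A plain object standing in for process.env; nothing here touches real env vars.
const config: { apiKey?: string } = {};

// Eager read at "module scope": captures the value before it exists.
const eagerKey = config.apiKey;

// Lazy read inside a factory: resolved only when called.
const lazyKey = (): string | undefined => config.apiKey;

// Later, "dotenv" populates the config.
config.apiKey = "sk-test";

console.log(eagerKey);  // undefined — the broken module-scope pattern
console.log(lazyKey()); // "sk-test" — the factory pattern
```

The eager read is frozen at whatever the value was at import time; the factory defers the read until your entrypoint has finished loading configuration.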
### 4. You’re mixing local dev keys with production secrets
A local .env might contain one key, while your deployment platform injects another. If the deployed service uses an old revoked key, you’ll get auth failures even though local tests pass.
```yaml
# example CI/CD config issue
env:
  OPENAI_API_KEY: ${OPENAI_PROD_KEY}
```
Check for:
- revoked keys
- rotated secrets not updated in all environments
- staging pointing at production provider accounts
### 5. Your test runner isn’t loading environment variables
Vitest and Jest don’t automatically load .env unless you configure them to.
```ts
// vitest.config.ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    setupFiles: ["./test/setup-env.ts"],
  },
});
```

```ts
// test/setup-env.ts
import "dotenv/config";
```
Without this, tests that instantiate LangGraph nodes will fail with auth errors even though your app works manually.
## How to Debug It
1. **Print the resolved value before creating the client.** Check whether the key is actually set. Do not log the full secret; log only presence and length.

   ```ts
   console.log("OPENAI_API_KEY present:", !!process.env.OPENAI_API_KEY);
   console.log("OPENAI_API_KEY length:", process.env.OPENAI_API_KEY?.length ?? 0);
   ```

2. **Fail fast in one place.** Add a startup guard in your app entrypoint. This prevents hidden failures inside graph execution.

   ```ts
   if (!process.env.OPENAI_API_KEY) {
     throw new Error("OPENAI_API_KEY missing at startup");
   }
   ```

3. **Confirm where LangGraph instantiates the model.** Search for every `new ChatOpenAI(...)`, `new AzureChatOpenAI(...)`, or other provider wrapper, and make sure none of them are created before env loading.

4. **Run outside your framework.** Execute the same file directly with Node. If it works in one path and fails in another, you’ve got an initialization-order problem.
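Running the file outside your framework can be as simple as the following, assuming `tsx` is installed as a dev dependency; the path is illustrative:

```bash
# Run the TypeScript entry file directly, bypassing the test runner or server
npx tsx src/index.ts
```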
## Prevention

- Load env vars at the top-level entrypoint with `dotenv/config`, not deep inside helper files.
- Create provider clients inside factory functions so tests and servers can control initialization order.
- Add startup validation for required secrets:
  - `OPENAI_API_KEY`
  - `ANTHROPIC_API_KEY`
  - any Azure/OpenRouter/provider-specific keys
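A hypothetical startup guard can cover all required secrets in one place; the variable names and error text here are illustrative:

```typescript
// List every secret the app cannot start without.
const REQUIRED_ENV = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"] as const;

// Throw one descriptive error naming all missing vars, instead of failing
// one provider call at a time deep inside graph execution.
function assertEnv(names: readonly string[]): void {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// Call once in the entrypoint, after dotenv has loaded:
// assertEnv(REQUIRED_ENV);
```

Because the guard runs once at startup, every later `createLLM()`-style factory can assume the key exists.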
The real fix is simple: make secret loading explicit, validate it early, and stop creating LLM clients before configuration is ready. In LangGraph TypeScript projects, that removes most “invalid API key” failures before they ever reach a node invocation.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.