How to Fix 'invalid API key when scaling' in LangGraph (TypeScript)
If you see an invalid API key error when scaling a LangGraph TypeScript app, the failure usually means your graph worker or server process is not getting the same OpenAI-compatible credentials as your local dev process. It tends to show up when you move from a single Node process to a scaled setup: Docker, serverless, multiple workers, or a LangGraph deployment where env vars are missing on one instance.
In practice, this is almost always a configuration issue, not a LangGraph bug. The stack trace often bubbles up from the model provider client as an authentication error, then gets wrapped by your graph execution path.
The Most Common Cause
The #1 cause is this: you initialized the model client in one place, but the scaled runtime does not inherit the same OPENAI_API_KEY or provider-specific key.
This happens a lot when:
- you run locally with `.env`
- it works in `npm run dev`
- then it fails in Docker, Vercel, ECS, k8s, or a second worker
Broken pattern vs fixed pattern
| Broken | Fixed |
|---|---|
| Reads env at module load and assumes it exists everywhere | Passes config explicitly and verifies env at startup |
| Works in one process only | Works across scaled workers |
```ts
// broken.ts
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph } from "@langchain/langgraph";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

export const graph = new StateGraph({ /* ... */ });
// Later: the compiled graph runs in another process/container
```
```ts
// fixed.ts
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph } from "@langchain/langgraph";

function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

const model = new ChatOpenAI({
  apiKey: requireEnv("OPENAI_API_KEY"),
  model: "gpt-4o-mini",
});

export const graph = new StateGraph({ /* ... */ });
```
The important part is not just “use process.env”. The real fix is to fail early if the key is absent. If you let the app boot without credentials, the error will surface later as something like:
- `AuthenticationError: Incorrect API key provided`
- `Error: invalid_api_key`
- `LangGraphRunnableError` wrapping a provider failure during node execution
When scaling, that late failure makes debugging painful because only some instances are misconfigured.
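The same fail-early idea scales to more than one variable: validate everything the process needs in a single pass at boot, so a misconfigured replica exits immediately instead of serving traffic until its first model call. A minimal sketch; the helper name and the variable list are illustrative:

```ts
// Sketch: check every required secret once, at boot, and fail with one
// clear error instead of scattered auth failures at request time.
function validateEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): void {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// At startup, before compiling or serving any graph:
// validateEnv(["OPENAI_API_KEY", "CHECKPOINTER_URL"]);
```

Calling this in every entry point (server, worker, cron) means a replica with missing secrets crashes at deploy time, where your orchestrator can surface it, rather than mid-request.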
Other Possible Causes
1) Wrong environment variable name for the provider
If you switched providers or copied code from another project, you may be setting the wrong variable.
```ts
// wrong: pointing the OpenAI client at another provider's key
process.env.OPENAI_API_KEY = process.env.ANTHROPIC_API_KEY;

// right: each SDK reads its own variable, so set the one it expects
// @langchain/openai    -> OPENAI_API_KEY
// @langchain/anthropic -> ANTHROPIC_API_KEY
```
For Azure OpenAI or other OpenAI-compatible endpoints, check whether your SDK expects:
- `apiKey`
- `azureOpenAIApiKey`
- `OPENAI_API_KEY`
- provider-specific headers
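If you support more than one provider, it can help to resolve the key from a single mapping instead of hardcoding `OPENAI_API_KEY` everywhere. A sketch; the provider names and env var names in the mapping are assumptions you should check against your SDK's documentation:

```ts
// Illustrative mapping from provider name to the env var it expects.
const PROVIDER_KEY_VARS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  "azure-openai": "AZURE_OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
};

// Resolve the key for the configured provider, failing loudly if the
// variable is absent or the provider is unknown.
function resolveApiKey(
  provider: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const varName = PROVIDER_KEY_VARS[provider];
  if (!varName) throw new Error(`Unknown provider: ${provider}`);
  const value = env[varName];
  if (!value) {
    throw new Error(`Provider "${provider}" needs ${varName}, but it is not set`);
  }
  return value;
}
```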
2) Secret exists locally but not in production runtime
In Docker or Kubernetes, .env files are often ignored unless explicitly mounted.
```yaml
# k8s snippet
env:
  - name: OPENAI_API_KEY
    valueFrom:
      secretKeyRef:
        name: llm-secrets
        key: openai_api_key
```
If that secret is missing on one replica, scaling turns into intermittent auth failures.
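One way to reduce per-variable drift across replicas is to inject the whole secret at once. A hedged sketch using Kubernetes `envFrom`; the secret name `llm-secrets` matches the snippet above and is an assumption:

```yaml
# Inject every key in the secret as an env var on all replicas,
# so adding a new credential does not require editing each deployment.
envFrom:
  - secretRef:
      name: llm-secrets
```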
3) You are creating clients inside a node with stale config
If your graph node builds a client dynamically and closes over undefined state, scaled executions can diverge.
```ts
// risky
const makeModel = () =>
  new ChatOpenAI({ apiKey: globalThis.apiKeyFromSomewhere });

const node = async () => {
  const llm = makeModel();
  return llm.invoke("hello");
};
```
Prefer deterministic initialization:
```ts
const apiKey = requireEnv("OPENAI_API_KEY");
const model = new ChatOpenAI({ apiKey });

const node = async () => model.invoke("hello");
```
4) Mixed local and remote execution paths
A common LangGraph setup is:
- local graph compilation
- remote worker execution
- separate deployment for persistence/checkpointing
If your checkpoint saver or background worker uses different env vars than your API server, one side may authenticate while the other fails.
```ts
// example checkpoint config mismatch
const checkpointerUrl = process.env.CHECKPOINTER_URL;
// worker has OPENAI_API_KEY missing even though the API server has it
```
Make sure every runtime component has the same secret set:
- API server
- background worker
- queue consumer
- cron/job runner
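A practical way to verify this without leaking secrets is to log a short, non-reversible fingerprint of the key from each component and compare them across logs. A sketch; the helper name is illustrative:

```ts
import { createHash } from "node:crypto";

// Log a truncated SHA-256 of the key from API server, worker, and cron.
// Matching fingerprints mean every component loaded the same secret;
// "missing" pinpoints the replica that never received it.
function keyFingerprint(key: string | undefined): string {
  if (!key) return "missing";
  return createHash("sha256").update(key).digest("hex").slice(0, 8);
}

// e.g. console.log(`[worker] key=${keyFingerprint(process.env.OPENAI_API_KEY)}`);
```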
How to Debug It
- Print env presence at startup
  - Do not print the full key.
  - Print only whether it exists and its length.

  ```ts
  console.log({
    hasOpenAIApiKey: Boolean(process.env.OPENAI_API_KEY),
    openaiApiKeyLength: process.env.OPENAI_API_KEY?.length ?? 0,
  });
  ```
- Check which process actually runs the failing node
  - In scaled systems, the request handler may be fine while the worker is broken.
  - Add logs with hostname/container ID before model invocation.
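Such a log line can be sketched like this; `runtimeTag` is an illustrative helper, and the `HOSTNAME` fallback order assumes a Docker or Kubernetes environment:

```ts
import { hostname } from "node:os";

// Identify which instance is about to call the model. Docker and
// Kubernetes usually set HOSTNAME; the OS hostname is the fallback.
function runtimeTag(): string {
  return process.env.HOSTNAME ?? hostname();
}

// Before each model invocation:
// console.log(`[${runtimeTag()}] invoking model`);
```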
- Catch and inspect the original provider error
  - LangGraph often wraps errors coming from `@langchain/openai`, `openai`, or another SDK.
  - Look for root causes like `AuthenticationError`, `401`, or `invalid_api_key`.

  ```ts
  try {
    await graph.invoke(input);
  } catch (err) {
    console.error("Graph failed", err);
    throw err;
  }
  ```
- Compare local vs deployed env
  - Run the same container image locally with production-like env vars.
  - If it breaks only in deployment, it’s almost always secret injection or runtime scoping.
Prevention
- Validate all required secrets at boot using a small helper like `requireEnv()`.
- Inject keys through deployment secrets management, not checked-in `.env` files.
- Keep model client creation centralized so every graph node uses the same authenticated instance.
- Add one integration test that runs your LangGraph workflow with mocked-but-present credentials to catch missing env wiring early.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.