# How to Fix "invalid API key in production" in LangGraph (TypeScript)
If you’re seeing "invalid API key in production" in a LangGraph TypeScript app, the problem is usually not LangGraph itself: one of your graph nodes is calling an LLM or tool provider with a missing, wrong, or non-production key at runtime.
This shows up most often after a local app works fine, then fails once deployed to Vercel, Docker, ECS, Cloud Run, or a serverless worker.
## The Most Common Cause
The #1 cause is hardcoding or reading the API key at module load time instead of inside the runtime environment where the graph actually executes.
In LangGraph, your graph code may be bundled once, but executed later in a different process. If you do this wrong, the key can be undefined, stale, or replaced by a dev key that never made it into production.
### Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Reads env too early | Reads env at runtime |
| Hardcodes .env assumptions | Uses deployment env vars |
| Works locally only | Works in prod and local |
```typescript
// BROKEN
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph } from "@langchain/langgraph";

// This runs when the module is imported, not when the graph executes.
// In production, the key may not be set yet (or at all) at import time.
const apiKey = process.env.OPENAI_API_KEY;

const model = new ChatOpenAI({
  apiKey,
  model: "gpt-4o-mini",
});

export const graph = new StateGraph({
  channels: {},
});

// Later, inside a node...
async function callModel() {
  return await model.invoke("Hello");
}
```
```typescript
// FIXED
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph } from "@langchain/langgraph";

// Read the key at call time, inside the runtime environment,
// and fail loudly if it is missing.
function getModel() {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is missing at runtime");
  }
  return new ChatOpenAI({
    apiKey,
    model: "gpt-4o-mini",
  });
}

export const graph = new StateGraph({
  channels: {},
});

async function callModel() {
  const model = getModel();
  return await model.invoke("Hello");
}
```
The important detail: don’t assume the environment present during build is the same as the environment present during execution. In serverless and containerized deployments, that assumption breaks constantly.
## Other Possible Causes
### 1) Wrong environment variable name
LangChain wrappers expect specific variable names. If you set OPENAI_KEY instead of OPENAI_API_KEY, your code may compile and deploy fine, then fail at runtime with an auth error.
```typescript
// WRONG
process.env.OPENAI_KEY;

// RIGHT
process.env.OPENAI_API_KEY;
```
If you use Anthropic or other providers through LangChain, check their exact env var names too:
```typescript
// Example for Anthropic
process.env.ANTHROPIC_API_KEY;
```
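To make this check systematic, you can map each provider to the env var its wrapper reads and report everything that is missing in one pass. This is a hypothetical helper, not part of LangChain; `PROVIDER_ENV_VARS` and `missingProviderKeys` are names introduced here for illustration.

```typescript
// Hypothetical helper: map each provider to the env var its LangChain
// wrapper reads by default, then report which keys are absent.
const PROVIDER_ENV_VARS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
};

// Returns the providers whose expected key is missing from `env`.
function missingProviderKeys(
  providers: string[],
  env: Record<string, string | undefined> = process.env
): string[] {
  return providers.filter((p) => {
    const varName = PROVIDER_ENV_VARS[p];
    return !varName || !env[varName];
  });
}
```

Run it once at startup and log the result; an empty array means every expected key is at least present (though not necessarily valid).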
### 2) Build-time env injection instead of runtime env injection
This happens a lot in Next.js, Vite, and CI/CD pipelines. The key gets baked into the bundle for local testing, but production runtime doesn’t have it.
```jsonc
// WRONG: only available during build
{
  "scripts": {
    "build": "OPENAI_API_KEY=dev-key npm run compile"
  }
}

// RIGHT: set it in the platform's runtime config
{
  "env": {
    "OPENAI_API_KEY": "set-in-vercel-or-ecs-task-definition"
  }
}
```
For Docker, pass the key when the container starts rather than writing it into the image with `ENV` or a build arg, since both persist in the image layers:

```shell
# Inject the secret at runtime, not at build time
docker run -e OPENAI_API_KEY="$OPENAI_API_KEY" my-langgraph-app
```
### 3) Using a dev key in production
A lot of teams accidentally deploy their .env.local value to staging or prod. Some providers reject keys tied to restricted projects or deleted orgs with errors that look like invalid authentication.
```typescript
// Debug this immediately if it prints something like "sk-proj-dev..."
console.log("Using key prefix:", process.env.OPENAI_API_KEY?.slice(0, 10));
```
If you see a development prefix or an old project ID, rotate and replace it in your secret store.
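If you want a reusable version of that check, a small helper can log a safe fingerprint of any secret: enough of the prefix to distinguish a dev key from a production key, without ever printing the full value. `keyFingerprint` is a name introduced here for illustration.

```typescript
// Hypothetical helper: print a safe fingerprint of a secret instead of
// the whole value, enough to tell a dev key from a production key.
function keyFingerprint(key: string | undefined, visible = 8): string {
  if (!key) return "(not set)";
  if (key.length <= visible) return "(set, but suspiciously short)";
  return `${key.slice(0, visible)}… (${key.length} chars)`;
}

// Usage:
// console.log("OPENAI_API_KEY:", keyFingerprint(process.env.OPENAI_API_KEY));
```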
### 4) Key passed into one node but not another
LangGraph graphs often have multiple nodes. One node may be configured correctly while another creates its own client without credentials.
```typescript
// WRONG: nodeB builds its own client instead of using the shared one
const nodeA = async () => model.invoke("A");
const nodeB = async () => {
  const badModel = new ChatOpenAI({ model: "gpt-4o-mini" });
  return badModel.invoke("B");
};

// RIGHT: every node gets its client from the same factory
const nodeB = async () => {
  const model = getModel();
  return model.invoke("B");
};
```
In production graphs, consistency matters more than convenience. Create clients through one factory and reuse that path everywhere.
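One way to enforce that single path is a cached factory: every node calls it, the env check runs on first use, and all nodes share one client. This is a sketch under stated assumptions; `create` stands in for `new ChatOpenAI(...)`, and the `ChatModel` type is a minimal placeholder, not LangChain's real interface.

```typescript
// Minimal placeholder for a chat client; swap in the real type.
type ChatModel = { invoke: (input: string) => Promise<unknown> };

let cachedModel: ChatModel | undefined;

// Sketch of a single, cached factory shared by every node.
// `create` stands in for the real constructor, e.g. new ChatOpenAI(...).
function getSharedModel(create: (apiKey: string) => ChatModel): ChatModel {
  if (cachedModel) return cachedModel;
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) throw new Error("OPENAI_API_KEY is missing at runtime");
  cachedModel = create(apiKey);
  return cachedModel;
}
```

The caching is a convenience, not the point; what matters is that a missing key fails loudly in one place instead of surfacing as a confusing auth error deep inside one node.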
## How to Debug It
1. **Log the presence of the env var, not the full secret.** Check whether it exists at runtime:

   ```typescript
   console.log("OPENAI_API_KEY exists:", Boolean(process.env.OPENAI_API_KEY));
   ```

2. **Confirm where the graph runs.** Local Node.js process? Serverless function? Docker container? Worker queue? If the error only happens after deployment, your issue is almost always environment propagation.

3. **Inspect the exact stack trace.** Look for LangChain/LangGraph wrapper calls such as `ChatOpenAI`, `ChatAnthropic`, `invoke()`, or `generate()`. The failing node tells you which client was created with bad credentials.

4. **Test with a minimal standalone script.** Strip out LangGraph and call the provider directly:

   ```typescript
   import { ChatOpenAI } from "@langchain/openai";

   const model = new ChatOpenAI({
     apiKey: process.env.OPENAI_API_KEY,
     model: "gpt-4o-mini",
   });

   console.log(await model.invoke("ping"));
   ```

   If this fails too, it's not your graph logic. It's configuration.
## Prevention
- Centralize client creation in one factory function.
- Validate required secrets on startup and fail fast if they're missing.
- Store keys only in runtime secret managers:
  - Vercel Environment Variables
  - AWS Secrets Manager / ECS task env
  - GCP Secret Manager / Cloud Run env
A simple guard saves hours later:
```typescript
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`${name} is required`);
  return value;
}
```
Use that before constructing any LangGraph node client. That way “invalid API key in production” becomes a startup failure you can catch immediately instead of a broken graph after deploy.
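Building on that guard, you can validate every required secret in one call at startup, so a bad deploy reports the full list of missing vars at once instead of failing on the first one. `validateEnv` and the variable names shown are examples introduced here; list the vars your graph actually uses.

```typescript
// Startup guard: check all required secrets at once and fail fast
// with the complete list of what's missing.
function validateEnv(
  required: string[],
  env: Record<string, string | undefined> = process.env
): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// Call once at process startup, before building the graph:
// validateEnv(["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]);
```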
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.