# How to Fix "authentication failed during development" in LangGraph (TypeScript)
When you see authentication failed during development in a LangGraph TypeScript app, it usually means your graph is trying to talk to a remote service without valid credentials in the local dev path. In practice, this shows up when you run a graph, hit the LangGraph Platform API, or connect to a provider like OpenAI/Anthropic with missing or misloaded env vars.
The failure is almost always configuration, not LangGraph itself. The trick is figuring out whether the bad auth is coming from your LangGraph client, your model provider, or a deployment-time secret that never made it into your local process.
## The Most Common Cause
The #1 cause is a missing or incorrectly loaded API key in your TypeScript runtime.
In LangGraph apps, people often assume `.env` is loaded automatically. It is not. If `process.env.LANGCHAIN_API_KEY`, `process.env.OPENAI_API_KEY`, or the LangGraph Platform token is `undefined` at runtime, you’ll get auth failures that look like this:
- `Error: authentication failed during development`
- `401 Unauthorized`
- `LangGraphPlatformError: Authentication failed`
- `OpenAIError: 401 Incorrect API key provided`
Here’s the broken pattern and the fixed pattern side by side.
**Broken:**

```ts
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";

// process.env.OPENAI_API_KEY is undefined here unless something loaded .env first
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});
const graph = new StateGraph(MessagesAnnotation);
```

**Fixed:**

```ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";

if (!process.env.OPENAI_API_KEY) {
  throw new Error("Missing OPENAI_API_KEY");
}

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});
const graph = new StateGraph(MessagesAnnotation);
```
If you are using LangGraph Platform locally, the same rule applies to the platform client:
**Broken:**

```ts
import { Client } from "@langchain/langgraph-sdk";

const client = new Client({
  apiUrl: "https://api.langgraph.dev",
  apiKey: process.env.LANGGRAPH_API_KEY,
});
```

**Fixed:**

```ts
import "dotenv/config";
import { Client } from "@langchain/langgraph-sdk";

if (!process.env.LANGGRAPH_API_KEY) {
  throw new Error("Missing LANGGRAPH_API_KEY");
}

const client = new Client({
  apiUrl: "https://api.langgraph.dev",
  apiKey: process.env.LANGGRAPH_API_KEY,
});
```
The important part is failing early. Don’t let the app continue with an undefined key and then debug a vague auth error later.
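One way to make that failure explicit is a tiny assertion helper you call once at startup. This is a sketch; `requireEnv` is my own name, not part of any LangChain or LangGraph package:

```ts
// requireEnv: throw immediately if a required variable is absent,
// and return it as a plain (non-undefined) string otherwise.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}

// Usage: const openAiKey = requireEnv("OPENAI_API_KEY");
```

Because the return type is `string` rather than `string | undefined`, everything downstream can stop null-checking the key.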
## Other Possible Causes
### 1. Wrong environment file for the runtime
A lot of TypeScript projects use `.env.local`, but Node only loads what you explicitly load.
```ts
// broken
import { config } from "dotenv";
config({ path: ".env.development" }); // file does not exist
```

Fix it by loading the actual file used in your project:

```ts
// fixed
import { config } from "dotenv";
config({ path: ".env.local" });
```
If you use Next.js, remember that server and client env handling are different. `OPENAI_API_KEY` must stay server-side only.
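If your project has several possible env files, you can make the choice explicit instead of silently loading nothing. A minimal sketch, assuming dotenv is installed; `pickEnvFile` is a hypothetical helper name, and the `exists` predicate is injected so the logic stays testable:

```ts
// pickEnvFile: return the first candidate file that exists, or null.
function pickEnvFile(
  candidates: string[],
  exists: (path: string) => boolean,
): string | null {
  for (const path of candidates) {
    if (exists(path)) return path;
  }
  return null;
}

// Usage (assumes dotenv is installed):
//   import { existsSync } from "node:fs";
//   import { config } from "dotenv";
//   const file = pickEnvFile([".env.local", ".env"], existsSync);
//   if (file) config({ path: file });
//   else console.warn("No env file found; relying on process env only");
```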
### 2. Using a browser-exposed variable for a server-only secret
This one bites people using Vite, Next.js, or Remix.
```ts
// broken
const apiKey = import.meta.env.VITE_OPENAI_API_KEY;
```

That may work in dev tooling, but it’s the wrong place for secrets. Use server-only env vars instead:

```ts
// fixed
const apiKey = process.env.OPENAI_API_KEY;
```
If you need browser access to a public config value, that value should not be an API key.
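A cheap guard builds on the real naming conventions here: Vite ships `VITE_*` variables to the browser, and Next.js does the same for `NEXT_PUBLIC_*`. The sketch below refuses to treat any browser-prefixed variable as a secret; both helper names are mine, not framework APIs:

```ts
// isBrowserExposed: true for prefixes that Vite/Next.js ship to the client.
function isBrowserExposed(name: string): boolean {
  return /^(VITE_|NEXT_PUBLIC_)/.test(name);
}

// assertServerSecret: fail loudly if a secret lives under a public prefix.
function assertServerSecret(name: string): void {
  if (isBrowserExposed(name)) {
    throw new Error(`${name} is browser-exposed; secrets must be server-only`);
  }
}
```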
### 3. Mixing up LangSmith/LangChain/LangGraph credentials
These SDKs are related, but their auth expectations are not identical.
```ts
// broken
new Client({
  apiUrl: "https://api.langgraph.dev",
  apiKey: process.env.LANGCHAIN_API_KEY,
});
```

Use the correct key for the service you’re calling:

```ts
// fixed
new Client({
  apiUrl: "https://api.langgraph.dev",
  apiKey: process.env.LANGGRAPH_API_KEY,
});
```
For tracing and observability, separate those concerns:
```
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=lsv2_...
OPENAI_API_KEY=sk-...
LANGGRAPH_API_KEY=lg_...
```
### 4. Invalid key format or revoked token
Sometimes the variable exists, but the value is stale.
```
OPENAI_API_KEY=sk-old-revoked-key
```
That produces a real provider-side auth failure even though your code looks fine. Rotate the key and verify it works outside LangGraph first:
```bash
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```
If that fails with 401, LangGraph is not the problem.
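The same check works from Node (18+ ships a global `fetch`) if you'd rather stay in TypeScript. `interpretStatus` is my own helper name, not an OpenAI SDK function:

```ts
// interpretStatus: map the HTTP status from GET /v1/models to a verdict.
function interpretStatus(status: number): string {
  if (status === 200) return "key works";
  if (status === 401) return "invalid or revoked key";
  if (status === 429) return "key accepted but rate limited";
  return `unexpected status ${status}`;
}

// One direct call to the provider, with no LangGraph in the path.
async function checkOpenAiKey(): Promise<void> {
  const key = process.env.OPENAI_API_KEY;
  if (!key) throw new Error("Missing OPENAI_API_KEY");
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${key}` },
  });
  console.log(interpretStatus(res.status));
}
```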
## How to Debug It
- **Print presence, not secrets.**

  ```ts
  console.log({
    hasOpenAiKey: !!process.env.OPENAI_API_KEY,
    hasLangGraphKey: !!process.env.LANGGRAPH_API_KEY,
  });
  ```

  If one is `false`, stop there.
- **Check where the error originates.**
  - If you see `OpenAIError` or `AnthropicError`, it’s provider auth.
  - If you see `LangGraphPlatformError` or `401 Unauthorized` from `api.langgraph.dev`, it’s platform auth.
  - If you only see `authentication failed during development`, inspect your local env loading path first.
- **Verify dotenv runs before imports that use env vars.**

  ```ts
  import "dotenv/config";
  import { Client } from "@langchain/langgraph-sdk";
  ```

  If you load env after creating clients, it’s too late.
- **Run one direct API call outside LangGraph.** Test the model provider with a minimal script. If that fails, fix credentials before touching your graph code.
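The routing logic in those checks can be sketched as one heuristic function. This is a guess-based classifier, not an SDK API; it matches on the error class names and messages mentioned above:

```ts
// classifyAuthError: point at the layer an auth error likely came from,
// based on the error's constructor name and message. Heuristic only.
function classifyAuthError(
  err: unknown,
): "provider" | "platform" | "local-env" | "unknown" {
  const name = err instanceof Error ? err.constructor.name : "";
  const msg = err instanceof Error ? err.message : String(err);
  if (/OpenAI|Anthropic/i.test(name)) return "provider";
  if (/LangGraph/i.test(name) || msg.includes("api.langgraph.dev")) return "platform";
  if (/authentication failed during development/i.test(msg)) return "local-env";
  return "unknown";
}
```

A `"local-env"` result says to audit your dotenv loading before blaming any remote service.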
## Prevention
- Fail fast on startup if required env vars are missing.
- Keep separate keys for:
  - model providers like OpenAI/Anthropic
  - LangSmith tracing
  - LangGraph Platform access
- Add a startup check in every agent service:

  ```ts
  const required = ["OPENAI_API_KEY", "LANGGRAPH_API_KEY"];
  for (const name of required) {
    if (!process.env[name]) throw new Error(`Missing ${name}`);
  }
  ```
If you treat auth as explicit configuration instead of “something dotenv probably handled,” this error disappears quickly and stays gone.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.