# How to Fix "authentication failed during development" in LangGraph (Python)

## What this error means

"authentication failed during development" usually means LangGraph tried to call a backend service, but the credentials it found were missing, invalid, or not available in the current process. In practice, this shows up during local runs when you're using LangGraph Studio, the LangGraph SDK, or a graph that depends on OpenAI, Anthropic, or LangSmith credentials.

The key point: this is rarely a LangGraph bug. It's almost always an auth/config issue in your Python app, environment variables, or local dev setup.
## The Most Common Cause
The #1 cause is mixing up local environment variables with what your LangGraph process actually sees at runtime.
A common pattern is setting LANGGRAPH_API_KEY, OPENAI_API_KEY, or LANGSMITH_API_KEY in one shell, then running Python from another process that never inherited those values.
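You can demonstrate this inheritance rule without LangGraph at all. In this sketch, `DEMO_API_KEY` is a made-up variable name, and passing `env={}` to the child process simulates launching your app from a shell that never exported the key:

```python
import os
import subprocess
import sys

# A child process only sees the environment this process hands it.
os.environ["DEMO_API_KEY"] = "visible-in-parent"

child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getenv('DEMO_API_KEY'))"],
    capture_output=True,
    text=True,
    env={},  # empty environment: the child never inherits DEMO_API_KEY
)
print(child.stdout.strip())
```

The parent set the variable, but the child prints `None` because it was started with an environment that did not include it. The same thing happens when an IDE, task runner, or second terminal launches your Python process.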
### Broken vs fixed

| Broken pattern | Fixed pattern |
|---|---|
| Set env vars in one terminal, run app elsewhere | Load env vars in the same runtime |
| Assume `.env` is picked up automatically | Explicitly load `.env` before creating the client |
| Use placeholder keys like `sk-test` | Use valid keys for the target service |
```python
# broken.py
from langgraph_sdk import get_sync_client

# No API key passed, and none loaded into this process's environment
client = get_sync_client(url="http://localhost:2024")
thread = client.threads.create()  # -> authentication failure
```
```python
# fixed.py
import os

from dotenv import load_dotenv
from langgraph_sdk import get_sync_client

load_dotenv()  # make sure .env is loaded in this process

# If you're using LangGraph Cloud / Studio auth:
#   export LANGGRAPH_API_KEY=...
# If your graph calls OpenAI:
#   export OPENAI_API_KEY=...

client = get_sync_client(
    url=os.getenv("LANGGRAPH_URL", "http://localhost:2024"),
    api_key=os.getenv("LANGGRAPH_API_KEY"),
)
thread = client.threads.create()
```
If you’re calling model providers inside the graph, also make sure those provider keys are present in the same environment:
```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],  # raises KeyError early if missing
)
```
If OPENAI_API_KEY is missing, you may see downstream auth failures that look like LangGraph problems but are actually provider failures.
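One way to make that failure obvious is to check the variable before building anything. This `require_env` helper is a hypothetical convenience function, not part of LangGraph or LangChain:

```python
import os


def require_env(name: str) -> str:
    """Fail fast with a clear message instead of a confusing downstream 401."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set in this process's environment")
    return value


# Hypothetical usage before constructing the model:
# llm = ChatOpenAI(model="gpt-4o-mini", api_key=require_env("OPENAI_API_KEY"))
```

A `RuntimeError` naming the missing variable at startup is much easier to diagnose than an authentication error raised mid-run inside a graph node.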
## Other Possible Causes

### 1) Wrong environment for the wrong endpoint
Local development and hosted LangGraph deployments do not use the same auth flow.
```bash
# wrong: using a cloud key against a local server
export LANGGRAPH_API_KEY=lg_dev_123
python app.py

# right: use the local server without cloud auth if your setup expects it
export LANGGRAPH_URL=http://localhost:2024
unset LANGGRAPH_API_KEY
python app.py
```
If you’re connecting to LangGraph Cloud, keep the API key. If you’re running a local server, confirm whether your setup expects anonymous local access or a separate dev token.
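If one codebase targets both environments, you can make the choice explicit. This is a heuristic sketch that assumes a local server accepts anonymous access; adjust it if your local setup uses its own dev token:

```python
import os

url = os.getenv("LANGGRAPH_URL", "http://localhost:2024")

# Only pass a cloud API key for non-local targets (heuristic, adjust to taste).
is_local = url.startswith(("http://localhost", "http://127.0.0.1"))
api_key = None if is_local else os.getenv("LANGGRAPH_API_KEY")

print(f"target={url} local={is_local} key_passed={api_key is not None}")
```

Logging which mode you resolved to at startup makes "cloud key against local server" mistakes visible immediately instead of surfacing as an opaque auth error.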
### 2) .env file exists but never loads

This happens when people install python-dotenv but forget to call `load_dotenv()`.
```python
# broken.py
from langgraph_sdk import get_sync_client

# .env was never loaded, so no key is available to pass
client = get_sync_client(url="http://localhost:2024", api_key=None)
```

```python
# fixed.py
import os

from dotenv import load_dotenv

load_dotenv()  # load .env before reading credentials
api_key = os.getenv("LANGGRAPH_API_KEY")
```
If you rely on environment files, load them before any imports that read credentials indirectly.
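The ordering pitfall can be simulated without dotenv at all. `DEMO_TOKEN` is a stand-in name, and the manual `os.environ` assignment stands in for `load_dotenv()`:

```python
import os

os.environ.pop("DEMO_TOKEN", None)  # start clean for the demo

# Stand-in for a module that does `TOKEN = os.environ.get("...")` at
# import time -- the value is frozen at the moment the module loads:
TOKEN = os.getenv("DEMO_TOKEN")  # .env not loaded yet -> None

os.environ["DEMO_TOKEN"] = "from-dotenv"  # stand-in for load_dotenv()

print("captured at 'import':", TOKEN)               # still None
print("read after load:", os.getenv("DEMO_TOKEN"))  # from-dotenv
```

Anything that captured the variable before the load keeps the stale `None`, which is exactly what happens when a settings module is imported above the `load_dotenv()` call.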
### 3) Provider key missing inside the graph worker

Your outer app may authenticate fine, but the graph runtime itself can fail when it calls a model provider.
```python
from langgraph.graph import END, START, StateGraph


def call_model(state: dict):
    # A real model call here will fail if OPENAI_API_KEY isn't
    # available in the worker's environment
    return {"answer": "..."}


builder = StateGraph(dict)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()
```
Fix it by ensuring the worker/container has the same secrets as your app:
```yaml
# docker-compose.yml snippet
services:
  app:
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      LANGSMITH_API_KEY: ${LANGSMITH_API_KEY}
```
If you deploy via Docker, Kubernetes, or a job runner, check secret injection at that layer too.
### 4) Stale cached credentials in your shell or IDE

Your terminal may still have an old token exported from another project.
```bash
echo $LANGGRAPH_API_KEY
echo $OPENAI_API_KEY
```
If those values are stale or wrong, clear them and re-export clean values:
```bash
unset LANGGRAPH_API_KEY OPENAI_API_KEY LANGSMITH_API_KEY
export LANGGRAPH_API_KEY=your_real_key_here
export OPENAI_API_KEY=your_real_openai_key_here
```
IDE run configurations are another trap. VS Code and PyCharm often use their own environment blocks instead of your shell session.
## How to Debug It
1. **Print what your process actually sees.** Check `os.getenv()` values at startup. Don't trust what you exported in another terminal.

   ```python
   import os

   print("LANGGRAPH_API_KEY:", bool(os.getenv("LANGGRAPH_API_KEY")))
   print("OPENAI_API_KEY:", bool(os.getenv("OPENAI_API_KEY")))
   print("LANGSMITH_API_KEY:", bool(os.getenv("LANGSMITH_API_KEY")))
   ```

2. **Identify which service is failing.**
   - If the error happens when creating threads/runs with `langgraph_sdk`, it's likely LangGraph auth.
   - If it happens inside node execution when calling a model class like `ChatOpenAI`, it's probably provider auth.

3. **Run with verbose logging.** Turn on debug logs and inspect whether the failure comes from HTTP 401/403 responses. A real auth failure usually looks like:
   - `401 Unauthorized`
   - `403 Forbidden`
   - `AuthenticationError`
   - `PermissionDeniedError`

4. **Reproduce with minimal code.** Strip your app down to one client call, then add dependencies back one by one until it breaks again.

   ```python
   from langgraph_sdk import get_sync_client

   client = get_sync_client(url="http://localhost:2024")
   print(client.assistants.list())
   ```

If this fails immediately, the problem is not your graph logic. It's auth or endpoint configuration.
## Prevention

1. **Keep all secrets in one source of truth:** `.env` for local dev only; a secret manager for deployed environments.

2. **Validate required env vars at startup:**

   ```python
   import os

   required = ["LANGGRAPH_URL", "OPENAI_API_KEY"]
   missing = [k for k in required if not os.getenv(k)]
   if missing:
       raise RuntimeError(f"Missing env vars: {missing}")
   ```

3. **Match credentials to runtime:** local LangGraph server config for local runs; a cloud API key for hosted runs; provider keys available inside every worker/container.
By Cyprian Aarons, AI Consultant at Topiax.