How to Fix 'authentication failed' in LangGraph (Python)
What authentication failed usually means
In LangGraph, "authentication failed" almost always means the graph is trying to call a backend service with missing, invalid, or mismatched credentials. In practice, this shows up when your graph invokes an LLM, a checkpointer, a LangSmith endpoint, or another provider and the token/API key is wrong.
You'll usually see it during `graph.invoke(...)`, `graph.stream(...)`, or when compiling or running a graph that touches external services.
The Most Common Cause
The #1 cause is simple: the API key is not loaded into the process that runs your LangGraph code.
This happens a lot when:
- the key exists in your shell but not in your app runtime
- `.env` is not loaded
- you set the wrong environment variable name
- you pass an empty string to the client
Here’s the broken pattern versus the fixed one.
| Broken | Fixed |
|---|---|
| Key never loaded | Key loaded before client creation |
| Client created with empty auth | Client created after env resolution |
```python
# broken.py
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

llm = ChatOpenAI(model="gpt-4o")  # expects OPENAI_API_KEY in env

def call_model(state):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(dict)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()

result = graph.invoke({"messages": [{"role": "user", "content": "Hello"}]})
```
If OPENAI_API_KEY is missing, you’ll often get something like:
```
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided'}}
```
Or from LangGraph-adjacent services:
```
langgraph_api.errors.AuthenticationError: authentication failed
```
Now the fixed version:
```python
# fixed.py
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

load_dotenv()  # loads OPENAI_API_KEY from .env before client creation

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is missing")

llm = ChatOpenAI(model="gpt-4o", api_key=api_key)

def call_model(state):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

builder = StateGraph(dict)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()

result = graph.invoke({"messages": [{"role": "user", "content": "Hello"}]})
```
The important part is ordering:
- load env first
- validate the key exists
- construct the client after that
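That ordering can be wrapped in a small helper so it's enforced everywhere. This is a sketch, not part of LangChain or LangGraph; `require_env` is a hypothetical name.

```python
# Sketch of a helper enforcing the load -> validate -> construct order.
# Call it for every secret BEFORE any client object is created.
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or fail loudly."""
    value = os.getenv(name, "").strip()
    if not value:
        raise RuntimeError(f"{name} is missing or empty; set it before building the graph")
    return value

# Usage (after load_dotenv()):
# api_key = require_env("OPENAI_API_KEY")
# llm = ChatOpenAI(model="gpt-4o", api_key=api_key)
```

Failing at import time with a named variable beats a 401 buried in a graph trace.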
Other Possible Causes
1. Wrong provider key for the model you’re using
A common mistake is using an OpenAI key with an Anthropic model wrapper, or vice versa.
```python
# wrong: Anthropic model, but only an OpenAI key is configured
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # expects ANTHROPIC_API_KEY
```
Fix:
```python
import os
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-5-sonnet-latest",
    api_key=os.environ["ANTHROPIC_API_KEY"],
)
```
2. Environment variable name mismatch
LangChain integrations expect specific variable names. If you set OPEN_AI_KEY instead of OPENAI_API_KEY, auth will fail.
```bash
# wrong
export OPEN_AI_KEY=sk-...

# right
export OPENAI_API_KEY=sk-...
```
Same issue for common providers:
- `ANTHROPIC_API_KEY`
- `GOOGLE_API_KEY`
- `AZURE_OPENAI_API_KEY`
- `LANGSMITH_API_KEY`
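A small diagnostic can catch name typos before any client is built. This is a hypothetical helper, and the near-miss lists below are illustrative guesses, not an exhaustive mapping:

```python
# Hypothetical sanity check: report when a near-miss variable (e.g. OPEN_AI_KEY)
# is set while the canonical name is not.
NEAR_MISSES = {
    "OPENAI_API_KEY": ["OPEN_AI_KEY", "OPENAI_KEY", "OPENAI_APIKEY"],
    "ANTHROPIC_API_KEY": ["ANTHROPIC_KEY", "CLAUDE_API_KEY"],
}

def diagnose_key_name(expected: str, env: dict) -> str:
    if env.get(expected):
        return "ok"
    for alias in NEAR_MISSES.get(expected, []):
        if env.get(alias):
            return f"found {alias}; did you mean {expected}?"
    return f"{expected} not set"

# diagnose_key_name("OPENAI_API_KEY", dict(os.environ))
```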
3. LangSmith tracing configured with bad credentials
If your graph uses LangSmith tracing and the project/auth config is wrong, you may see auth failures even though model calls work.
```bash
export LANGCHAIN_TRACING_V2=true
export LANGSMITH_API_KEY=bad-token
export LANGSMITH_ENDPOINT=https://api.smith.langchain.com
```
Fix by validating all three values:
- `LANGCHAIN_TRACING_V2=true`
- a valid `LANGSMITH_API_KEY`
- the correct endpoint for your deployment
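A minimal, stdlib-only sketch of that validation. It checks the shape of the config, not whether the server actually accepts the key (that still requires a real request):

```python
# Validate the three LangSmith tracing settings at startup.
# Shape check only -- an expired key will still pass this.
from urllib.parse import urlparse

def check_langsmith_config(env: dict) -> list:
    problems = []
    if env.get("LANGCHAIN_TRACING_V2", "").lower() != "true":
        problems.append("LANGCHAIN_TRACING_V2 is not 'true'")
    if not env.get("LANGSMITH_API_KEY"):
        problems.append("LANGSMITH_API_KEY is missing")
    endpoint = env.get("LANGSMITH_ENDPOINT", "https://api.smith.langchain.com")
    if urlparse(endpoint).scheme != "https":
        problems.append(f"endpoint is not https: {endpoint}")
    return problems
```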
4. Checkpointer or hosted LangGraph endpoint auth is wrong
If you’re using a remote checkpointer or LangGraph Cloud/Platform endpoint, stale tokens can trigger:
```
langgraph_api.errors.AuthenticationError: authentication failed
```
Example config issue:
```python
checkpointer = RemoteSaver(
    api_url="https://your-endpoint",
    api_key="old-token",  # stale token
)
```
Fix:
- rotate the token if needed
- verify tenant/project scoping
- confirm the token matches the environment you're hitting
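When comparing tokens across environments, never log the full secret. A tiny masking helper (the name is illustrative) lets you compare prefixes safely:

```python
# Log a masked token so you can compare prefixes across environments
# (local vs container vs CI) without leaking the secret.
def mask_token(token: str, visible: int = 4) -> str:
    if len(token) <= visible:
        return "*" * len(token)
    return token[:visible] + "*" * (len(token) - visible)

# print("using key:", mask_token(os.environ["LANGSMITH_API_KEY"]))
```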
How to Debug It
- Print the exact failing stack trace.
  - Find whether the exception comes from `openai.AuthenticationError`, `anthropic.AuthenticationError`, or `langgraph_api.errors.AuthenticationError`.
  - That tells you whether it's model auth or LangGraph service auth.
- Check which client is being created.
  - Log the model wrapper and its resolved config.
  - Verify the API key source before instantiation.
```python
import os

print("OPENAI_API_KEY present:", bool(os.getenv("OPENAI_API_KEY")))
print("ANTHROPIC_API_KEY present:", bool(os.getenv("ANTHROPIC_API_KEY")))
```
- Run the provider call outside LangGraph.
  - Test the model client directly.
  - If direct invocation fails, LangGraph is not the problem.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
print(llm.invoke("ping"))
```
- Check runtime environment differences.
  - Local shell, Docker container, CI/CD, and notebooks often run with different environments.
  - A key in your terminal does not exist inside a container unless it is passed through.
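To compare environments quickly, a one-liner scan of which `*_API_KEY` variables the current process can actually see is often enough. Run it in each runtime (shell, container, notebook) and diff the output; the helper name is an assumption:

```python
# List which *_API_KEY variables are visible to THIS process.
# Values are reduced to True/False so no secret is printed.
import os

def present_api_keys(env=None) -> dict:
    env = dict(os.environ) if env is None else env
    return {k: bool(v) for k, v in env.items() if k.endswith("_API_KEY")}

# print(present_api_keys())
```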
Prevention
- Load and validate all secrets at startup, before building graphs.
- Keep provider keys explicit in production code instead of relying on hidden environment assumptions.
- Add a startup health check that verifies each external dependency once before serving traffic.
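A startup health check can be as simple as one cheap probe per dependency. This is a generic sketch; the probe callables are assumptions you'd replace with a real model ping or checkpointer read for your stack:

```python
# Run one probe per external dependency before serving traffic.
# Any exception (including an auth failure) is reported, not swallowed silently.
from typing import Callable, Dict

def run_health_checks(checks: Dict[str, Callable[[], None]]) -> Dict[str, str]:
    results = {}
    for name, probe in checks.items():
        try:
            probe()
            results[name] = "ok"
        except Exception as exc:  # report every failure by dependency name
            results[name] = f"failed: {exc}"
    return results

# Example wiring (probes are placeholders):
# run_health_checks({
#     "llm": lambda: llm.invoke("ping"),
#     "checkpointer": lambda: checkpointer.get(config),
# })
```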
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.