How to Fix 'authentication failed' in LangChain (Python)
When LangChain throws authentication failed, it usually means the client reached the provider, but the API key, token, or auth context was rejected. In Python, this shows up most often when you instantiate an LLM or chat model with the wrong environment variable, a stale key, or a provider-specific config mismatch.
The tricky part is that the error often appears far away from the real bug. The stack trace might mention openai.AuthenticationError, anthropic.AuthenticationError, or a generic ValueError: Authentication failed, even though the actual issue is just one bad env var.
The Most Common Cause
The #1 cause is simple: you passed the wrong API key, or LangChain is reading an empty one.
This happens a lot when people set OPENAI_API_KEY in one shell, run Python from another, or use the wrong variable name for the provider. It also happens when .env is loaded too late.
| Broken pattern | Fixed pattern |
|---|---|
| Reads env var after client creation | Loads env before creating the LangChain model |
| Uses wrong key name | Uses provider-correct env var |
| Hardcodes a stale key | Injects a valid key at runtime |
```python
# BROKEN
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")  # reads OPENAI_API_KEY now, at construction time

# .env loaded too late -- the key above was already resolved
from dotenv import load_dotenv
load_dotenv()

print(llm.invoke("Hello"))
```
```python
# FIXED
from dotenv import load_dotenv
load_dotenv()

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")  # picks up OPENAI_API_KEY correctly
print(llm.invoke("Hello"))
```
Another common broken version is using the wrong env var name:
```python
# BROKEN
import os
from langchain_openai import ChatOpenAI

os.environ["API_KEY"] = "sk-..."  # LangChain/OpenAI will ignore this
llm = ChatOpenAI(model="gpt-4o")
```

```python
# FIXED
import os
from langchain_openai import ChatOpenAI

os.environ["OPENAI_API_KEY"] = "sk-..."
llm = ChatOpenAI(model="gpt-4o")
```
If you’re using Anthropic, Azure OpenAI, Cohere, or Google, the variable names differ. Don’t assume every provider reads OPENAI_API_KEY.
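To make the provider-specific names concrete, here is a minimal sketch of the env vars the major integrations expect. The `missing_keys` helper is hypothetical (not part of LangChain); the variable names follow each provider's official SDK conventions.

```python
import os

# Hypothetical mapping: the API-key env var each provider's LangChain
# integration reads by default. Names match the official SDK conventions.
PROVIDER_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "cohere": "COHERE_API_KEY",
    "google": "GOOGLE_API_KEY",
    "azure_openai": "AZURE_OPENAI_API_KEY",  # Azure also needs endpoint/deployment
}

def missing_keys(providers):
    """Return the providers whose expected env var is unset or blank."""
    return [
        p for p in providers
        if not (os.getenv(PROVIDER_KEY_VARS[p]) or "").strip()
    ]
```

Run it once at startup for the providers you actually use, and you catch the wrong-variable-name mistake before any LangChain call is made.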
Other Possible Causes
1) Wrong model/provider package
A lot of auth failures are really “wrong client for this endpoint.”
```python
# BROKEN: OpenAI client pointed at an Anthropic model
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="claude-3-5-sonnet",  # not an OpenAI model
    api_key="sk-..."            # an OpenAI key
)
```

```python
# FIXED: use the correct LangChain integration
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-5-sonnet-latest",
    api_key="sk-ant-..."  # Anthropic keys start with sk-ant-
)
```
2) Stale or revoked API key
If your key was rotated, revoked, or copied with extra whitespace, you’ll get auth failures that look valid at a glance.
```python
# BROKEN
from langchain_openai import ChatOpenAI

api_key = "sk-live-key\n"  # hidden newline from copy/paste
llm = ChatOpenAI(api_key=api_key)
```
Fix by trimming and validating:
```python
# FIXED
import os
from langchain_openai import ChatOpenAI

api_key = os.environ["OPENAI_API_KEY"].strip()
llm = ChatOpenAI(api_key=api_key)
```
3) Environment variables not available in your runtime
This is common in Docker, CI/CD, notebooks, and serverless jobs.
```bash
# BROKEN: .env exists locally but not inside the container
docker run my-app python app.py
```
Fix by passing env vars explicitly:
```bash
docker run \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  my-app python app.py
```
For notebooks:
```python
import os
print(repr(os.getenv("OPENAI_API_KEY")))
```
If that prints None, LangChain has nothing to send.
4) Azure OpenAI config mismatch
Azure uses deployment names and endpoint settings. If you pass OpenAI-style parameters to Azure classes, auth can fail even with a valid key.
```python
# BROKEN: missing Azure endpoint / deployment setup
import os
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    model="gpt-4o",
    api_key=os.environ["AZURE_OPENAI_API_KEY"]
)
```

```python
# FIXED: provide Azure-specific config
import os
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
    api_version="2024-02-15-preview",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)
```
How to Debug It
- Print the exact env vars your process sees
  - Check for `None`, empty strings, and whitespace.
  - Use `repr()` so hidden newlines show up.

```python
import os
print("OPENAI_API_KEY =", repr(os.getenv("OPENAI_API_KEY")))
print("ANTHROPIC_API_KEY =", repr(os.getenv("ANTHROPIC_API_KEY")))
```

- Confirm you’re using the right LangChain integration
  - `ChatOpenAI` for OpenAI-compatible endpoints.
  - `ChatAnthropic` for Anthropic.
  - `AzureChatOpenAI` for Azure OpenAI.
  - Don’t mix model names across providers.
- Call the provider directly outside LangChain
  - If raw SDK auth fails too, the problem is not LangChain.
  - For example, test with the `openai` or `anthropic` SDKs directly using the same key.
- Check where `.env` gets loaded
  - Load it before importing/constructing clients.
  - In apps with multiple entry points, make sure every path loads config consistently.
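To rule LangChain out with no extra installs, a stdlib-only sketch can hit OpenAI's `/v1/models` endpoint with the same key (standard Bearer auth). The `check_openai_auth` helper is hypothetical; a 401 here means the key itself is bad, regardless of LangChain.

```python
import os
import urllib.error
import urllib.request

def check_openai_auth() -> str:
    """Probe OpenAI auth directly, bypassing LangChain entirely."""
    key = (os.getenv("OPENAI_API_KEY") or "").strip()
    if not key:
        return "missing key"  # nothing to send -- fix the env var first
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",  # cheap authenticated GET
        headers={"Authorization": f"Bearer {key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return f"ok ({resp.status})"
    except urllib.error.HTTPError as e:
        return f"rejected ({e.code})"  # 401 = the key is the problem

if __name__ == "__main__":
    print("auth check:", check_openai_auth())
```

If this prints `ok (200)` but LangChain still fails, only then is it worth digging into LangChain configuration.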
Prevention
- Load secrets at process startup, before any model objects are created.
- Keep provider-specific config in one place:
  - `OPENAI_API_KEY`
  - `ANTHROPIC_API_KEY`
  - `AZURE_OPENAI_ENDPOINT`
  - `AZURE_OPENAI_DEPLOYMENT`
- Add a startup assertion so bad config fails fast:

```python
import os
assert os.getenv("OPENAI_API_KEY"), "Missing OPENAI_API_KEY"
```
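A single assert can be extended into a hypothetical `validate_config` helper that checks every required variable at startup and also catches the whitespace and empty-string failure modes discussed earlier:

```python
import os

REQUIRED_VARS = ["OPENAI_API_KEY"]  # extend with the vars your providers need

def validate_config(required=REQUIRED_VARS):
    """Fail fast with one clear message listing every config problem."""
    problems = []
    for name in required:
        value = os.getenv(name)
        if value is None:
            problems.append(f"{name} is not set")
        elif value != value.strip():
            problems.append(f"{name} has leading/trailing whitespace")
        elif not value:
            problems.append(f"{name} is empty")
    if problems:
        raise RuntimeError("Bad config: " + "; ".join(problems))
```

Call it once in your entry point, before any LangChain model is constructed, so a bad deploy dies immediately instead of failing mid-request.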
If you still see authentication failed, treat it as a config problem first, not a LangChain bug. In practice, that’s where it usually lives.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.