How to Fix 'authentication failed in production' in LangChain (Python)
What this error means
authentication failed in production usually means LangChain successfully reached your model provider, but the credentials it used were missing, wrong, expired, or not available in the runtime environment. In practice, this shows up when code works locally and fails after deployment to Docker, ECS, Lambda, Render, Railway, or Kubernetes.
The failure usually surfaces as a provider exception wrapped by LangChain, such as openai.AuthenticationError or anthropic.AuthenticationError, typically carrying an HTTP 401 status. If the failing call happens mid-chain, the provider exception can be buried under LangChain's own stack frames, so read the traceback from the bottom up.
The Most Common Cause
The #1 cause is simple: your local .env file is loaded in development, but production never gets the same environment variables.
With LangChain, this often looks fine on your laptop because load_dotenv() picks up OPENAI_API_KEY, but the deployed container has no such variable. The chain initializes, then the first API call dies with an auth error.
| Broken pattern | Fixed pattern |
|---|---|
| Reads secrets only from a local .env file | Reads secrets from real production env vars |
| Assumes load_dotenv() works in prod | Fails fast if required vars are missing |
| Creates client without validation | Validates config before building chain |
```python
# broken.py
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

load_dotenv()  # works locally, useless if .env is not shipped to prod

llm = ChatOpenAI(model="gpt-4o-mini")  # expects OPENAI_API_KEY in env
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm

result = chain.invoke({"text": "policy wording"})
print(result)
```
```python
# fixed.py
import os

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Missing OPENAI_API_KEY in production environment")

llm = ChatOpenAI(model="gpt-4o-mini", api_key=api_key)
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm

result = chain.invoke({"text": "policy wording"})
print(result)
```
If you are using OpenAI directly underneath LangChain, the raw error often looks like:

```
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided'}}
```

For Anthropic-backed chains, you may see:

```
anthropic.AuthenticationError: Error code: 401 - invalid x-api-key
```
Other Possible Causes
1) Wrong environment variable name for the provider
LangChain integrations expect specific env var names. If you set API_KEY instead of OPENAI_API_KEY, the client initializes without credentials.
```
# broken
API_KEY=sk-...

# fixed
OPENAI_API_KEY=sk-...
```
For Anthropic:

```
ANTHROPIC_API_KEY=your-key-here
```
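To catch a misnamed variable before the first request, a small startup check can map each provider you use to the env var its LangChain integration reads. This is a minimal sketch; the `REQUIRED_VARS` mapping here is an illustration, so adjust it to the providers your app actually uses:

```python
import os

# Illustrative mapping: provider label -> env var its integration reads.
# Extend this for Azure, Google, etc. as needed.
REQUIRED_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def missing_provider_vars(providers):
    """Return the env var names that are required but unset or empty."""
    return [
        REQUIRED_VARS[p]
        for p in providers
        if not os.getenv(REQUIRED_VARS[p])
    ]
```

Call it once at startup and raise if the returned list is non-empty, so a typo like `API_KEY` instead of `OPENAI_API_KEY` fails the deploy instead of the first user request.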
For Azure OpenAI, the issue is often not just the key but also endpoint and version:
```python
import os

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2024-02-15-preview",
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_deployment="gpt-4o",
)
```
2) Secret exists locally but not in the deployed runtime
This happens a lot in Docker and serverless. Your build step may have access to secrets, but the running container does not.
```dockerfile
# broken: COPY . . silently depends on a .env file being present
# in the build context to supply secrets at runtime
COPY . .
CMD ["python", "app.py"]
```

Use real runtime injection instead:

```dockerfile
# fixed: add .env to .dockerignore so secrets are never baked into
# the image, and rely on env vars injected by the platform
# (e.g. docker run -e OPENAI_API_KEY=... or ECS/Kubernetes secrets)
COPY . .
CMD ["python", "app.py"]
```
And confirm your platform injects variables at runtime:
- ECS task definition env/secrets section
- Kubernetes Secret mounted as an env var
- Lambda environment variables in the function config
3) Key rotation or expired credentials
If your org rotates keys weekly or monthly, old deployments keep using stale values until redeployed.
```python
# broken assumption: a hardcoded old key somewhere in config management
os.environ["OPENAI_API_KEY"] = "sk-old-key"
```
Fix by pulling from a secret manager at deploy time or startup:
```python
import os

def get_api_key() -> str:
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY missing")
    return key
```
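If your org rotates keys, resolving the credential through a secret manager at startup beats baking values into config. Here is a provider-agnostic sketch with a pluggable fetcher: the `fetch_secret` callable is a stand-in for your real client (AWS Secrets Manager, Vault, GCP Secret Manager), and the env-var fallback keeps local development working:

```python
import os
from typing import Callable, Optional

def resolve_api_key(
    name: str,
    fetch_secret: Optional[Callable[[str], Optional[str]]] = None,
) -> str:
    """Resolve a credential: secret manager first, env var fallback.

    `fetch_secret` is a stand-in for a real secret-manager client;
    wire in boto3, hvac, etc. at deploy time.
    """
    if fetch_secret is not None:
        value = fetch_secret(name)
        if value:
            return value
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} not found in secret manager or environment")
    return value
```

Because the fetcher is injected, stale deployments pick up rotated keys on restart without a code change, and the function is trivially testable without network access.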
4) Provider mismatch in your LangChain integration
Sometimes the code imports one provider package but points at another backend. For example, using ChatOpenAI with an Anthropic key or vice versa.
```python
# broken: OpenAI wrapper handed an Anthropic credential
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(api_key=os.getenv("ANTHROPIC_API_KEY"))
```
Use the correct integration class:

```python
import os

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-5-sonnet-latest",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
)
```
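A cheap sanity check can catch this mismatch at startup: Anthropic keys currently begin with `sk-ant-`, while OpenAI keys begin with `sk-`. Key formats are not contractual and can change, so treat this sketch as a warning signal rather than a hard gate:

```python
def guess_key_provider(key: str) -> str:
    """Heuristic guess at which provider issued a key.

    Based on current key prefixes (sk-ant- for Anthropic, sk- for
    OpenAI); formats may change, so use only as a warning signal.
    """
    if key.startswith("sk-ant-"):
        return "anthropic"
    if key.startswith("sk-"):
        return "openai"
    return "unknown"
```

Log a warning if the guess disagrees with the integration class you are constructing; that one line would have caught the broken example above.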
How to Debug It
- Print which env vars are actually present in production. Do not print full secrets; check presence only.

```python
import os

for name in ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "AZURE_OPENAI_API_KEY"]:
    print(name, bool(os.getenv(name)))
```

- Reproduce outside LangChain with the raw provider SDK.
  - If the raw SDK fails with 401/403, LangChain is not the problem.
  - If the raw SDK works, inspect how LangChain constructs the client.
- Log the exact exception class.
  - Look for openai.AuthenticationError, anthropic.AuthenticationError, or HTTP status 401.
  - Wrap your call and inspect type(e).__name__.

```python
try:
    chain.invoke({"text": "hello"})
except Exception as e:
    print(type(e).__name__, str(e))
    raise
```

- Verify deployment config, not just source code.
  - Check container/task/env settings.
  - Confirm secrets are injected into the running process.
  - Restart after rotating keys; old pods keep old env values.
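The exception-inspection step can be wrapped into one helper that classifies an error as an auth failure without importing every provider SDK. It matches on class name and checks a `status_code` attribute defensively, since attribute names vary across SDKs; treat it as a best-effort sketch, not a guaranteed match for every provider:

```python
def is_auth_error(exc: Exception) -> bool:
    """Best-effort check for an authentication failure.

    Matches on class name (e.g. openai.AuthenticationError,
    anthropic.AuthenticationError) and, defensively, on a 401/403
    status_code attribute if the SDK error carries one.
    """
    if "AuthenticationError" in type(exc).__name__:
        return True
    status = getattr(exc, "status_code", None)
    return status in (401, 403)
```

Use it in your except block to route auth failures to a distinct log line or alert, so they are not lumped in with rate limits and timeouts.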
Prevention
- Fail fast at startup if required keys are missing. Do not wait until the first request.
- Use one secret source per environment: local .env for dev only, a secret manager for prod.
- Add a health check that performs a minimal provider call so auth issues surface before traffic hits user requests.
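The health-check idea can be sketched with a pluggable ping so it is cheap to test: in production you would pass a closure that makes the cheapest real provider call you can, e.g. a one-word chain invocation. The `ping` callable here is an assumption of this sketch, not a LangChain API:

```python
from typing import Callable

def llm_healthcheck(ping: Callable[[], object]) -> dict:
    """Run one minimal provider call and report auth health.

    `ping` should make the cheapest real call your provider allows,
    e.g. lambda: chain.invoke({"text": "ping"}).
    """
    try:
        ping()
        return {"status": "ok"}
    except Exception as e:  # surface the class name for fast triage
        return {"status": "error", "type": type(e).__name__, "detail": str(e)}
```

Expose the result on a /healthz endpoint so a deploy with a missing or rotated key goes unhealthy immediately instead of failing on live traffic.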
If you want fewer midnight incidents, treat LLM auth like database auth: explicit config, runtime validation, and no hidden local-only assumptions.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit