How to Fix 'authentication failed' in LangChain (TypeScript)
What this error means
authentication failed in LangChain usually means the model provider rejected your request before any generation happened. In TypeScript, you’ll typically see it when ChatOpenAI, AzureChatOpenAI, ChatAnthropic, or another provider wrapper is instantiated with a missing, wrong, or stale credential.
It often shows up during local dev after moving env vars around, switching providers, or deploying to a new environment where the secret never made it into runtime.
The Most Common Cause
The #1 cause is a bad API key setup: wrong env var name, undefined value at runtime, or loading .env too late.
With LangChain JS/TS, this usually surfaces as an error like:
- `Error: 401 Unauthorized`
- `AuthenticationError: Incorrect API key provided`
- `Error: authentication failed`
- `OpenAIError: Request failed with status code 401`
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Reads env after client creation | Loads env before client creation |
| Uses wrong variable name | Uses provider-specific variable |
| Passes undefined key | Fails fast if key is missing |
```ts
// ❌ Broken
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  apiKey: process.env.OPEN_AI_KEY, // wrong name: this is undefined
  model: "gpt-4o-mini",
});

const res = await llm.invoke("Say hello");
console.log(res.content);
```
```ts
// ✅ Fixed
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error("Missing OPENAI_API_KEY");
}

const llm = new ChatOpenAI({
  apiKey,
  model: "gpt-4o-mini",
});

const res = await llm.invoke("Say hello");
console.log(res.content);
```
If you’re using Azure OpenAI, the same mistake happens with the wrong env var names:
```ts
// ✅ Azure example
import "dotenv/config";
import { AzureChatOpenAI } from "@langchain/openai";

const llm = new AzureChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_INSTANCE_NAME,
  azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME,
  azureOpenAIApiVersion: "2024-02-15-preview",
});
```
If any of those are missing or mismatched with your Azure resource, you’ll get a 401-style auth failure.
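One way to make those failures obvious is to validate every required variable at startup instead of passing possibly-undefined values into the constructor. Here is a minimal sketch; the helper names `requireEnv` and `loadAzureConfig` are illustrative, not a LangChain API:

```ts
// Fail fast on missing or whitespace-only env vars instead of passing
// undefined into the client constructor.
function requireEnv(name: string): string {
  // trim() also guards against keys pasted with stray whitespace or newlines
  const value = process.env[name]?.trim();
  if (!value) {
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}

// Collect everything the Azure wrapper needs in one place, so a bad deploy
// fails at boot rather than on the first request.
function loadAzureConfig() {
  return {
    azureOpenAIApiKey: requireEnv("AZURE_OPENAI_API_KEY"),
    azureOpenAIApiInstanceName: requireEnv("AZURE_OPENAI_INSTANCE_NAME"),
    azureOpenAIApiDeploymentName: requireEnv("AZURE_OPENAI_DEPLOYMENT_NAME"),
  };
}
```

With this pattern the error you see is "Missing required env var: AZURE_OPENAI_API_KEY" at boot, not a vague 401 at request time.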
Other Possible Causes
1) .env is not loaded in time
If you instantiate the model before dotenv has run, every `process.env` lookup returns `undefined`.
```ts
// ❌ Broken: dotenv.config() runs after the client is created
import dotenv from "dotenv";
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY, // still undefined here
});

dotenv.config(); // too late
```
```ts
// ✅ Fixed: load env before anything reads it
import "dotenv/config";
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY!,
});
```
2) You’re using the wrong provider’s key
An OpenAI key will not work with Anthropic, and vice versa. This sounds obvious, but it happens constantly in multi-provider apps.
```ts
// ❌ Broken: Anthropic wrapper with an OpenAI key
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  apiKey: process.env.OPENAI_API_KEY,
});
```
```ts
// ✅ Fixed
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```
3) Your deployment environment doesn’t have the secret
Local works, production fails. That usually means the secret exists on your laptop but not in Vercel, Docker, ECS, GitHub Actions, or your runtime platform.
```yaml
# Example Docker Compose snippet
services:
  app:
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
```
If `${OPENAI_API_KEY}` is unset on the host machine at container start time, your app gets an empty value.
4) The key was revoked or rotated
If someone rotated credentials in the provider console, your app still holds the old one. The error message is often a plain auth failure without much detail.
There is no code fix here; update the secret at its source. Check:

- OpenAI dashboard
- Anthropic console
- Azure portal
- Secret manager / CI variables
This also happens when copying keys with extra whitespace or newline characters from password managers or scripts.
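A quick way to catch pasted whitespace is to compare the raw value against its trimmed form. This is a sketch; the helper name is illustrative:

```ts
// Returns true when a key carries leading/trailing whitespace or a trailing
// newline -- a common cause of auth failures with a key that looks correct.
function hasHiddenWhitespace(key: string | undefined): boolean {
  return key !== undefined && key !== key.trim();
}

if (hasHiddenWhitespace(process.env.OPENAI_API_KEY)) {
  console.warn("OPENAI_API_KEY has leading/trailing whitespace; re-copy the key");
}
```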
How to Debug It
- Print whether the env var exists, not its full value:

  ```ts
  console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
  ```

  If this prints `false`, stop looking at LangChain and fix config loading first.

- Check which class is throwing:
  - `ChatOpenAI` points to OpenAI config/auth.
  - `AzureChatOpenAI` points to Azure resource/auth.
  - `ChatAnthropic` points to Anthropic auth.
  - A generic `401 Unauthorized` from a chain usually bubbles up from one of these wrappers.

- Call the provider directly with the same key. If direct SDK calls fail too, LangChain is not the problem.

  ```ts
  import OpenAI from "openai";

  const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  await client.models.list();
  ```

- Inspect deployment secrets separately from local `.env`:
  - Local `.env`
  - CI secrets
  - Hosting platform env vars
  - Container runtime env vars

  One of these is usually missing or stale.
Prevention
- Load config once at startup and fail fast if required keys are missing.
- Keep provider keys named explicitly: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `AZURE_OPENAI_API_KEY`.
- Add a startup health check that verifies credentials before serving traffic.
- Don't commit `.env` assumptions into code; make runtime config explicit in each environment.
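The startup health check can be sketched as a small harness that runs one cheap authenticated call per provider before the server accepts traffic (for OpenAI, `models.list()` works well). The types and names below are illustrative, not a LangChain API; the checks are injected so the pattern is clear:

```ts
// Each check wraps one cheap authenticated call (e.g. OpenAI's models.list()).
type CredentialCheck = { name: string; run: () => Promise<void> };

// Run every check and collect failures instead of crashing on the first one,
// so a single boot log shows all misconfigured providers at once.
async function verifyCredentials(checks: CredentialCheck[]): Promise<string[]> {
  const failures: string[] = [];
  for (const check of checks) {
    try {
      await check.run();
    } catch (err) {
      failures.push(`${check.name}: ${(err as Error).message}`);
    }
  }
  return failures; // empty array means every provider authenticated
}
```

At boot, call `verifyCredentials` with one entry per provider and refuse to start serving if the returned array is non-empty.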
If you want this class of issue to disappear in production, treat API keys like any other dependency: validate them at boot, log their presence safely, and keep provider-specific config isolated per integration.
By Cyprian Aarons, AI Consultant at Topiax.