How to Fix 'authentication failed' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
Tags: authentication-failed, langchain, typescript

What this error means

authentication failed in LangChain usually means the model provider rejected your request before any generation happened. In TypeScript, you’ll typically see it when ChatOpenAI, AzureChatOpenAI, ChatAnthropic, or another provider wrapper is instantiated with a missing, wrong, or stale credential.

It often shows up during local dev after moving env vars around, switching providers, or deploying to a new environment where the secret never made it into runtime.

The Most Common Cause

The #1 cause is a bad API key setup: wrong env var name, undefined value at runtime, or loading .env too late.

With LangChain JS/TS, this usually surfaces as an error like:

  • Error: 401 Unauthorized
  • AuthenticationError: Incorrect API key provided
  • Error: authentication failed
  • OpenAIError: Request failed with status code 401

Broken vs fixed pattern

  Broken                            Fixed
  Reads env after client creation   Loads env before client creation
  Uses wrong variable name          Uses provider-specific variable
  Passes undefined key              Fails fast if key is missing
// ❌ Broken
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  apiKey: process.env.OPEN_AI_KEY, // wrong name
  model: "gpt-4o-mini",
});

const res = await llm.invoke("Say hello");
console.log(res.content);

// ✅ Fixed
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error("Missing OPENAI_API_KEY");
}

const llm = new ChatOpenAI({
  apiKey,
  model: "gpt-4o-mini",
});

const res = await llm.invoke("Say hello");
console.log(res.content);

If you’re using Azure OpenAI, the same mistake happens with the wrong env var names:

// ✅ Azure example
import "dotenv/config";
import { AzureChatOpenAI } from "@langchain/openai";

const llm = new AzureChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_INSTANCE_NAME,
  azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME,
  azureOpenAIApiVersion: "2024-02-15-preview",
});

If any of those are missing or mismatched with your Azure resource, you’ll get a 401-style auth failure.
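Before constructing the Azure client, it can help to verify all three values are present in one pass. A minimal sketch; `missingVars` is our own helper, not a LangChain or Azure SDK API:

```typescript
// Hypothetical startup check for the Azure env vars used above
// (missingVars is our own helper, not part of LangChain).
const requiredAzureVars = [
  "AZURE_OPENAI_API_KEY",
  "AZURE_OPENAI_INSTANCE_NAME",
  "AZURE_OPENAI_DEPLOYMENT_NAME",
];

function missingVars(env: Record<string, string | undefined>): string[] {
  // Treat unset and empty/whitespace-only values the same way.
  return requiredAzureVars.filter((name) => !env[name]?.trim());
}

// At boot you would pass process.env and throw if the result is non-empty.
console.log(missingVars({ AZURE_OPENAI_API_KEY: "key" }));
// → ["AZURE_OPENAI_INSTANCE_NAME", "AZURE_OPENAI_DEPLOYMENT_NAME"]
```

Failing on the whole list at once beats fixing one missing variable per deploy.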

Other Possible Causes

1) .env is not loaded in time

If dotenv runs after you read process.env.*, the value is undefined. Note that in an ES module, static imports are hoisted and evaluated before the module body runs, so the usual way to hit this is calling dotenv.config() too late:

// ❌ Broken
import { ChatAnthropic } from "@langchain/anthropic";
import * as dotenv from "dotenv";

const llm = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY, // undefined: config() hasn't run yet
});

dotenv.config(); // too late

// ✅ Fixed
import "dotenv/config";
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY!,
});

2) You’re using the wrong provider’s key

An OpenAI key will not work with Anthropic, and vice versa. This sounds obvious, but it happens constantly in multi-provider apps.

// ❌ Broken: Anthropic wrapper with OpenAI key
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  apiKey: process.env.OPENAI_API_KEY,
});

// ✅ Fixed
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

3) Your deployment environment doesn’t have the secret

Local works, production fails. That usually means the secret exists on your laptop but not in Vercel, Docker, ECS, GitHub Actions, or your runtime platform.

# Example Docker Compose snippet
services:
  app:
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}

If ${OPENAI_API_KEY} is unset on the host machine at container start time, your app gets an empty value.
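Note that in this case the variable is often set to an empty string rather than truly unset, so a `typeof` check alone is not enough. A small sketch (`isUsableKey` is a hypothetical helper, not from any SDK):

```typescript
// An empty-string env var passes a typeof check but still fails auth,
// so treat unset and empty the same way.
function isUsableKey(value: string | undefined): boolean {
  return typeof value === "string" && value.trim().length > 0;
}

console.log(isUsableKey(undefined)); // false: never set
console.log(isUsableKey(""));        // false: set but empty (the Docker case)
console.log(isUsableKey("sk-abc"));  // true
```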

4) The key was revoked or rotated

If someone rotated credentials in the provider console, your app still holds the old one. The error message is often a plain auth failure without much detail.

// No code fix here; update the secret source.
// Check:
// - OpenAI dashboard
// - Anthropic console
// - Azure portal
// - Secret manager / CI variables

This also happens when copying keys with extra whitespace or newline characters from password managers or scripts.
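A quick way to catch the whitespace case is to inspect the key before using it. A minimal sketch; `hasHiddenWhitespace` is our own helper, not part of any SDK:

```typescript
// Hypothetical helper: flags keys pasted with trailing newlines,
// tabs, or surrounding spaces.
function hasHiddenWhitespace(key: string): boolean {
  return key !== key.trim() || /[\r\n\t ]/.test(key);
}

console.log(hasHiddenWhitespace("sk-abc123\n")); // true: trailing newline
console.log(hasHiddenWhitespace("sk-abc123"));   // false: clean key
```

If it returns true, re-copy the key or apply .trim() where you load it.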

How to Debug It

  1. Print whether the env var exists, not its full value

    console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
    

    If this prints false, stop looking at LangChain and fix config loading first.

  2. Check which class is throwing

    • ChatOpenAI points to OpenAI config/auth.
    • AzureChatOpenAI points to Azure resource/auth.
    • ChatAnthropic points to Anthropic auth.
    • A generic 401 Unauthorized from a chain usually bubbles up from one of these wrappers.
  3. Call the provider directly with the same key

    If direct SDK calls fail too, LangChain is not the problem.

    import OpenAI from "openai";
    
    const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
    await client.models.list();
    
  4. Inspect deployment secrets separately from local .env

    • Local .env
    • CI secrets
    • Hosting platform env vars
    • Container runtime env vars

    One of these is usually missing or stale.

Prevention

  • Load config once at startup and fail fast if required keys are missing.
  • Keep provider keys named explicitly:
    • OPENAI_API_KEY
    • ANTHROPIC_API_KEY
    • AZURE_OPENAI_API_KEY
  • Add a startup health check that verifies credentials before serving traffic.
  • Don’t commit .env assumptions into code; make runtime config explicit in each environment.
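The first two bullets can be sketched as a tiny loader that fails at boot instead of at first request. `requireEnv` is our own helper here, not a LangChain utility:

```typescript
// Hypothetical fail-fast loader: read a required value once at startup,
// trim stray whitespace, and throw immediately if it's missing.
function requireEnv(
  env: Record<string, string | undefined>,
  name: string
): string {
  const value = env[name]?.trim();
  if (!value) {
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}

// At boot: const openaiKey = requireEnv(process.env, "OPENAI_API_KEY");
console.log(requireEnv({ OPENAI_API_KEY: " sk-demo " }, "OPENAI_API_KEY"));
// → "sk-demo"
```

Because the error names the variable, a failed deploy tells you exactly which secret to fix.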

If you want this class of issue to disappear in production, treat API keys like any other dependency: validate them at boot, log their presence safely, and keep provider-specific config isolated per integration.


By Cyprian Aarons, AI Consultant at Topiax.