How to Fix 'invalid API key during development' in LlamaIndex (TypeScript)

By Cyprian Aarons. Updated 2026-04-21.

When you see invalid API key during development in a LlamaIndex TypeScript app, it usually means the OpenAI client inside your index pipeline is getting an empty, malformed, or wrong key. In practice, this shows up when you run locally with .env files, serverless dev tools, or a mismatched runtime where the key exists in one place but not where LlamaIndex is actually reading it.

The error often surfaces as an OpenAI 401-style failure wrapped by LlamaIndex classes like OpenAI, OpenAIEmbedding, ServiceContext, or Settings. The fix is usually not in LlamaIndex itself; it’s in how you load and pass configuration.

The Most Common Cause

The #1 cause is loading environment variables too late, or not loading them at all before constructing LlamaIndex objects.

In TypeScript projects, this often happens when you import and instantiate OpenAI or VectorStoreIndex at module scope before dotenv.config() runs. By the time the client is created, process.env.OPENAI_API_KEY is still undefined.

Broken vs fixed pattern

Broken: dotenv.config() runs after imports/instantiation
Fixed: Load env first, then construct clients

Broken: API key read from process.env before it's populated
Fixed: Explicitly pass the key into OpenAI / settings

Broken: Works in one shell, fails in another
Fixed: Deterministic startup order
// broken.ts
import { config } from "dotenv";
import { OpenAI } from "llamaindex";

// Client created before env is loaded
const llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

config(); // too late

console.log("key?", process.env.OPENAI_API_KEY);

// fixed.ts
import { config } from "dotenv";
config();

import { OpenAI } from "llamaindex";

const apiKey = process.env.OPENAI_API_KEY;

if (!apiKey) {
  throw new Error("OPENAI_API_KEY is missing");
}

const llm = new OpenAI({
  apiKey,
  model: "gpt-4o-mini",
});

console.log("LLM ready");

If you’re using the newer global settings pattern, the same rule applies:

import { config } from "dotenv";
config();

import { Settings, OpenAI } from "llamaindex";

Settings.llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

If the key is missing at assignment time, LlamaIndex will happily carry that broken state forward until the first request fails.
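Rather than discover the missing key on the first request, you can fail fast at startup. Here is a minimal sketch; the `requireEnv` helper is my own, not a LlamaIndex API:

```typescript
// Fail-fast helper: resolve a required env var or throw immediately at
// startup, instead of letting an undefined key fail on the first request.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name]?.trim();
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch, at the top of your entry file before any client is built:
// Settings.llm = new OpenAI({ apiKey: requireEnv("OPENAI_API_KEY"), model: "gpt-4o-mini" });
```

Because the helper throws before any LlamaIndex object exists, the broken state never gets carried forward.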

Other Possible Causes

1) Wrong environment variable name

A common mistake is setting OPEN_AI_API_KEY, API_KEY, or some custom name while your code reads OPENAI_API_KEY.

# broken .env
OPEN_AI_API_KEY=sk-...
// broken
const llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

Fix it by matching the exact variable name your code expects.

# fixed .env
OPENAI_API_KEY=sk-...
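To catch this class of mistake earlier, a startup check can look for common misnamings when the expected variable is unset. This is a sketch; the list of suspect names is illustrative, not exhaustive:

```typescript
// Startup sanity check: if OPENAI_API_KEY is missing but a commonly
// misspelled variant is present, report it explicitly. Returns undefined
// when everything looks fine, or a human-readable warning otherwise.
function checkKeyName(
  env: Record<string, string | undefined> = process.env,
): string | undefined {
  if (env.OPENAI_API_KEY) return undefined;
  const suspects = ["OPEN_AI_API_KEY", "OPENAI_KEY", "API_KEY"];
  const found = suspects.find((name) => env[name]);
  return found
    ? `OPENAI_API_KEY is unset, but ${found} is set. Did you misname the variable?`
    : "OPENAI_API_KEY is unset.";
}
```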

2) Whitespace or quotes in the key value

Copying keys from dashboards sometimes adds trailing spaces or hidden characters. That produces a valid-looking string that still fails auth.

// defensive trim
const apiKey = process.env.OPENAI_API_KEY?.trim();

If you’re reading from JSON or another config source:

const apiKey = String(appConfig.openaiApiKey).trim();
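Since this section also covers quotes: keys pasted into .env as OPENAI_API_KEY="sk-..." sometimes keep the literal quote characters. A small sanitizer can strip both whitespace and accidental surrounding quotes. The `sanitizeKey` helper below is my own sketch:

```typescript
// Strip surrounding whitespace and accidental quote characters that sneak
// in when a key is copied into .env by hand. Returns undefined for empty
// or missing input so callers can fail fast on a falsy result.
function sanitizeKey(raw: string | undefined): string | undefined {
  if (!raw) return undefined;
  const cleaned = raw.trim().replace(/^["']|["']$/g, "").trim();
  return cleaned.length > 0 ? cleaned : undefined;
}
```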

3) Using a browser runtime instead of Node.js

LlamaIndex TypeScript integrations expect a Node-compatible runtime for most backend use cases. If you try to run server-only code in a browser bundle, environment access and secrets handling break fast.

// broken in client-side code
console.log(process.env.OPENAI_API_KEY);

Keep LlamaIndex calls on the server:

// app/api/chat/route.ts or similar server file
import { OpenAI } from "llamaindex";

4) Mixing up OpenAI and Azure/OpenRouter credentials

LlamaIndex’s OpenAI class expects an OpenAI-style key unless you’ve configured a different provider correctly. If you paste an Azure key into an OpenAI client without setting endpoint/base URL options, auth fails.

// broken: Azure key used as if it were standard OpenAI
new OpenAI({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

For Azure, use the provider-specific configuration your stack requires rather than forcing it through the default OpenAI path.
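If you are unsure which kind of key you are holding, a rough format check can help while debugging. This is a heuristic based on historical key shapes (OpenAI keys starting with "sk-", Azure OpenAI keys as plain 32-character hex); key formats can change, so treat the result as a hint, not validation:

```typescript
// Heuristic only: classify a key by its apparent format to spot a pasted
// Azure key being fed to the default OpenAI path. Not a guarantee.
function keyLooksLike(key: string): "openai" | "azure-like" | "unknown" {
  if (key.startsWith("sk-")) return "openai";
  if (/^[0-9a-f]{32}$/i.test(key)) return "azure-like";
  return "unknown";
}
```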

How to Debug It

  1. Print what your process actually sees

    console.log("OPENAI_API_KEY exists:", !!process.env.OPENAI_API_KEY);
    console.log("length:", process.env.OPENAI_API_KEY?.length);
    

    If length is undefined or suspiciously short, your env loading is broken.

  2. Verify env loading happens before imports/initialization

    • Move dotenv.config() to the top of your entry file.
    • Avoid creating new OpenAI(...) at module scope until envs are confirmed loaded.
  3. Check whether the failure comes from LlamaIndex or upstream OpenAI. You'll usually see something like:

    • AuthenticationError: Incorrect API key provided
    • 401 Unauthorized
    • A wrapped error inside OpenAI / OpenAIAgentWorker / retrieval pipeline calls

    If direct SDK calls fail too, it’s not a LlamaIndex bug.

  4. Reduce to one minimal request. Test with only:

    import { config } from "dotenv";
    config();
    
    import { OpenAI } from "llamaindex";
    
    const llm = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY!,
      model: "gpt-4o-mini",
    });
    
    const res = await llm.complete({ prompt: "Say hello" });
    console.log(res.text);
    

    If this works, the issue is in your index setup, embeddings config, or runtime wiring.
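Step 2 above recommends avoiding new OpenAI(...) at module scope. One way to enforce that is a lazy factory that defers construction until first use, so the client can never be built before env loading has run. The `lazy` helper is my own sketch, not a LlamaIndex utility:

```typescript
// Generic lazy-init helper: the payload is constructed on the first call
// and cached, so module import order no longer matters for env access.
function lazy<T>(create: () => T): () => T {
  let value: T | undefined;
  return () => (value ??= create());
}

// Usage sketch (OpenAI construction shown for illustration):
// const getLlm = lazy(() => new OpenAI({
//   apiKey: process.env.OPENAI_API_KEY!,
//   model: "gpt-4o-mini",
// }));
// const res = await getLlm().complete({ prompt: "Say hello" });
```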

Prevention

  • Load configuration once at startup and fail fast if required secrets are missing.
  • Keep all LlamaIndex calls on the server side; don’t let frontend code touch API keys.
  • Use explicit constructor args for critical settings like apiKey, especially in multi-env apps where implicit env lookup gets brittle.

If you’re building anything beyond a toy script, treat API keys like typed dependencies: validate them early, inject them explicitly, and never assume your runtime loaded them for you.



By Cyprian Aarons, AI Consultant at Topiax.
