How to Fix 'invalid API key in production' in LlamaIndex (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

What this error means

If you see invalid API key in production while using LlamaIndex TypeScript, the SDK is telling you the key it received is missing, malformed, or not available in the runtime where your app is executing.

This usually shows up after deployment to Vercel, AWS Lambda, Docker, or a serverless edge runtime when the code worked locally but fails once environment variables are loaded differently.

The Most Common Cause

The #1 cause is simple: the API key is being read at module load time, or from the wrong environment variable, before production env vars are available.

With LlamaIndex TypeScript, this often happens when you instantiate OpenAI or OpenAIEmbedding too early, then pass that into Settings or a query engine. In production, the key can be undefined at the moment the module is evaluated, and downstream calls fail with errors like:

  • OpenAIError: The api_key client option must be set
  • invalid_api_key
  • AuthenticationError: Incorrect API key provided

Broken vs fixed pattern

  Broken                              Fixed
  Reads env var once at import time   Reads env var inside runtime initialization
  Can freeze undefined into config    Ensures env vars exist before creating clients
  Fails in serverless/edge deploys    Works across local and production runtimes
// ❌ Broken: evaluated during import
import { OpenAI } from "@llamaindex/openai";
import { Settings } from "llamaindex";

const apiKey = process.env.OPENAI_API_KEY; // may be undefined in prod boot path

Settings.llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey,
});

// ✅ Fixed: initialize at request/runtime boundary
import { OpenAI } from "@llamaindex/openai";
import { Settings } from "llamaindex";

export function initLlamaIndex() {
  const apiKey = process.env.OPENAI_API_KEY;

  if (!apiKey) {
    throw new Error("Missing OPENAI_API_KEY");
  }

  Settings.llm = new OpenAI({
    model: "gpt-4o-mini",
    apiKey,
  });
}

If you’re using a singleton module that exports a prebuilt index or query engine, move initialization into a function. In production, module scope is where these bugs go to hide.
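
Here’s a minimal sketch of that pattern as a lazy singleton; the getQueryEngine name, the placeholder document, and the model choice are all illustrative, and a real app would build the index from its own data source:

// Lazy singleton: nothing reads process.env until the first request needs it.
import { Document, Settings, VectorStoreIndex } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";

let queryEngine: ReturnType<VectorStoreIndex["asQueryEngine"]> | undefined;

export async function getQueryEngine() {
  if (queryEngine) return queryEngine;

  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) throw new Error("Missing OPENAI_API_KEY");

  // Configure the LLM only after the env var is known to exist.
  Settings.llm = new OpenAI({ model: "gpt-4o-mini", apiKey });

  // Placeholder document; swap in your real loader or storage context.
  const index = await VectorStoreIndex.fromDocuments([
    new Document({ text: "example content" }),
  ]);

  queryEngine = index.asQueryEngine();
  return queryEngine;
}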

Other Possible Causes

1. Wrong environment variable name

A lot of teams store the key as OPENAI_APIKEY, OPEN_AI_API_KEY, or something custom, then wire the wrong name in code.

// ❌ Broken
const apiKey = process.env.OPEN_AI_API_KEY;

// ✅ Fixed
const apiKey = process.env.OPENAI_API_KEY;

If you’re deploying to Vercel or Netlify, confirm the exact variable name in the dashboard matches your code.
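
If you want the mismatch to surface immediately, a small hypothetical startup guard can report which near-miss names are actually set; the candidate list below is just an example:

// Hypothetical guard: fails loudly and hints at near-miss variable names.
const candidates = ["OPENAI_API_KEY", "OPENAI_APIKEY", "OPEN_AI_API_KEY"];
const present = candidates.filter((name) => process.env[name]);

if (!process.env.OPENAI_API_KEY) {
  throw new Error(
    `OPENAI_API_KEY is not set. Similar variables that are set: ${present.join(", ") || "none"}`
  );
}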

2. Using browser-side code for a server-only key

If this runs in a client bundle, the env var may be stripped out unless it’s explicitly exposed (for example, with a NEXT_PUBLIC_ prefix in Next.js, which you should never use for a secret key). That leads to runtime failures when LlamaIndex tries to call OpenAI from the browser.

// ❌ Broken: client component / browser bundle
"use client";

import { OpenAI } from "@llamaindex/openai";

const llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

// ✅ Fixed: keep API calls on the server
import { OpenAI } from "@llamaindex/openai";

export const llm = new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY!,
});

For Next.js App Router, put this in a server route, server action, or backend service. Never ship your private API key into browser code.
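
As a rough sketch of the server-side placement, assume a Next.js App Router route at app/api/chat/route.ts; the path, the request body shape, and the model name are all assumptions here:

// app/api/chat/route.ts — runs only on the server, so the key never ships to the browser.
import { NextResponse } from "next/server";
import { OpenAI } from "@llamaindex/openai";
import { Settings } from "llamaindex";

export async function POST(req: Request) {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    return NextResponse.json({ error: "Missing OPENAI_API_KEY" }, { status: 500 });
  }

  // Initialize at the request boundary, never in a client bundle.
  const llm = new OpenAI({ model: "gpt-4o-mini", apiKey });
  Settings.llm = llm;

  const { question } = await req.json();
  const result = await llm.complete({ prompt: question });

  return NextResponse.json({ answer: result.text });
}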

3. Key exists locally but not in production

This one is common with .env.local. Your laptop has the key; your deployment target doesn’t.

# local only
OPENAI_API_KEY=sk-proj-xxxx

In production, verify the secret exists in:

  • Vercel Environment Variables
  • AWS Lambda configuration / Secrets Manager
  • Docker runtime env
  • Kubernetes secret mount

If your app logs process.env.OPENAI_API_KEY === undefined, this is probably it.
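
If you load .env.local yourself with dotenv during development, keep that load out of the production path and rely on platform-injected variables instead. A minimal sketch, assuming dotenv is already installed:

// Development convenience only; production should get real env vars from the platform.
import { config } from "dotenv";

if (process.env.NODE_ENV !== "production") {
  config({ path: ".env.local" });
}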

4. Wrong provider client wired into LlamaIndex settings

Sometimes the key is valid, but you’re passing it to the wrong class or using a mismatched provider package version.

// ❌ Broken: mismatched config shape for current package version
import { OpenAI } from "@llamaindex/openai";

new OpenAI({
  model: "gpt-4o-mini",
  api_key: process.env.OPENAI_API_KEY,
});

// ✅ Fixed: use the expected property name for your installed version
new OpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

Check your installed package docs and lockfile. A lot of “invalid API key” issues are really “wrong constructor shape” issues after an upgrade.

How to Debug It

  1. Log whether the key exists at runtime

    console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
    

    If this prints false in prod and true locally, you’ve found the gap.

  2. Check where initialization happens

    • If new OpenAI(...) runs at module scope, move it into a function.
    • If it runs in a client component, move it server-side.
    • If it runs during build time, switch to runtime init.
  3. Inspect the exact error class. Look for messages like:

    • OpenAIError: The api_key client option must be set
    • AuthenticationError: Incorrect API key provided
    • invalid_api_key

    If you see auth errors before any model call succeeds, it’s almost always config-related.

  4. Verify deployment secrets directly

    • Print env vars in a secure startup check (see the sketch after this list).
    • Confirm secret injection in your platform UI.
    • Redeploy after updating secrets; some platforms don’t refresh running instances automatically.
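
For the startup check mentioned above, log only whether each required variable exists, never its value. A small sketch; the list of names is an example:

// Boot-time check: reports presence as true/false, never the secret itself.
const required = ["OPENAI_API_KEY"]; // add any other secrets your app needs

for (const name of required) {
  console.log(`${name} present:`, Boolean(process.env[name]));
}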

Prevention

  • Initialize LlamaIndex clients inside server-side functions, not at import time.
  • Fail fast with explicit checks:
    if (!process.env.OPENAI_API_KEY) throw new Error("Missing OPENAI_API_KEY");
    
  • Keep one source of truth for secrets across local .env, staging, and production.
  • Pin package versions for LlamaIndex provider packages so constructor shapes don’t drift under you.

If this error only appears after deployment, suspect environment loading first and model logic second. In most TypeScript + LlamaIndex setups, that’s where the bug lives.


By Cyprian Aarons, AI Consultant at Topiax.
