How to Fix 'invalid API key' in LlamaIndex (TypeScript)
When LlamaIndex throws invalid API key, it usually means the OpenAI client inside your TypeScript app is sending an empty, malformed, or wrong credential. You’ll typically see this during index creation, embedding generation, or the first chat/completion call.
In practice, this error is almost never “LlamaIndex is broken.” It’s usually a config issue: wrong env var name, missing process.env wiring, bad .env loading order, or mixing providers with the wrong key.
The Most Common Cause
The #1 cause is simple: you passed the wrong environment variable name or never loaded it before constructing the LlamaIndex client.
With LlamaIndex TypeScript, the OpenAI-backed classes like OpenAI and OpenAIEmbedding read the apiKey option you pass, falling back to the OPENAI_API_KEY environment variable. If the value they end up with is undefined, you’ll get errors like:
- Error: Invalid API key
- OpenAIError: The API key provided is invalid
- 401 Unauthorized
Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Reads the wrong env var | Uses the expected env var |
| Constructs client before loading .env | Loads .env first |
| Passes an empty string | Passes a valid key |
```typescript
// ❌ Broken
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  apiKey: process.env.OPEN_AI_KEY, // wrong variable name
  model: "gpt-4o-mini",
});

const response = await llm.complete("Hello");
```
```typescript
// ✅ Fixed
import "dotenv/config";
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

const response = await llm.complete("Hello");
```
If you’re using embeddings too, make sure you fix both places:
```typescript
import "dotenv/config";
import { Settings, OpenAI, OpenAIEmbedding } from "llamaindex";

Settings.llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o-mini",
});

Settings.embedModel = new OpenAIEmbedding({
  apiKey: process.env.OPENAI_API_KEY,
});
```
Other Possible Causes
1) Your .env file is not loaded in time
If you access process.env before loading dotenv, the value will be undefined.
```typescript
// ❌ Broken
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // undefined at this point
});

import "dotenv/config"; // too late: the client was constructed first
```
```typescript
// ✅ Fixed
import "dotenv/config";
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```
If you use a custom entrypoint, verify your runtime actually loads .env. Some frameworks do not load it automatically.
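If it helps to see why loading order matters, here is a minimal sketch of what dotenv-style loading does: parse KEY=VALUE lines into a map before anything reads them. This is an illustration only, not the real library; use the actual dotenv package in practice.

```typescript
// Minimal sketch of what dotenv-style loading does (illustration only;
// use the real dotenv package in production): parse KEY=VALUE lines.
function parseEnv(contents: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    // skip blank lines and comments
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    const key = trimmed.slice(0, eq).trim();
    let value = trimmed.slice(eq + 1).trim();
    // strip surrounding quotes, as dotenv does
    if (
      (value.startsWith('"') && value.endsWith('"')) ||
      (value.startsWith("'") && value.endsWith("'"))
    ) {
      value = value.slice(1, -1);
    }
    out[key] = value;
  }
  return out;
}
```

The point: nothing here is magic. If this parsing runs after your client reads process.env, the key simply isn't there yet.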
2) You’re using the wrong provider’s key
An Anthropic key, Azure OpenAI token, or Gemini API key will not work against the OpenAI client. LlamaIndex TypeScript will still surface it as an auth failure because the downstream provider rejects it.
```typescript
// ❌ Broken: Anthropic key passed into the OpenAI client
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```
```typescript
// ✅ Fixed: use the matching provider class and credential
import { Anthropic } from "llamaindex";

const llm = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```
If you are on Azure OpenAI, don’t use the plain OpenAI class unless your setup explicitly supports it. Use the Azure-specific configuration path for that deployment.
3) Your secret contains whitespace or quotes
This happens a lot when copying keys from dashboards or storing them in CI variables.
```bash
# ❌ Broken
OPENAI_API_KEY=" sk-proj-abc123 "

# ✅ Fixed
OPENAI_API_KEY=sk-proj-abc123
```
Also check for newline characters if your secret manager injects them. A trailing newline can be enough to trigger:
- Invalid API key
- 401 Unauthorized
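When you can't control how the secret is injected, trimming on your side is a cheap defense. A minimal sketch, assuming a helper name cleanApiKey of my own invention (not a LlamaIndex API):

```typescript
// Hypothetical helper (not part of LlamaIndex): normalize a key pulled from
// env vars or secret managers by stripping whitespace, newlines, and quotes.
function cleanApiKey(raw: string | undefined): string | undefined {
  if (!raw) return undefined;
  let key = raw.trim(); // removes spaces and trailing newlines
  if (
    (key.startsWith('"') && key.endsWith('"')) ||
    (key.startsWith("'") && key.endsWith("'"))
  ) {
    key = key.slice(1, -1).trim(); // drop accidental surrounding quotes
  }
  return key.length > 0 ? key : undefined;
}

// Usage sketch: new OpenAI({ apiKey: cleanApiKey(process.env.OPENAI_API_KEY) })
```

Returning undefined for an empty result keeps the failure loud at construction time instead of sending an empty string to the provider.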
4) You’re setting the key on one instance but using another
In LlamaIndex TypeScript, people often configure one object and accidentally call another that was created earlier without credentials.
```typescript
// ❌ Broken
import { Settings, OpenAI } from "llamaindex";

const llm1 = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
Settings.llm = llm1;

const llm2 = new OpenAI(); // no key here
await llm2.complete("Test");
```
```typescript
// ✅ Fixed
import { Settings, OpenAI } from "llamaindex";

Settings.llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

await Settings.llm.complete("Test");
```
This also shows up when a helper function creates its own client internally. Trace which instance actually makes the request.
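One way to avoid the two-instance trap is a lazy singleton around client creation, so every call site shares the same configured object. A generic sketch (the once helper is my own name, not a LlamaIndex API):

```typescript
// Generic lazy-singleton helper (my own sketch, not a LlamaIndex API):
// every caller gets the same instance instead of a fresh, keyless one.
function once<T>(factory: () => T): () => T {
  let instance: T | undefined;
  return () => (instance ??= factory());
}

// Usage sketch:
//   const getLlm = once(() => new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));
//   await getLlm().complete("Test"); // always the configured instance
```

Because the factory runs at most once, there is exactly one place where the credential is read, which makes auth failures easy to localize.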
How to Debug It
- Print the resolved value before creating the client:

  ```typescript
  console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
  console.log("Length:", process.env.OPENAI_API_KEY?.length);
  ```

  If it prints `false` or `0`, stop there. The problem is upstream of LlamaIndex.
- Check which class is making the request. The key may need to reach any of:
  - OpenAI
  - OpenAIAgent
  - OpenAIPredictor
  - OpenAIEmbedding

  If embeddings fail during index build but chat works later, your embedding config is probably missing or wrong.
- Verify `.env` loading order:
  - Put `import "dotenv/config";` at the top of your entry file.
  - In Next.js, confirm server-only code reads secrets on the server side.
  - In Docker/CI, confirm the variable exists at runtime, not just locally.
- Swap in a hardcoded test key temporarily. If allowed in your environment, set a known-good test credential directly in local dev:

  ```typescript
  const llm = new OpenAI({ apiKey: "sk-proj-test..." });
  ```

  If that works, your code path is fine and your env wiring is broken.
Prevention
- Load configuration once at startup and fail fast if required secrets are missing:

  ```typescript
  if (!process.env.OPENAI_API_KEY) {
    throw new Error("Missing OPENAI_API_KEY");
  }
  ```
- Keep provider-specific keys separate:
  - OPENAI_API_KEY
  - ANTHROPIC_API_KEY
  - AZURE_OPENAI_API_KEY
- Centralize model setup instead of creating ad hoc clients across files. That makes it obvious which credential each LlamaIndex class uses and prevents silent drift between embeddings and chat models.
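The fail-fast and centralization advice can be sketched together in one config module. requiredEnv is my own helper name, the filename is assumed, and the LlamaIndex wiring is shown only as a comment:

```typescript
// Sketch of the validation half of a central config module (e.g. src/llm.ts,
// a filename I'm assuming). requiredEnv is my own helper, not a LlamaIndex API.
function requiredEnv(name: string): string {
  const value = process.env[name]?.trim();
  if (!value) {
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}

// The rest of the module would then wire LlamaIndex exactly once, e.g.:
//   const apiKey = requiredEnv("OPENAI_API_KEY");
//   Settings.llm = new OpenAI({ apiKey, model: "gpt-4o-mini" });
//   Settings.embedModel = new OpenAIEmbedding({ apiKey });
```

Every other file imports from this module, so a missing or malformed key fails at startup with a clear message instead of surfacing as a 401 deep inside a request.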
If you’re still seeing invalid API key after checking these points, inspect the exact request path and provider class first. In TypeScript projects with LlamaIndex, auth failures almost always come down to config mismatch rather than library behavior.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.