# How to Fix 'invalid API key' in LlamaIndex (Python)
`invalid API key` in LlamaIndex usually means the underlying LLM or embedding provider rejected the credential you passed in. It typically shows up when you initialize OpenAI, Anthropic, Gemini, or another provider-backed component, then call a query engine, chat engine, or index build.
In practice, the error is rarely “LlamaIndex is broken.” It’s usually a bad environment variable, the wrong client class, or a key being read from the wrong place.
## The Most Common Cause
The #1 cause is this: you set an API key somewhere, but LlamaIndex never reads it because the provider class expects a specific env var name or explicit parameter.
A common mistake is mixing direct provider initialization with LlamaIndex defaults.
| Broken pattern | Fixed pattern |
|---|---|
| Key is not loaded before import/use | Key is loaded explicitly or exported correctly |
| Uses the wrong env var name | Uses the provider’s expected env var |
| Assumes `Settings` auto-detects everything | Passes the key directly when needed |
**Broken code**

```python
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

# This does nothing if OPENAI_API_KEY is not actually set in your shell
Settings.llm = OpenAI(model="gpt-4o-mini")

# Later...
response = Settings.llm.complete("Hello")
print(response)
```
If `OPENAI_API_KEY` is missing or invalid, you’ll usually get something like:

```
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided'}}
```

Or from a higher-level LlamaIndex flow:

```
ValueError: Invalid OpenAI API key
```
**Fixed code**

```python
import os

from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is missing")

Settings.llm = OpenAI(
    model="gpt-4o-mini",
    api_key=api_key,
)

response = Settings.llm.complete("Hello")
print(response)
```
If you prefer env vars, make sure they’re actually exported in the same process:

```bash
export OPENAI_API_KEY="sk-..."
python app.py
```

Not in another terminal. Not in your IDE settings, unless that IDE launches the process with those variables.
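To confirm what the running process actually received, you can print a masked view of the variable from inside that same process. A quick sketch (never log the full secret):

```python
import os

# Check what THIS process sees; run it the same way you run your app.
key = os.getenv("OPENAI_API_KEY")
print("OPENAI_API_KEY set:", bool(key))
if key:
    # Print only a masked prefix so the secret never lands in logs.
    print("starts with:", key[:6] + "...")
```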
## Other Possible Causes
### 1) You used the wrong environment variable name

Different providers expect different names. If you set `LLAMA_INDEX_API_KEY`, that does not magically work for OpenAI.

```bash
# Wrong for OpenAI-backed components
export LLAMA_INDEX_API_KEY="sk-..."

# Correct for OpenAI
export OPENAI_API_KEY="sk-..."
```

For Anthropic-backed components:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

For Google Gemini:

```bash
export GOOGLE_API_KEY="AIza..."
```
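If you support more than one provider, it can help to fail fast on the exact variable your chosen provider needs. A minimal sketch; `REQUIRED_ENV_VAR` and `require_api_key` are names made up for this example, not LlamaIndex APIs:

```python
import os

# Illustrative mapping from provider name to the env var its SDK expects.
REQUIRED_ENV_VAR = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
}

def require_api_key(provider: str) -> str:
    """Fail fast if the env var for this provider is missing."""
    var = REQUIRED_ENV_VAR[provider]
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{var} is not set; {provider} calls will fail with 401")
    return key

api_key = require_api_key("openai")
```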
### 2) Your .env file is not being loaded

LlamaIndex does not load your `.env` file for you; your app has to do it before any client is created.

```python
# Broken: dotenv never loaded
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")
```

Fix it by loading env vars first:

```python
from dotenv import load_dotenv

load_dotenv()

from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")
```

If you’re using FastAPI, Celery, or a notebook, verify the runtime loads `.env` before instantiating clients.
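For example, in a FastAPI app you can call `load_dotenv()` at the top of the entry module, before any import that constructs a client. A minimal sketch, assuming `python-dotenv` and `fastapi` are installed:

```python
# app.py - load env vars before importing anything that builds a client
from dotenv import load_dotenv

load_dotenv()  # must run before OPENAI_API_KEY is read below

from fastapi import FastAPI
from llama_index.llms.openai import OpenAI

app = FastAPI()
llm = OpenAI(model="gpt-4o-mini")  # picks up the key loaded from .env

@app.get("/health")
def health():
    return {"llm_ready": llm is not None}
```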
### 3) You passed a project ID or secret token instead of an API key

This happens a lot with cloud dashboards. The value looks valid, but it’s not an API key.

```python
# Wrong: this might be a project ID, service account field, or session token
llm = OpenAI(model="gpt-4o-mini", api_key="proj_12345")
```

Use the actual provider-issued API key:

```python
llm = OpenAI(model="gpt-4o-mini", api_key="sk-proj-...")
```

The format matters less than the credential type. A valid-looking string can still fail with:

```
AuthenticationError: Incorrect API key provided.
```
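If you want an early warning, you can sanity-check the shape of the credential before wiring it in. Key prefixes are only a heuristic and providers can change them, so treat this as a hint rather than validation:

```python
import os

key = os.getenv("OPENAI_API_KEY", "")

# Heuristic only: OpenAI keys currently start with "sk-" (often "sk-proj-").
# A bare project ID like "proj_..." is not an API key.
if key.startswith("proj_"):
    raise RuntimeError("This looks like a project ID, not an OpenAI API key")
if not key.startswith("sk-"):
    print("Warning: value does not look like an OpenAI API key prefix")
```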
### 4) You are using one provider’s class with another provider’s key

This shows up when someone copies sample code and swaps only part of it.

```python
# Broken: Anthropic key used with OpenAI client
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="gpt-4o-mini",
    api_key="sk-ant-...",  # wrong provider family
)
```

Use the matching class and credential pair:

```python
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(
    model="claude-3-5-sonnet-latest",
    api_key="sk-ant-...",
)
```
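One way to keep the pair from drifting apart is to construct the client in a single place, so the class and its env var always travel together. A sketch with an illustrative helper (`make_llm` is not a LlamaIndex function):

```python
import os

from llama_index.llms.anthropic import Anthropic
from llama_index.llms.openai import OpenAI

def make_llm(provider: str):
    """Build an LLM with the matching credential for each provider."""
    if provider == "openai":
        return OpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])
    if provider == "anthropic":
        return Anthropic(
            model="claude-3-5-sonnet-latest",
            api_key=os.environ["ANTHROPIC_API_KEY"],
        )
    raise ValueError(f"Unknown provider: {provider}")

llm = make_llm("openai")
```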
## How to Debug It

- Print what your process actually sees:

  ```python
  import os
  print(os.getenv("OPENAI_API_KEY"))
  ```

  If this prints `None`, your app never received the variable.

- Check whether the error comes from LlamaIndex or the provider SDK. Provider errors look like:

  - `openai.AuthenticationError`
  - `anthropic.AuthenticationError`
  - `google.api_core.exceptions.Unauthenticated`

  LlamaIndex wrappers may surface:

  - `ValueError: Invalid API key`
  - `RuntimeError: Error during LLM call`

- Instantiate the client explicitly:

  ```python
  from llama_index.llms.openai import OpenAI

  llm = OpenAI(model="gpt-4o-mini", api_key="sk-...")
  ```

  If explicit construction works but env-based config fails, your problem is loading order or env wiring.

- Test outside LlamaIndex: call the provider SDK directly with the same credential. If that fails, stop debugging LlamaIndex and fix auth first.

  ```python
  from openai import OpenAI as OAI

  client = OAI(api_key="sk-...")
  print(client.models.list())
  ```
## Prevention

- Load secrets at process start and fail fast if they’re missing.
- Don’t let your app reach query-time before discovering auth is broken.
- Keep provider credentials and client classes matched:
  - `OpenAI` with `OPENAI_API_KEY`
  - `Anthropic` with `ANTHROPIC_API_KEY`
  - Gemini with `GOOGLE_API_KEY`
- Add a startup health check: verify auth once on boot instead of waiting for user traffic to expose it. A minimal sketch follows below.
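Here is one shape such a check might take: a single cheap call at boot that crashes loudly on bad credentials. The `"ping"` probe is just an example; any minimal provider call works:

```python
from llama_index.core import Settings

def check_llm_auth() -> None:
    """Run once at startup, after Settings.llm has been configured."""
    try:
        Settings.llm.complete("ping")  # any cheap call that hits the provider
    except Exception as exc:
        # Surface auth problems before the app starts serving traffic.
        raise SystemExit(f"LLM auth check failed: {exc}") from exc

check_llm_auth()
```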
If you’re seeing `invalid API key` in LlamaIndex, treat it like an integration bug first, not an indexing bug. In most cases, fixing the credential source or the client/provider mismatch resolves it immediately.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.