# How to Fix 'invalid API key during development' in LlamaIndex (Python)
When you see `invalid API key` during development in a LlamaIndex app, it usually means the OpenAI client inside your indexing/query pipeline is reading the wrong key, or no key at all. In practice, this shows up when you run a script locally, switch environments, or instantiate LlamaIndex objects before your environment variables are loaded.

The error often appears alongside `openai.AuthenticationError`, `ValueError: Invalid API key`, or a failure during `OpenAIEmbedding`, `OpenAI`, or `ServiceContext` setup. The fix is usually not in the model call itself, but in how credentials are loaded and passed into LlamaIndex.
## The Most Common Cause
The #1 cause is simple: your code creates a LlamaIndex OpenAI-backed object before `OPENAI_API_KEY` is available, or you set the wrong environment variable name.
This pattern breaks a lot in local development:
| Broken | Fixed |
|---|---|
| API key missing at import/runtime | API key loaded before LlamaIndex objects are created |
| Hardcoded empty string | Explicitly pass a valid key or load from .env |
```python
# ❌ Broken pattern
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# OPENAI_API_KEY is not set yet, or .env wasn't loaded
llm = OpenAI(model="gpt-4o-mini")
embed_model = OpenAIEmbedding(model="text-embedding-3-small")

docs = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(docs, llm=llm, embed_model=embed_model)
```
```python
# ✅ Fixed pattern
import os

from dotenv import load_dotenv

load_dotenv()  # must happen before OpenAI / embedding objects are created

from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing"

# On llama-index 0.10+, wire models in through the global Settings object
Settings.llm = OpenAI(model="gpt-4o-mini")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

docs = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(docs)
```
If you’re using `.env`, make sure it contains the exact variable name:

```
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxx
```
And if you’re exporting manually:

```shell
export OPENAI_API_KEY="sk-proj-xxxxxxxxxxxxxxxx"
python app.py
```
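A quick way to convince yourself that an exported variable actually reaches a child process: child processes inherit the parent's environment by default, but they never read a `.env` file off disk on their own. A stdlib-only sketch using a throwaway placeholder value:

```python
import os
import subprocess
import sys

# A child process inherits the parent's environment by default,
# so an exported variable is visible there -- a .env file is not.
os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"  # placeholder

out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getenv('OPENAI_API_KEY'))"],
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # prints: sk-demo-not-a-real-key
```

If this prints the placeholder but your real app still fails, the key is being lost somewhere between your shell and the runtime that executes your LlamaIndex code.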
## Other Possible Causes
### 1. Wrong environment variable name
LlamaIndex does not guess your secret's name. If you set `OPEN_AI_KEY` or `API_KEY`, the OpenAI client will still fail.
```python
# ❌ Wrong
os.environ["API_KEY"] = "sk-..."

# ✅ Right
os.environ["OPENAI_API_KEY"] = "sk-..."
```
This also happens when your `.env` file uses the wrong casing:
```
# ❌ Wrong
openai_api_key=sk-...

# ✅ Right
OPENAI_API_KEY=sk-...
```
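One way to catch these near-miss names automatically is to scan `os.environ` for anything suspiciously close to the expected name. A stdlib sketch (`find_suspect_env_names` is a name invented here, and the 0.6 similarity cutoff is arbitrary):

```python
import difflib
import os

def find_suspect_env_names(expected: str = "OPENAI_API_KEY") -> list[str]:
    """Return env var names that look like misspelled or miscased
    variants of the expected name (e.g. OPEN_AI_KEY, openai_api_key)."""
    suspects = []
    for name in os.environ:
        if name == expected:
            continue
        similarity = difflib.SequenceMatcher(
            None, name.lower(), expected.lower()
        ).ratio()
        if similarity >= 0.6:  # arbitrary threshold; tune to taste
            suspects.append(name)
    return suspects
```

Running this right before you build your index will surface an `OPEN_AI_KEY` or a lowercase `openai_api_key` that would otherwise be silently ignored.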
### 2. The .env file is not being loaded
If you rely on python-dotenv but never call `load_dotenv()`, the variable won't exist at runtime.
```python
# ❌ Broken
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")

# ✅ Fixed
from dotenv import load_dotenv
load_dotenv()

from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")
```
If you run through Docker, VS Code, or pytest, confirm that process also sees the .env file. A local shell export does not automatically carry into containers or test runners.
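To compare what's in the file with what the process actually sees, a minimal stdlib `.env` parser helps (an illustration only, not a python-dotenv replacement; `load_env_file` is a name invented here):

```python
import os
from pathlib import Path

def load_env_file(path: str = ".env") -> dict:
    """Minimal .env parser: KEY=VALUE lines; '#' comments and blanks skipped."""
    values = {}
    env_path = Path(path)
    if not env_path.exists():
        return values
    for line in env_path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"').strip("'")
    return values

# Diff the file against the live process environment:
file_value = load_env_file().get("OPENAI_API_KEY")
proc_value = os.getenv("OPENAI_API_KEY")
```

If `file_value` is set but `proc_value` is `None`, the file exists but was never loaded into this runtime; if both are set and differ, something (a container, a shell export, a CI secret) is overriding your `.env`.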
### 3. You passed the key to the wrong place
Some developers set the key on their own config object but never connect it to LlamaIndex’s underlying provider.
```python
import os

from llama_index.llms.openai import OpenAI

# ❌ Broken: custom config exists, but LlamaIndex doesn't use it here
settings = {"api_key": "sk-..."}
llm = OpenAI(model="gpt-4o-mini")

# ✅ Fixed: rely on the env var, or pass supported args if your version allows it
os.environ["OPENAI_API_KEY"] = "sk-..."
llm = OpenAI(model="gpt-4o-mini")
```
If you’re using an older LlamaIndex version that still has `ServiceContext` (deprecated and later removed in favor of `Settings`), be explicit:

```python
from llama_index.core import ServiceContext
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-4o-mini"),
    embed_model=OpenAIEmbedding(model="text-embedding-3-small"),
)
```
### 4. Your key is valid elsewhere but blocked in this runtime
This happens when a CI job, notebook kernel, or container has stale credentials. You'll see errors like:

- `openai.AuthenticationError: Incorrect API key provided`
- `ValueError: Invalid API key`
- `AuthenticationError: Error code: 401`
Check whether the runtime has an old value cached:

```shell
echo $OPENAI_API_KEY
```
In Python:

```python
import os
print(os.getenv("OPENAI_API_KEY"))
```
If it prints `None`, an empty string, or an old secret, that's your issue.
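To tell *which* key a runtime sees without pasting the secret into logs, a small stdlib helper can print a masked fingerprint instead (a sketch; `describe_key` is a name invented here):

```python
import os

def describe_key(name: str = "OPENAI_API_KEY") -> str:
    """Return a masked fingerprint of an env var so you can tell which
    secret the runtime actually sees, without leaking it into logs."""
    value = os.getenv(name)
    if value is None:
        return f"{name} is not set"
    if value == "":
        return f"{name} is set but empty"
    # Show only the prefix and last 4 characters, plus the length
    return f"{name} = {value[:7]}...{value[-4:]} ({len(value)} chars)"
```

Comparing the fingerprint across your shell, container, and CI runner quickly reveals the runtime holding a stale secret.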
## How to Debug It
### 1. Print the effective environment variable

Before creating any LlamaIndex object:

```python
import os
print(repr(os.getenv("OPENAI_API_KEY")))
```

If this is `None` or empty, stop there.

### 2. Confirm dotenv loading order

`load_dotenv()` must run before imports that instantiate clients.

Bad order:

```python
from llama_index.llms.openai import OpenAI
from dotenv import load_dotenv
load_dotenv()
```

Good order:

```python
from dotenv import load_dotenv
load_dotenv()

from llama_index.llms.openai import OpenAI
```

### 3. Isolate the failing component

Test the embedding model and the LLM separately:

```python
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

print(OpenAI(model="gpt-4o-mini"))
print(OpenAIEmbedding(model="text-embedding-3-small"))
```

If one fails and the other doesn't, you've narrowed it down.

### 4. Check for version mismatches

Old examples online use deprecated APIs like `ServiceContext`. Verify your installed packages:

```shell
pip show llama-index openai python-dotenv
```

If you copied code from an older tutorial, update it to current LlamaIndex patterns.
## Prevention
- Load secrets once at process startup and fail fast if they're missing:

  ```python
  assert os.getenv("OPENAI_API_KEY"), "Missing OPENAI_API_KEY"
  ```

- Keep `.env`, Docker env vars, CI secrets, and notebook kernels aligned. One runtime with a valid key and another without is how these bugs survive code review.

- Add a tiny startup check in every AI service. If authentication fails during boot instead of mid-request, debugging gets much cheaper.
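That startup check can be a stdlib-only sketch like the following (`startup_check` is a name invented here, and the `sk-` prefix test is an assumption about OpenAI key format, not a guarantee):

```python
import os
import sys

def startup_check() -> None:
    """Run once at boot: crash immediately if credentials look unusable,
    instead of failing on the first model call mid-request."""
    key = os.getenv("OPENAI_API_KEY", "")
    problems = []
    if not key:
        problems.append("OPENAI_API_KEY is missing or empty")
    elif key != key.strip():
        problems.append("OPENAI_API_KEY has surrounding whitespace")
    elif not key.startswith("sk-"):  # assumption: OpenAI keys start with 'sk-'
        problems.append("OPENAI_API_KEY does not look like an OpenAI key")
    if problems:
        sys.exit("startup check failed: " + "; ".join(problems))
```

Call it at the top of your service's entry point, before any LlamaIndex object is constructed. For full confidence you can follow it with one cheap authenticated API call, so a revoked key also fails at boot rather than mid-request.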
By Cyprian Aarons, AI Consultant at Topiax.