How to Fix 'invalid API key' in LangGraph (Python)
When LangGraph throws an "invalid API key" error, it usually means the underlying model client never received a valid credential, or it received one in the wrong format. In practice, this shows up when you build a graph that calls OpenAI, Anthropic, or another provider through LangChain/LangGraph and the key is missing, malformed, or loaded too late.
The error often appears at graph invocation time, not when you define the graph. That makes it annoying to trace, because the failure is usually in environment setup or client construction, not in the graph logic itself.
The Most Common Cause
The #1 cause is simple: you created the model before loading the environment variable, or you passed the wrong variable name.
With LangGraph, your graph node often wraps a `ChatOpenAI`, `ChatAnthropic`, or similar model. If that client is instantiated before `os.environ["OPENAI_API_KEY"]` exists, you'll get errors like:

- `openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided...'}}`
- `langchain_core.exceptions.OutputParserException` if downstream code masks the real issue
- `ValueError: Did not find openai_api_key...`
- `AuthenticationError: invalid_api_key`
Broken vs fixed:

```python
# broken.py
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph

# API key not loaded yet
model = ChatOpenAI(model="gpt-4o-mini")

graph = StateGraph(dict)
# ... build nodes using model ...
```

```python
# fixed.py
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph

load_dotenv()  # load before creating clients
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing"

model = ChatOpenAI(model="gpt-4o-mini")

graph = StateGraph(dict)
# ... build nodes using model ...
```
If you’re using Anthropic, the pattern is identical:
```python
# broken: client created before any key is loaded
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-latest")
```

```python
# fixed: load the environment first, then pass the key explicitly
import os

from dotenv import load_dotenv

load_dotenv()

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-5-sonnet-latest",
    anthropic_api_key=os.environ["ANTHROPIC_API_KEY"],
)
```
The important detail: don’t assume LangGraph will “pick up” your key later. The model client reads credentials when it’s constructed or first used.
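One robust pattern is to construct the client lazily inside a cached factory, so the key check always runs after configuration has loaded. The sketch below assumes `langchain_openai` is your provider package; the `get_model` helper is a hypothetical name, not a LangGraph API:

```python
import os
from functools import lru_cache


@lru_cache(maxsize=None)
def get_model():
    """Build the chat model on first use, after configuration is loaded."""
    key = os.environ.get("OPENAI_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY missing: load your .env before the first model call"
        )
    # Imported here so this module can be imported before credentials exist.
    from langchain_openai import ChatOpenAI

    return ChatOpenAI(model="gpt-4o-mini", api_key=key)
```

Graph nodes then call `get_model()` instead of holding a module-level client, which moves the failure from "mysterious 401 at invoke time" to a clear error at first use.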
Other Possible Causes
1) Wrong environment variable name
A common mistake is setting API_KEY or OPENAI_KEY instead of the provider-specific variable.
```bash
# wrong
export API_KEY=sk-...
export OPENAI_KEY=sk-...

# right
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-...
```
LangChain integrations look for specific names. If you’re using custom deployment settings, pass the key explicitly to the client.
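As a guard against generic names, you can resolve the key through an explicit provider-to-variable mapping and pass it to the client yourself. The `resolve_key` helper and the mapping below are illustrative, not part of LangChain:

```python
import os

# Environment variable names the LangChain integrations expect.
EXPECTED_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}


def resolve_key(provider: str) -> str:
    """Fetch the provider's key, failing loudly if the variable is unset."""
    var = EXPECTED_ENV_VARS[provider]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set. Check that you did not export it under a "
            f"generic name like API_KEY or OPENAI_KEY."
        )
    return key


# Usage (requires langchain_openai installed):
# from langchain_openai import ChatOpenAI
# model = ChatOpenAI(model="gpt-4o-mini", api_key=resolve_key("openai"))
```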
2) Extra whitespace or quotes in .env
This happens a lot when copying from password managers or shell scripts.
```bash
# broken
OPENAI_API_KEY=" sk-proj-abc123 "
ANTHROPIC_API_KEY='claude-key-with-spaces '
```

Use clean values:

```bash
# fixed
OPENAI_API_KEY=sk-proj-abc123
ANTHROPIC_API_KEY=claude-key-with-spaces
```
If you suspect hidden characters:
```python
import os

key = os.getenv("OPENAI_API_KEY", "")
print(repr(key))
```
If you see leading/trailing spaces, strip them before passing to the client.
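If stripping by hand gets tedious, a small sanitizer can normalize the value once at startup. The `clean_env_key` helper below is a hypothetical utility, not a LangChain API:

```python
import os


def clean_env_key(name: str) -> str:
    """Read an env var and remove stray whitespace and wrapping quotes,
    which commonly sneak in when values are copied from password managers."""
    raw = os.environ.get(name, "")
    return raw.strip().strip("'\"").strip()
```

Then pass `clean_env_key("OPENAI_API_KEY")` explicitly to the client instead of relying on automatic discovery.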
3) .env file never loaded in your runtime
In notebooks, worker processes, Docker containers, and serverless runtimes, .env is not loaded automatically.
```python
from dotenv import load_dotenv

load_dotenv()  # required in local dev if you're relying on .env files
```
In production containers, prefer real environment variables over .env files. Be careful with the Dockerfile approach: `ENV OPENAI_API_KEY=${OPENAI_API_KEY}` only resolves if a matching `ARG` is declared, and it bakes the secret into the image layers. Passing the key at runtime is safer:

```bash
docker run -e OPENAI_API_KEY="$OPENAI_API_KEY" your-image
```

Or inject it through your deployment platform's secret manager.
4) Mixing providers with the wrong key
If your graph uses Anthropic but you set an OpenAI key, the error can still look like an auth failure.
```python
# broken: Anthropic model with only an OpenAI env var present
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-latest")
```
Make sure each node uses matching credentials:
```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

openai_model = ChatOpenAI(model="gpt-4o-mini")  # needs OPENAI_API_KEY
anthropic_model = ChatAnthropic(model="claude-3-5-sonnet-latest")  # needs ANTHROPIC_API_KEY
```
5) Rotated or revoked key
Sometimes nothing is wrong in code. The provider just invalidated the secret.
Typical signs:
- It worked yesterday and fails today.
- The same code works on one machine but not another.
- The error message includes `401 Unauthorized`.

Replace the secret in your vault or provider dashboard and redeploy.
How to Debug It
- **Print the active environment variables before constructing models.**

  ```python
  import os

  print("OPENAI_API_KEY:", repr(os.getenv("OPENAI_API_KEY")))
  print("ANTHROPIC_API_KEY:", repr(os.getenv("ANTHROPIC_API_KEY")))
  ```

  If this prints `None`, your runtime never loaded the secret.

- **Check whether the model is being created too early.** Look for module-level code like this:

  ```python
  model = ChatOpenAI(model="gpt-4o-mini")
  ```

  Move client creation inside a function that runs after config loading.

- **Run a direct provider call outside LangGraph.** Before debugging graph nodes, validate auth with a minimal call:

  ```python
  from langchain_openai import ChatOpenAI

  model = ChatOpenAI(model="gpt-4o-mini")
  print(model.invoke("ping"))
  ```

  If this fails with `AuthenticationError`, LangGraph is not the problem.

- **Inspect container/runtime secrets.** In Docker, Kubernetes, CI/CD, or Lambda-style environments, verify that secrets are injected into the running process:
  - Docker: `docker exec <container> env | grep API`
  - Kubernetes: check `envFrom` and `secretKeyRef`
  - CI: confirm masked variables exist in job logs
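The environment checks above can be bundled into one diagnostic you call at the top of your entrypoint. The `diagnose` function is a sketch (its name and status labels are mine, not LangGraph's) that masks key values so full secrets never land in logs:

```python
import os


def diagnose(names=("OPENAI_API_KEY", "ANTHROPIC_API_KEY")):
    """Report MISSING / HAS WHITESPACE / OK for each secret, with a masked preview."""
    report = {}
    for name in names:
        value = os.environ.get(name)
        if value is None:
            status = "MISSING"
        elif value != value.strip():
            status = "HAS WHITESPACE"
        else:
            status = "OK"
        report[name] = status
        # Show only a short prefix so the full secret never appears in logs.
        preview = repr(value[:6] + "...") if value else repr(value)
        print(f"{name}: {status} {preview}")
    return report
```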
Prevention
- **Load config first, instantiate clients second.** Put `load_dotenv()` and secret validation at startup, and fail fast with an explicit assertion if a key is missing.
- **Use provider-specific keys and names.** Use `OPENAI_API_KEY` for OpenAI-compatible clients and `ANTHROPIC_API_KEY` for Anthropic clients; don't rely on generic names unless your wrapper maps them explicitly.
- **Keep secrets out of source control.** Store keys in environment variables or a secret manager, and never hardcode API keys inside LangGraph nodes or config files.
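The prevention list above can be enforced with a single fail-fast check at startup. `REQUIRED_KEYS` and `validate_secrets` are illustrative names you would adapt to the providers your graph actually uses:

```python
import os

REQUIRED_KEYS = ["OPENAI_API_KEY"]  # add ANTHROPIC_API_KEY etc. as needed


def validate_secrets() -> None:
    """Run once at startup, before any model client is constructed."""
    missing = [k for k in REQUIRED_KEYS if not os.environ.get(k, "").strip()]
    if missing:
        raise RuntimeError(f"Missing required secrets: {', '.join(missing)}")
```

Calling `validate_secrets()` as the first line of your entrypoint turns a confusing mid-graph 401 into an immediate, named failure.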
If you're still seeing "invalid API key" after checking these points, dump the exact exception class and stack trace from the model call inside your LangGraph node. In most cases, that will show whether this is an env loading issue, a provider mismatch, or a revoked credential.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.