How to Fix 'invalid API key in production' in CrewAI (Python)

By Cyprian Aarons · Updated 2026-04-21

What this error means

invalid API key in production usually means CrewAI successfully loaded your code, but the LLM provider rejected the key at runtime. In practice, this shows up when you move from local dev to Docker, CI, Render, ECS, Fly.io, or a serverless runtime and the environment variable is missing, wrong, or not visible to the process.

The important detail: this is rarely a CrewAI bug. It’s almost always a config issue around OpenAI, ChatOpenAI, or whatever model provider your Agent is using.

The Most Common Cause

The #1 cause is hardcoding or loading the API key too late, then passing a blank value into CrewAI.

With CrewAI, the failure often surfaces when an Agent tries to call the model through its LLM client and you see errors like:

  • openai.AuthenticationError: Incorrect API key provided
  • AuthenticationError: invalid_api_key
  • CrewAIException wrapping a provider auth failure

Broken vs fixed pattern

Broken pattern                          | Fixed pattern
Reads env var after objects are created | Loads env var before instantiating Agent / Crew
Hardcodes key in code                   | Uses environment variables
Passes empty string in production       | Validates presence at startup
# broken.py
from crewai import Agent, Task, Crew
from crewai.llm import LLM
import os

# BAD: creating the LLM before ensuring env vars are loaded
llm = LLM(
    model="gpt-4o-mini",
    api_key=os.getenv("OPENAI_API_KEY")  # may be None in production
)

agent = Agent(
    role="Support analyst",
    goal="Answer customer questions",
    backstory="You handle policy questions.",
    llm=llm,
)

task = Task(
    description="Summarize this claim note",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)

# fixed.py
from crewai import Agent, Task, Crew
from crewai.llm import LLM
from dotenv import load_dotenv
import os

load_dotenv()  # local dev only; production should inject env vars directly

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is missing")

llm = LLM(
    model="gpt-4o-mini",
    api_key=api_key,
)

agent = Agent(
    role="Support analyst",
    goal="Answer customer questions",
    backstory="You handle policy questions.",
    llm=llm,
)

task = Task(
    description="Summarize this claim note",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)

If you’re using OpenAI-compatible providers through CrewAI, the same rule applies. The key must exist in the runtime environment that starts Python, not just on your laptop shell.
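A quick way to confirm what the runtime actually sees is a tiny check script run inside the deployed environment. This is a sketch; the filename and masking format are just illustrative:

```python
# check_env.py -- run inside the deployed container, not your laptop shell
import os

def check_openai_key(env=os.environ):
    """Report whether the key is visible, without leaking the full secret."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        return "OPENAI_API_KEY is NOT visible to this process"
    # Show only a masked fingerprint and the length, never the full value.
    return f"OPENAI_API_KEY present: {key[:3]}...{key[-2:]} (len={len(key)})"

if __name__ == "__main__":
    print(check_openai_key())
```

The length check also catches truncated values, a common symptom of a secret pasted with a trailing newline stripped or a shell quoting mistake.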

Other Possible Causes

1) Wrong environment variable name

A lot of deployments use OPENAI_API_KEY, but your code might be reading something else.

# broken
api_key = os.getenv("OPEN_AI_KEY")  # typo: wrong variable name

# fixed
api_key = os.getenv("OPENAI_API_KEY")

If you’re on Azure OpenAI or another provider, check their expected variable names too. Don’t assume all providers use the same naming convention.
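One defensive pattern is to resolve the key from a short list of candidate variable names and fail loudly if none is set. A minimal sketch (the candidate names below are illustrative; verify the exact names your provider documents):

```python
import os

# Illustrative candidates; check your provider's docs for the real names.
CANDIDATES = ["OPENAI_API_KEY", "AZURE_OPENAI_API_KEY"]

def resolve_api_key(names=CANDIDATES, env=os.environ):
    """Return (var_name, value) for the first candidate that is set."""
    for name in names:
        value = env.get(name)
        if value:
            return name, value
    raise RuntimeError(f"No API key found; checked: {', '.join(names)}")
```

Returning the variable name alongside the value makes it easy to log which credential path production actually took.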

2) .env works locally but not in production

python-dotenv loads files from disk. Production containers often do not include .env, so your app starts with no credentials.

# broken in prod if .env isn't baked into the image
from dotenv import load_dotenv
load_dotenv()

# fixed: use load_dotenv only for local development fallback
import os

if os.getenv("ENVIRONMENT") != "production":
    from dotenv import load_dotenv
    load_dotenv()

Better: inject secrets through your platform’s secret manager and keep .env out of production entirely.
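As one example of the secret-manager route, here is a hedged sketch of pulling the key from AWS Secrets Manager with boto3. The secret ID, the JSON field name, and the injected client are assumptions for illustration, not a required layout:

```python
import json

def load_api_key(secret_id, client):
    """Fetch the key from AWS Secrets Manager.

    client is a boto3 Secrets Manager client, e.g.
    boto3.client("secretsmanager"). The secret is assumed to be a JSON
    blob containing an OPENAI_API_KEY field (an illustrative convention).
    """
    resp = client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])["OPENAI_API_KEY"]
```

Injecting the client keeps the function testable without AWS credentials and makes the dependency explicit.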

3) Key is present locally but missing in Docker/CI

Your shell has the key, but the container does not.

# broken: COPY . /app can bake a local .env (and its secrets) into the image
COPY . /app
CMD ["python", "main.py"]

# fixed: exclude .env via .dockerignore, then pass secrets at runtime
COPY . /app
ENV PYTHONUNBUFFERED=1
CMD ["python", "main.py"]

Then run it with:

docker run -e OPENAI_API_KEY="$OPENAI_API_KEY" my-app:latest

In GitHub Actions or similar CI systems, define the secret in repository settings and map it into the job environment.

4) Using a revoked or restricted key

A key can be valid syntactically but blocked by org policy, IP restrictions, usage limits, or provider-side revocation.

openai.AuthenticationError: Incorrect API key provided: sk-...

Check:

  • whether the key was rotated recently
  • whether your org disabled that project key
  • whether egress IP allowlists are blocking production traffic

5) Passing credentials to the wrong client object

Some developers set one client up correctly but instantiate another one without credentials.

# broken
from crewai.llm import LLM

llm = LLM(model="gpt-4o-mini")
# api_key never attached here if provider config isn't inherited correctly

# fixed
import os

llm = LLM(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
)

Be explicit when debugging. Implicit credential discovery is fine once it works; it’s bad when you’re trying to isolate failures.

How to Debug It

  1. Print what Python actually sees

    import os
    print(repr(os.getenv("OPENAI_API_KEY")))
    

    If this prints None, empty string, or a truncated value, you found the problem.

  2. Check where CrewAI is failing. Look for whether the exception happens during:

    • Agent(...) construction
    • Crew.kickoff()
    • first model call inside task execution

    If it fails on kickoff with an auth error, it’s usually provider auth rather than CrewAI orchestration.

  3. Verify the runtime environment separately from your local shell. Run:

    env | grep OPENAI_API_KEY
    

    Then compare that with what runs inside Docker/CI/serverless logs. Local success means nothing if the deployment environment doesn’t have the variable.

  4. Call the provider directly. Before involving CrewAI, test raw SDK auth:

    import os
    from openai import OpenAI
    
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    print(client.models.list())
    

    If this fails with AuthenticationError, CrewAI is not the problem.
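If step 2 points at kickoff, a thin wrapper can re-label provider auth failures so they stand out in logs. Matching on the exception text is a heuristic for debugging, not a CrewAI API guarantee:

```python
def run_crew(crew):
    """Run crew.kickoff() and surface provider auth failures distinctly."""
    try:
        return crew.kickoff()
    except Exception as exc:  # CrewAI may wrap the provider's exception
        text = repr(exc)
        if "AuthenticationError" in text or "invalid_api_key" in text:
            raise RuntimeError(
                "Provider rejected the API key at runtime; check the "
                "environment the process actually started with"
            ) from exc
        raise
```

Keeping the original exception chained via `from exc` preserves the provider's full traceback for later inspection.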

Prevention

  • Validate required secrets at process startup.
    • Fail fast with RuntimeError("OPENAI_API_KEY is missing").
  • Keep production secrets out of .env.
    • Use your platform’s secret manager instead.
  • Add a startup health check.
    • Log whether required env vars exist without printing full values.
  • Pin and test your deployment path.
    • Run one integration test in Docker or CI that calls Crew.kickoff() with real secret injection.
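The first three prevention points can be rolled into one startup check. A minimal sketch, where REQUIRED is whatever your deployment actually needs:

```python
import os

REQUIRED = ["OPENAI_API_KEY"]  # extend with your deployment's secrets

def validate_secrets(required=REQUIRED, env=os.environ):
    """Fail fast if a secret is missing; report presence without values."""
    missing = [name for name in required if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
    # Safe to log: names and lengths only, never the values themselves.
    return [f"{name}: set ({len(env[name])} chars)" for name in required]
```

Call it before constructing any Agent or Crew so a bad deploy dies at boot instead of on the first model call.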

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

