# How to Fix 'invalid API key' in AutoGen (TypeScript)
`invalid API key` in AutoGen usually means the SDK reached the model provider, but the key it sent was missing, malformed, or rejected. In TypeScript projects, this most often shows up when you wire `OpenAIChatCompletionClient`, `AzureOpenAIChatCompletionClient`, or a custom model client with the wrong env var shape.

The error typically appears at agent startup or on the first LLM call, right after something like `assistant.run(...)`, `team.run(...)`, or `modelClient.create(...)`.
## The Most Common Cause
The #1 cause is simple: you passed the wrong environment variable name, or you read it incorrectly in TypeScript.

With AutoGen TypeScript, people often copy Python-style examples or assume the SDK will automatically read `.env` values without explicit wiring. It won't.
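Since the SDK won't do this wiring for you, the loading step has to be explicit. Here is a rough sketch of what that wiring actually does (a hand-rolled loader for illustration only; in a real project, just install the `dotenv` package and use `import "dotenv/config"`):

```typescript
import * as fs from "node:fs";

// Illustrative-only loader showing what "explicit wiring" means; in practice,
// prefer the dotenv package (`import "dotenv/config"`).
function loadDotEnv(path = ".env"): void {
  if (!fs.existsSync(path)) return;
  for (const line of fs.readFileSync(path, "utf8").split("\n")) {
    const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)$/);
    if (!match) continue; // skips comments and blank lines
    // Strip surrounding quotes (including smart quotes) and stray whitespace
    // that copied keys often pick up.
    const value = match[2]
      .trim()
      .replace(/^["'\u201C\u2018]/, "")
      .replace(/["'\u2019\u201D]$/, "")
      .trim();
    // Real env vars win over .env values, mirroring dotenv's behavior.
    if (!(match[1] in process.env)) process.env[match[1]] = value;
  }
}
```

The point is that nothing reads `.env` until some code does it on purpose, before the model client is constructed.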
**Broken vs fixed**

| Broken | Fixed |
|---|---|
| Reads the wrong env var | Uses the exact provider key name |
| Passes `undefined` into the client | Fails fast if the key is missing |
| Lets the runtime error happen later | Validates config before creating agents |
```typescript
// ❌ Broken
import { OpenAIChatCompletionClient } from "@autogenai/core";

const client = new OpenAIChatCompletionClient({
  apiKey: process.env.OPENAI_API_KEY_WRONG,
  model: "gpt-4o-mini",
});
```

```typescript
// ✅ Fixed
import { OpenAIChatCompletionClient } from "@autogenai/core";

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error("Missing OPENAI_API_KEY");
}

const client = new OpenAIChatCompletionClient({
  apiKey,
  model: "gpt-4o-mini",
});
```
If you’re using Azure OpenAI, the same mistake happens with the wrong env var pair:
```typescript
// ❌ Broken
const client = new AzureOpenAIChatCompletionClient({
  apiKey: process.env.OPENAI_API_KEY,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
  deploymentName: "gpt-4o-mini",
});
```

```typescript
// ✅ Fixed
const client = new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  deploymentName: "gpt-4o-mini",
});
```
For Azure, mixing up `OPENAI_API_KEY` and `AZURE_OPENAI_API_KEY` is a classic way to get an authentication failure that looks like an invalid key problem.
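One way to catch this mix-up at startup rather than on the first request is to validate the Azure pair together. A minimal sketch (`readAzureConfig` is an illustrative helper, not part of AutoGen):

```typescript
// Illustrative helper: validate the Azure credential pair together so a
// missing or mixed-up variable fails fast at startup.
function readAzureConfig(): { apiKey: string; endpoint: string } {
  const apiKey = process.env.AZURE_OPENAI_API_KEY;
  const endpoint = process.env.AZURE_OPENAI_ENDPOINT;
  const missing: string[] = [];
  if (!apiKey) missing.push("AZURE_OPENAI_API_KEY");
  if (!endpoint) missing.push("AZURE_OPENAI_ENDPOINT");
  if (missing.length > 0) {
    throw new Error(`Missing Azure OpenAI env vars: ${missing.join(", ")}`);
  }
  return { apiKey, endpoint };
}
```

With this in place, the non-null assertions (`!`) in the client config become unnecessary, because the values are guaranteed before the client is constructed.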
## Other Possible Causes
### 1. Extra whitespace or quotes in `.env`

A copied key can include a trailing space or smart quotes.

```bash
# ❌ Broken
OPENAI_API_KEY="sk-proj-abc123 "

# ✅ Fixed
OPENAI_API_KEY=sk-proj-abc123
```

If you load env vars manually, trim them:

```typescript
const apiKey = process.env.OPENAI_API_KEY?.trim();
```
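When the value looks right in your editor, a small audit helper can reveal the invisible problems without ever printing the key itself (`auditKey` is an illustrative sketch, not an AutoGen API):

```typescript
// Illustrative check: report the invisible problems a copied key most often
// picks up, without ever logging the key value itself.
function auditKey(raw: string | undefined): string[] {
  if (raw === undefined) return ["key is not set"];
  const problems: string[] = [];
  if (raw !== raw.trim()) problems.push("leading/trailing whitespace");
  if (/["'\u2018\u2019\u201C\u201D]/.test(raw)) problems.push("quote characters");
  return problems;
}
```

Run it once at startup, e.g. `console.log(auditKey(process.env.OPENAI_API_KEY))`, and fix anything it reports.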
### 2. The key belongs to a different provider

This happens when you send an OpenAI key to Azure OpenAI, or an Azure key to OpenAI.

```typescript
// ❌ Broken: OpenAI key used with Azure client
new AzureOpenAIChatCompletionClient({
  apiKey: process.env.OPENAI_API_KEY!,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  deploymentName: "gpt-4o-mini",
});
```

```typescript
// ✅ Fixed
new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  deploymentName: "gpt-4o-mini",
});
```
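A cheap guard against this is a key-shape check before any request is made: OpenAI secret keys start with `sk-`, while Azure OpenAI keys do not. It's a heuristic, not a guarantee, and the helper below is purely illustrative:

```typescript
// Heuristic guard (illustrative, not an AutoGen API): OpenAI secret keys
// start with "sk-", so a prefix check can catch a swapped key early.
function assertAzureKeyShape(key: string): void {
  if (key.startsWith("sk-")) {
    throw new Error(
      "This looks like an OpenAI key (sk-...), not an Azure OpenAI key."
    );
  }
}
```

Call it right before constructing the Azure client so the error message names the real problem instead of a generic auth failure.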
### 3. Wrong model client configuration

AutoGen errors can look like auth failures when the real issue is a misconfigured base URL or provider settings.

```typescript
// ❌ Broken
const client = new OpenAIChatCompletionClient({
  apiKey: process.env.OPENAI_API_KEY!,
  baseURL: "https://api.openai.com/v2", // wrong path
  model: "gpt-4o-mini",
});
```

```typescript
// ✅ Fixed: omit baseURL and let the client use the provider default
const client = new OpenAIChatCompletionClient({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});
```

If you override `baseURL`, verify it matches your provider exactly.
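When you do need a custom base URL, a pre-flight check with the built-in `URL` class catches malformed or non-HTTPS values before the client ever makes a request. A sketch (`validateBaseURL` is an illustrative helper):

```typescript
// Illustrative pre-flight check: reject malformed or non-HTTPS base URLs
// before wiring them into a model client.
function validateBaseURL(raw: string): URL {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    throw new Error(`baseURL is not a valid URL: ${raw}`);
  }
  if (url.protocol !== "https:") {
    throw new Error(`baseURL should use https, got: ${url.protocol}`);
  }
  return url;
}
```

It can't tell you that `/v2` should be `/v1`, but it does turn routing typos into clear startup errors instead of confusing auth-like failures.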
### 4. Using an expired or revoked key

A valid-looking string can still be dead. In logs, this often surfaces as:

- `401 Unauthorized`
- `AuthenticationError`
- `invalid_api_key`
- `Incorrect API key provided`

Rotate the key and update your secret store:

```bash
export OPENAI_API_KEY="new-key-here"
```

Then restart your app. Don't rely on hot reload to pick up secrets unless your runtime explicitly supports it.
## How to Debug It

- **Print whether the key exists, not the full value.**

  ```typescript
  console.log("OPENAI_API_KEY present:", Boolean(process.env.OPENAI_API_KEY));
  ```

  If this is `false`, stop there. Your problem is env loading, not AutoGen.

- **Check which client class you are using.**
  - `OpenAIChatCompletionClient` expects an OpenAI-compatible key and config.
  - `AzureOpenAIChatCompletionClient` expects Azure credentials and endpoint.
  - A custom wrapper may be rewriting headers incorrectly.

- **Inspect the exact request path.** If you set a custom `baseURL`, log it before instantiating the client. A bad base URL can produce auth-like failures that are really routing problems.

- **Reproduce with a minimal script.** Strip out agents, teams, tools, and memory. Call only the model client once:

  ```typescript
  const response = await client.create([
    { role: "user", content: "Say hello" },
  ]);
  console.log(response);
  ```

  If this fails, your issue is definitely credentials or provider config.
## Prevention

- Validate secrets at startup and fail fast if they're missing.
- Keep separate env vars for each provider:
  - `OPENAI_API_KEY`
  - `AZURE_OPENAI_API_KEY`
  - `ANTHROPIC_API_KEY`
- Add a small smoke test that calls your model client before shipping changes.
- Store keys in your secret manager, not in `.env.example`, git history, or CI logs.
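The smoke test can be as small as one call wrapped in a script you run in CI. A minimal shape (the `create` parameter stands in for however your app invokes its model client; the function name is illustrative):

```typescript
// Illustrative smoke-test shape: one model call, fail loudly if it can't
// answer. `create` stands in for your app's model client invocation.
async function smokeTest(
  create: (messages: { role: string; content: string }[]) => Promise<unknown>
): Promise<void> {
  const response = await create([{ role: "user", content: "ping" }]);
  if (response == null) {
    throw new Error("Smoke test failed: model client returned no response");
  }
}
```

Wiring this into CI means a bad or rotated key fails the pipeline, not your users' first request.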
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap? Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.