# How to Fix 'authentication failed' in AutoGen (TypeScript)
## What “authentication failed” means in AutoGen
This error usually means AutoGen reached the model provider, but the request was rejected before any completion was generated. In practice, it shows up when your API key is missing, invalid, pointing to the wrong provider, or being sent through a misconfigured model client.
In TypeScript projects, it often appears during OpenAIChatCompletionClient setup or on the first agent call, especially after copying env vars between local dev, CI, and Docker.
## The Most Common Cause
The #1 cause is a bad or missing API key in the model client config. With AutoGen TypeScript, people often set apiKey to process.env.OPENAI_API_KEY without checking whether that env var is actually loaded at runtime.
Here’s the broken pattern and the fixed pattern:

Broken:

```ts
import { OpenAIChatCompletionClient } from "@autogen/core";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY, // may be undefined at runtime
});
```

Fixed:

```ts
import "dotenv/config";
import { OpenAIChatCompletionClient } from "@autogen/core";

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error("OPENAI_API_KEY is missing");
}

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey,
});
```
If the key is empty, malformed, or has whitespace/newline issues from `.env`, the provider will reject it and you’ll see errors like:
- `authentication failed`
- `401 Unauthorized`
- `OpenAIError: Incorrect API key provided`
- `Request failed with status code 401`
A small detail matters here: `process.env.OPENAI_API_KEY` can be `undefined` in Node if you never loaded `.env`, and AutoGen won’t magically fix that for you.
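One way to close this gap is a tiny guard that reads and trims a required variable and fails loudly at startup. `requireEnv` is a name invented for this sketch, not an AutoGen API:

```ts
// Hypothetical helper: read a required env var, trim it, fail fast if absent.
function requireEnv(name: string): string {
  const value = process.env[name]?.trim();
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: crashes at startup instead of sending an empty key to the provider.
// const apiKey = requireEnv("OPENAI_API_KEY");
```

Calling this once at startup turns a vague provider-side 401 into an immediate, named error on your side.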
## Other Possible Causes
### 1. Wrong provider for the model name
If you point an OpenAI client at an Azure/OpenRouter/Anthropic model string, auth may fail even though the key is valid.
```ts
// Wrong: OpenAI client with a non-OpenAI deployment/model reference
new OpenAIChatCompletionClient({
  model: "claude-3-5-sonnet",
  apiKey: process.env.OPENAI_API_KEY!,
});
```

Fix it by matching the client to the provider:

```ts
// Example: use the correct provider-specific client/config
new AnthropicChatCompletionClient({
  model: "claude-3-5-sonnet",
  apiKey: process.env.ANTHROPIC_API_KEY!,
});
```
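One cheap safeguard is to check the model name against the provider you’re about to authenticate with before constructing any client. The prefix table below is illustrative and deliberately incomplete, and `providerFor` is a helper invented here:

```ts
// Illustrative mapping from model-name prefixes to the provider they imply.
const PROVIDER_HINTS = [
  { prefix: "gpt-", provider: "openai", envVar: "OPENAI_API_KEY" },
  { prefix: "o1", provider: "openai", envVar: "OPENAI_API_KEY" },
  { prefix: "claude-", provider: "anthropic", envVar: "ANTHROPIC_API_KEY" },
];

// Resolve which provider (and which key) a model name calls for.
function providerFor(model: string): { provider: string; envVar: string } {
  const hit = PROVIDER_HINTS.find((h) => model.startsWith(h.prefix));
  if (!hit) {
    throw new Error(`No known provider for model "${model}"; check your client`);
  }
  return { provider: hit.provider, envVar: hit.envVar };
}
```

With a guard like this, pairing `claude-3-5-sonnet` with an OpenAI client fails loudly at config time instead of as a confusing auth error.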
### 2. Azure OpenAI config is incomplete
Azure needs an endpoint, an API key, a deployment name, and an API version. Missing any one of these can surface as an auth failure.
```ts
new AzureOpenAIChatCompletionClient({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  deploymentName: "gpt-4o-mini",
  apiVersion: "2024-06-01",
});
```

Common mistake:

```ts
// Missing endpoint and apiVersion; this can show up as an auth failure
new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  deploymentName: "gpt-4o-mini",
});
```
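Because any missing field can masquerade as an auth failure, it helps to validate the whole Azure config at once and report every absent field, not just the first. This is a sketch over the four fields shown above; `missingAzureFields` is a name invented here:

```ts
interface AzureConfig {
  endpoint?: string;
  apiKey?: string;
  deploymentName?: string;
  apiVersion?: string;
}

// Collect every missing or blank field so the error names all of them at once.
function missingAzureFields(config: AzureConfig): string[] {
  const required = ["endpoint", "apiKey", "deploymentName", "apiVersion"] as const;
  return required.filter((field) => !config[field]?.trim());
}

const problems = missingAzureFields({
  apiKey: "some-key",
  deploymentName: "gpt-4o-mini",
});
// problems -> ["endpoint", "apiVersion"]
// if (problems.length > 0) throw new Error(`Azure config missing: ${problems.join(", ")}`);
```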
### 3. Environment variables are not loaded at runtime
This happens in tests, Docker containers, serverless functions, and npm scripts.
```ts
// Broken if dotenv isn't loaded before this line runs
const apiKey = process.env.OPENAI_API_KEY;
```

Fix:

```ts
import "dotenv/config";
```

Or explicitly load it at startup:

```ts
import dotenv from "dotenv";

dotenv.config();
```
### 4. The key has trailing whitespace or quotes
A copied secret can include newline characters or quotes depending on how it was pasted.
```
OPENAI_API_KEY="sk-proj-abc123 "
```
That trailing space can break auth. Use a clean value:
```
OPENAI_API_KEY=sk-proj-abc123
```
If you load secrets from another source, trim them before passing them into AutoGen:
```ts
const apiKey = process.env.OPENAI_API_KEY?.trim();
```
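Rather than trimming silently, you can compare the raw value with a sanitized one and warn when they differ, which makes the bad `.env` entry visible in your logs. `sanitizeKey` is a name invented for this sketch:

```ts
// Strip surrounding quotes and whitespace; report whether anything was removed.
function sanitizeKey(raw: string): { key: string; changed: boolean } {
  const key = raw.trim().replace(/^["']|["']$/g, "").trim();
  return { key, changed: key !== raw };
}

const { key, changed } = sanitizeKey(process.env.OPENAI_API_KEY ?? "");
if (changed) {
  console.warn("OPENAI_API_KEY contained whitespace or quotes; using sanitized value");
}
```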
## How to Debug It
1. **Verify the exact client class.**
   - Check whether you’re using `OpenAIChatCompletionClient`, `AzureOpenAIChatCompletionClient`, or another provider-specific client.
   - Don’t assume one API key works across providers.

2. **Print what AutoGen is actually receiving.**
   - Log sanitized config values before instantiating the client.
   - Confirm `apiKey` exists and isn’t empty.

   ```ts
   console.log({
     hasApiKey: Boolean(process.env.OPENAI_API_KEY),
     model: "gpt-4o-mini",
   });
   ```

3. **Test the same credentials outside AutoGen.**
   - Call the provider directly with a minimal request.
   - If raw HTTP fails with `401`, this is not an AutoGen bug.

4. **Check your runtime environment.**
   - Your local shell may have env vars that your test runner doesn’t.
   - Docker images often miss `.env`.
   - CI secrets may be named differently from local secrets.
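To test the same credentials outside AutoGen, one quick provider-direct check against OpenAI is a `GET` to `https://api.openai.com/v1/models`, which succeeds or fails purely on the API key. The helper names below are invented for this sketch, and the request only runs when a key is actually set:

```ts
// Build the auth header OpenAI expects: "Authorization: Bearer <key>".
function authHeaders(apiKey: string): Record<string, string> {
  return { Authorization: `Bearer ${apiKey}` };
}

// Hit a lightweight authenticated endpoint and report the status code.
async function checkKey(apiKey: string): Promise<number> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: authHeaders(apiKey),
  });
  return res.status; // 200 = key works; 401 = bad key, not an AutoGen bug
}

const key = process.env.OPENAI_API_KEY;
if (key) {
  checkKey(key)
    .then((status) => console.log("OpenAI auth status:", status))
    .catch((err) => console.error("Request failed before auth:", err));
}
```

A `401` here proves the problem is the credential or provider, not anything in your agent code.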
## Prevention
- Fail fast on startup if required secrets are missing.

  ```ts
  if (!process.env.OPENAI_API_KEY) {
    throw new Error("Missing OPENAI_API_KEY");
  }
  ```

- Keep provider config explicit. Don’t reuse one generic “model client” config across OpenAI, Azure OpenAI, Anthropic, and OpenRouter unless you’ve verified each field.

- Add a smoke test that hits the model client once during deploy validation. That catches bad keys before your agent workflow starts failing in production.
If you’re seeing `authentication failed` in AutoGen TypeScript, start with the API key and the provider/client mismatch. In most cases, that’s where the bug is hiding.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.