How to Fix 'deployment crash' in AutoGen (TypeScript)
What “deployment crash” usually means
In AutoGen TypeScript, a deployment crash almost always means the model call failed before the agent could complete a turn. In practice, it shows up when the OpenAI/Azure OpenAI deployment name is wrong, the endpoint is misconfigured, or the request hits a model that doesn’t support the parameters you’re sending.
You’ll usually see it during agent initialization or on the first `assistant.run()` / `agent.send()` call, with errors like:

- `DeploymentNotFound`
- `The API deployment for this resource does not exist`
- `400 Bad Request`
- `Error: deployment crashed`
The Most Common Cause
The #1 cause is a bad Azure OpenAI deployment configuration in your `OpenAIChatCompletionClient` or equivalent AutoGen model client.
In TypeScript, people often confuse:
- model name: `gpt-4o`, `gpt-4.1-mini`
- deployment name: your Azure deployment label, like `prod-gpt4o`

When you use Azure, AutoGen sends requests to the deployment, not the raw model name.
Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Uses model name where deployment name is expected | Uses actual Azure deployment name |
| Missing Azure endpoint/version | Correct endpoint and API version |
| Agent crashes on first LLM call | Agent returns normal response |
```typescript
// ❌ Broken
import { AzureOpenAIChatCompletionClient } from "@autogen/core";

const client = new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiVersion: "2024-02-15-preview",
  deploymentName: "gpt-4o", // wrong if your Azure deployment is named differently
});
```

```typescript
// ✅ Fixed
import { AzureOpenAIChatCompletionClient } from "@autogen/core";

const client = new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiVersion: "2024-02-15-preview",
  deploymentName: "prod-gpt4o", // exact Azure deployment name
});
```
If you’re using non-Azure OpenAI, don’t pass Azure fields at all. Use the OpenAI client with the model name directly.
```typescript
// ✅ OpenAI (non-Azure)
import { OpenAIChatCompletionClient } from "@autogen/core";

const client = new OpenAIChatCompletionClient({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o",
});
```
Other Possible Causes
1) Wrong API version for the deployed model
Azure deployments are sensitive to API version. A valid deployment can still fail if the version doesn’t support that model or feature set.
```typescript
// ❌ Too old / incompatible for your setup
const client = new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiVersion: "2023-05-15",
  deploymentName: "prod-gpt4o",
});
```

```typescript
// ✅ Match the version recommended for your resource/model
const client = new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiVersion: "2024-02-15-preview",
  deploymentName: "prod-gpt4o",
});
```
2) Missing or malformed environment variables
A blank endpoint or API key often surfaces as a generic crash downstream.
```typescript
// ❌ env vars may be undefined at runtime
const client = new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
  apiVersion: "2024-02-15-preview",
  deploymentName: "prod-gpt4o",
});
```

```typescript
// ✅ fail fast before AutoGen starts
function required(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing env var ${name}`);
  return value;
}

const client = new AzureOpenAIChatCompletionClient({
  apiKey: required("AZURE_OPENAI_API_KEY"),
  endpoint: required("AZURE_OPENAI_ENDPOINT"),
  apiVersion: "2024-02-15-preview",
  deploymentName: required("AZURE_OPENAI_DEPLOYMENT"),
});
```
3) Tool schema mismatch in an agent team
If you pass malformed tool definitions, some agent flows fail during planning/execution and look like a model crash.
```typescript
// ❌ invalid tool schema shape
const tools = [
  {
    name: "lookupPolicy",
    description: "Find policy details",
    // missing proper input schema / function signature depending on your AutoGen version
    parameters: {},
  },
];
```

```typescript
// ✅ define tools with the expected function signature/schema
const tools = [
  {
    name: "lookupPolicy",
    description: "Find policy details",
    parametersSchema: {
      type: "object",
      properties: {
        policyId: { type: "string" },
      },
      required: ["policyId"],
      additionalProperties: false,
    },
    // policyId is required by the schema, so the signature can require it too
    execute: async ({ policyId }: { policyId: string }) => {
      return { policyId, statusCode: "ACTIVE" };
    },
  },
];
```
4) Context window overflow or oversized messages
Large histories can trigger request failures that bubble up as a crash during send.
```typescript
// ❌ pushing huge payloads into chat history
await agent.send([
  {
    role: "user",
    content: JSON.stringify(bigClaimDocument), // too large
  },
]);
```

```typescript
// ✅ summarize or trim before sending
await agent.send([
  {
    role: "user",
    content: `Summarize claim ${claimId} using these extracted fields only...`,
  },
]);
```
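If you need to keep a running conversation, a rough trim helper can cap what you send. This is a minimal sketch using the message shape from this post; `trimHistory` and the character budget are illustrative, and a real implementation would count tokens with a tokenizer instead of characters:

```typescript
type ChatMessage = { role: string; content: string };

// Keep the most recent messages under a rough character budget.
// (Characters are a crude proxy for tokens; swap in a tokenizer for accuracy.)
function trimHistory(history: ChatMessage[], maxChars = 24_000): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let total = 0;
  // Walk backwards so the newest messages survive the budget.
  for (let i = history.length - 1; i >= 0; i--) {
    total += history[i].content.length;
    if (total > maxChars) break; // everything older gets dropped
    kept.unshift(history[i]);
  }
  return kept;
}

// usage: await agent.send(trimHistory(history));
```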
How to Debug It
1) Print the exact error body

Don't stop at `deployment crash`. Log the full exception from AutoGen and the underlying HTTP response. Look for:

- `DeploymentNotFound`
- `401 Unauthorized`
- `404 Not Found`
- `400 Bad Request`
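A minimal sketch of that logging, reusing the `client.create()` call shape from this post (which fields the error actually carries depends on your HTTP client):

```typescript
try {
  await client.create([{ role: "user", content: "ping" }]);
} catch (err) {
  // Error fields like message/stack are non-enumerable, so a plain
  // JSON.stringify(err) prints "{}". Listing own property names forces
  // status codes and response bodies (when present) into the output.
  console.error(JSON.stringify(err, Object.getOwnPropertyNames(err), 2));
  throw err;
}
```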
2) Verify the client config in isolation

Create a tiny script that only instantiates `AzureOpenAIChatCompletionClient` and sends one minimal prompt. If that fails, the issue is not your agent graph.
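For example, a standalone `check-llm.ts` (the filename is arbitrary) built from the same client shape used throughout this post:

```typescript
// check-llm.ts -- nothing but the client and one minimal prompt.
import { AzureOpenAIChatCompletionClient } from "@autogen/core";

const client = new AzureOpenAIChatCompletionClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiVersion: "2024-02-15-preview",
  deploymentName: process.env.AZURE_OPENAI_DEPLOYMENT!,
});

console.log(await client.create([{ role: "user", content: "ping" }]));
```

If this script fails, fix the configuration before touching agents, tools, or orchestration.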
3) Check deployment naming in Azure

Confirm the deployed resource name exactly matches what you put in code. Compare against:

- the portal deployment name
- the environment variable value
- casing and whitespace
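Stray whitespace in a `.env` file is easy to miss. A quick, hypothetical sanity check that makes the exact value visible:

```typescript
const dep = process.env.AZURE_OPENAI_DEPLOYMENT ?? "";

// Delimiters expose hidden leading/trailing characters;
// the length catches invisible ones entirely.
console.log(`deployment name: >>>${dep}<<< (length ${dep.length})`);

if (dep !== dep.trim()) {
  throw new Error("AZURE_OPENAI_DEPLOYMENT has leading/trailing whitespace");
}
```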
4) Remove everything except one agent and one prompt

Disable tools. Disable multi-agent orchestration. Send a short prompt like `"ping"`. If it works, reintroduce features one by one until it breaks.
Prevention
- Use explicit config validation at startup. Fail fast on missing env vars, empty strings, and invalid URLs.
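For the URL check, something like the following builds on the `required()` helper from the env-var section above (`requiredUrl` is an illustrative name, not an AutoGen API):

```typescript
// Hypothetical helper: require an env var and verify it parses as a URL.
function requiredUrl(name: string): string {
  const value = required(name); // required() is defined in the env-var section above
  try {
    new URL(value);
  } catch {
    throw new Error(`Env var ${name} is not a valid URL: "${value}"`);
  }
  return value;
}

const endpoint = requiredUrl("AZURE_OPENAI_ENDPOINT");
```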
- Separate model name from deployment name in code. In reviews, treat those as different concepts:

```typescript
type LlmConfig = {
  providerType: "azure" | "openai";
  model?: string;          // OpenAI model name, e.g. "gpt-4o"
  deploymentName?: string; // Azure deployment label, e.g. "prod-gpt4o"
};
```
- Add a smoke test for every environment:

```typescript
test("LLM client boots", async () => {
  const result = await client.create([{ role: "user", content: "ping" }]);
  expect(result).toBeDefined();
});
```
If you keep seeing `deployment crash`, start with the deployment config first. In AutoGen TypeScript, that’s where most of these failures come from.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.