How to Fix 'intermittent 500 errors during development' in AutoGen (TypeScript)
If you’re seeing intermittent 500 errors while developing with AutoGen in TypeScript, you’re usually dealing with a server-side failure inside your agent runtime, not a browser issue. In practice, it shows up when an agent call works once, then fails on the next request because of bad message shape, invalid tool output, or a runtime mismatch.
The annoying part is that the error is often generic: 500 Internal Server Error, Unexpected token, or Error in AssistantAgent.run() with little context. In AutoGen TS, that usually means the bug is in your message pipeline or tool execution path.
The Most Common Cause
The #1 cause is malformed message history passed into AssistantAgent or ChatCompletionClient. AutoGen expects a clean sequence of messages with valid roles and content, and developers often push raw objects, undefined, or partially formatted tool results into the conversation.
Here’s the broken pattern versus the fixed pattern:
| Broken | Fixed |
|---|---|
| Pushes raw objects / inconsistent roles | Uses valid AutoGen message types |
| Can trigger 500 Internal Server Error during model call | Produces stable agent runs |
| Often happens after tool execution | Tool output is normalized before append |
```ts
// ❌ Broken
import { AssistantAgent } from "@autogen/agent";

const agent = new AssistantAgent({
  name: "support-agent",
  modelClient,
});

const messages = [
  { role: "user", content: "Check claim status" },
  { role: "assistant", content: { text: "Working on it" } }, // invalid shape
  undefined, // breaks serialization intermittently
];

const result = await agent.run(messages);
```
```ts
// ✅ Fixed
import {
  AssistantAgent,
  type TextMessage,
} from "@autogen/agent";

const agent = new AssistantAgent({
  name: "support-agent",
  modelClient,
});

const messages: TextMessage[] = [
  { role: "user", content: "Check claim status" },
  { role: "assistant", content: "Working on it" },
];

const result = await agent.run(messages);
```
A second version of this same problem happens with tool output. If your function returns an object and you pass it directly as assistant content, some model clients will serialize it badly and fail with a generic 500.
```ts
// ❌ Broken
const toolResult = await lookupClaim("CLM-123");
messages.push({
  role: "assistant",
  content: toolResult, // object, not string
});
```
```ts
// ✅ Fixed
const toolResult = await lookupClaim("CLM-123");
messages.push({
  role: "assistant",
  content: JSON.stringify(toolResult),
});
```
Other Possible Causes
1) Tool function throws and gets wrapped as a 500
If a registered tool throws synchronously or rejects without handling, AutoGen often surfaces it as a server error instead of the original stack trace.
```ts
const tools = [
  async function getPolicy(policyId: string) {
    if (!policyId) throw new Error("policyId is required");
    return fetchPolicy(policyId);
  },
];
```
Fix by validating inputs before registration and wrapping failures:
```ts
async function getPolicy(policyId: string) {
  try {
    if (!policyId) return { error: "policyId is required" };
    return await fetchPolicy(policyId);
  } catch (err) {
    return { error: String(err) };
  }
}
```
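Rather than hand-wrapping every tool, you can apply the same pattern generically. A sketch, assuming your tools are plain async functions; `safeTool` is a hypothetical helper, not part of AutoGen:

```typescript
type ToolFn<A extends unknown[], R> = (...args: A) => Promise<R>;

// Hypothetical wrapper: turns any async tool into one that never throws,
// returning { error } instead so the agent sees a structured failure.
function safeTool<A extends unknown[], R>(
  fn: ToolFn<A, R>
): (...args: A) => Promise<R | { error: string }> {
  return async (...args: A) => {
    try {
      return await fn(...args);
    } catch (err) {
      return { error: String(err) };
    }
  };
}
```

Registration then becomes `const tools = [safeTool(getPolicy)]`, and a throwing tool surfaces as data the model can react to instead of a generic 500.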
2) Model client config is unstable across requests
If you recreate the client per request with different env values, timeout settings, or base URLs, you can get intermittent failures.
```ts
// ❌ Broken pattern
export async function handler(req: Request) {
  const modelClient = new OpenAIChatCompletionClient({
    model: process.env.MODEL_NAME!,
    apiKey: process.env.OPENAI_API_KEY!,
    timeoutMs: Math.random() > 0.5 ? 5000 : undefined, // non-deterministic config
  });
  // ...a new, differently configured client on every request
}
```
Keep config deterministic:
```ts
// ✅ Stable config: create one client at module scope and reuse it
const modelClient = new OpenAIChatCompletionClient({
  model: process.env.MODEL_NAME!,
  apiKey: process.env.OPENAI_API_KEY!,
  timeoutMs: 30000,
});
```
3) Message history grows too large
Long development sessions can push token limits over the edge. The failure may look like a random 500 when the real issue is context overflow.
```ts
// Keep only the most recent 20 messages before each run
if (messages.length > 20) {
  messages.splice(0, messages.length - 20);
}
```
Use trimming or summarization before every run.
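A slightly safer version preserves the system prompt while trimming. This is a count-based sketch, assuming a plain `{ role, content }` history; for production you would trim by token count instead of message count:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Keep the system prompt (if any) plus the most recent `keep` messages.
// Message count is a rough proxy for token count.
function trimHistory(messages: ChatMessage[], keep = 20): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-keep)];
}
```

Calling `trimHistory(messages)` before every `run()` keeps long development sessions from silently crossing the context limit.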
4) Mixed package versions in AutoGen packages
AutoGen TS packages need compatible versions. A mismatch between @autogen/agent, @autogen/core, and provider packages can produce runtime errors that look like transport failures.
Check this first:
npm ls @autogen/agent @autogen/core @autogen/openai
If you see multiple versions installed, align them to one release line.
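One way to stay on a single release line is to pin exact versions in package.json rather than using ranges. The version numbers below are placeholders, not a recommendation; use whatever release line your project is on:

```json
{
  "dependencies": {
    "@autogen/agent": "0.4.2",
    "@autogen/core": "0.4.2",
    "@autogen/openai": "0.4.2"
  }
}
```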
How to Debug It
1) Log the exact payload before calling `run()`
   - Print message roles, content types, and tool outputs.
   - You’re looking for `undefined`, objects where strings are expected, or malformed assistant/tool messages.
2) Wrap tool calls separately
   - If the stack trace disappears inside `AssistantAgent.run()`, isolate each tool.
   - Temporarily replace tools with mocks that return static strings.
3) Reduce to one user message
   - Start with only: `[{ role: "user", content: "Hello" }]`
   - If that works, reintroduce history one item at a time until the failure returns.
4) Enable full stack traces and inspect network/server logs
   - In Node.js, run with: `NODE_OPTIONS=--trace-uncaught node dist/index.js`
   - Also inspect your provider response body if you have access to it. A lot of “intermittent” issues are deterministic once you see the real exception.
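The payload-logging step is easy to automate as a pre-flight check run right before the agent call. A sketch; `validateMessages` is a name introduced here for illustration, not an AutoGen API:

```typescript
// Hypothetical pre-flight check: report history entries that commonly
// cause intermittent 500s before they reach the model client.
function validateMessages(messages: unknown[]): string[] {
  const problems: string[] = [];
  messages.forEach((m, i) => {
    if (m === undefined || m === null) {
      problems.push(`message[${i}] is ${String(m)}`);
      return;
    }
    const msg = m as { role?: unknown; content?: unknown };
    if (typeof msg.role !== "string") {
      problems.push(`message[${i}] has non-string role: ${typeof msg.role}`);
    }
    if (typeof msg.content !== "string") {
      problems.push(`message[${i}] has non-string content: ${typeof msg.content}`);
    }
  });
  return problems;
}
```

Log the returned problems (or throw if the array is non-empty) and the “random” 500 usually resolves into a specific malformed entry.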
Prevention
- Use strict TypeScript types for all message construction. Don’t build AutoGen messages from loose JSON blobs.
- Normalize every tool result to either a string or a validated serializable object before appending it to history.
- Keep one pinned version set for all AutoGen packages and avoid mixing beta/stable releases in the same repo.
- Add a small regression test that runs `AssistantAgent.run()` with one user message and one mocked tool call before merging changes.
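That regression test can be very small. Here is a self-contained sketch that exercises the message pipeline with one user message and one mocked tool; the final `agent.run()` step is left as a comment because the wiring depends on your AutoGen version and model client stub:

```typescript
// Hypothetical smoke test: runs the message pipeline with one user
// message and one mocked tool call, without hitting a real model.
async function mockedLookupClaim(): Promise<string> {
  return JSON.stringify({ claimId: "CLM-123", status: "open" });
}

async function smokeTest(): Promise<void> {
  const messages: { role: string; content: string }[] = [
    { role: "user", content: "Check claim status" },
  ];
  messages.push({ role: "assistant", content: await mockedLookupClaim() });

  // Every entry must have a string role and string content.
  for (const [i, m] of messages.entries()) {
    if (typeof m.role !== "string" || typeof m.content !== "string") {
      throw new Error(`message[${i}] is malformed`);
    }
  }
  // In a real test you would now call agent.run(messages) against a
  // stubbed model client and assert that it resolves without a 500.
}
```

Run it in CI before merging; if someone reintroduces an object-valued `content`, the test fails with the index of the bad message instead of an opaque 500 in development.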
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.