# How to Fix 'output parsing error' in AutoGen (TypeScript)
Getting an output parsing error in AutoGen TypeScript usually means the framework expected a structured response, but the model returned something it couldn’t parse. In practice, this shows up when you use tool calling, structured outputs, or an agent pipeline that expects JSON-like data and the LLM replies with extra text, malformed JSON, or the wrong schema.
This is not a model bug. It’s almost always a contract mismatch between what your code expects and what the agent actually produced.
## The Most Common Cause
The #1 cause is asking AutoGen to parse a response as structured data while your prompt allows free-form text.
This usually happens with `AssistantAgent`, `OpenAIChatCompletionClient`, or any setup where you pass a parser/schema but the model returns prose like:

```
Sure — here's the result: {...}
```
That extra text breaks parsing.
### Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| Prompt asks for JSON, but no strict schema enforcement | Use a structured output contract and keep the assistant output constrained |
| Model returns markdown fences or commentary | Return only raw JSON matching the expected shape |
```ts
// BROKEN
import { AssistantAgent } from "@autogen/agent";
import { OpenAIChatCompletionClient } from "@autogen/openai";

const client = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
});

const agent = new AssistantAgent({
  name: "support_agent",
  modelClient: client,
  systemMessage: `
Return a JSON object with keys:
- status
- message
`,
});

const result = await agent.run("Summarize the ticket.");
console.log(result);
```
If the model replies with:
```
Sure — here's the JSON:

{
  "status": "ok",
  "message": "Ticket resolved"
}
```
you’ll get an error like:
```
output parsing error: Could not parse model response as expected JSON
```
Here’s the fixed version:
```ts
// FIXED
import { AssistantAgent } from "@autogen/agent";
import { OpenAIChatCompletionClient } from "@autogen/openai";

const client = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
});

const agent = new AssistantAgent({
  name: "support_agent",
  modelClient: client,
  systemMessage: `
You must output ONLY valid JSON.
No markdown.
No explanation.
No code fences.

Schema:
{
  "status": "ok" | "error",
  "message": string
}
`,
});

const result = await agent.run("Summarize the ticket.");
console.log(result);
```
The important part is not just “ask for JSON.” You need to remove every escape hatch that lets the model add prose.
## Other Possible Causes
### 1) Tool call returned invalid arguments
If you use tools, AutoGen may try to parse function arguments into a typed shape. A malformed tool payload will trigger parsing failures.
```ts
// Example of a bad tool schema expectation
const tools = [{
  name: "create_ticket",
  description: "Create support ticket",
  parameters: {
    type: "object",
    properties: {
      priority: { type: "string" },
    },
    required: ["priority"],
  },
}];
```
If the model emits:
```
{ "priority": high }
```
that is invalid JSON because `high` is unquoted. The fix is to keep the tool schema strict and prompt the model to emit exact, valid arguments.
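If you want a guardrail on top of that, a small pre-flight check can reject malformed arguments with a useful message before the tool ever runs. This is a hand-rolled sketch, not an AutoGen API; `validateToolArgs` and its result shape are illustrative:

```typescript
// Hypothetical helper: validate raw tool-call arguments before executing a tool.
type ToolSchema = { required: string[] };

type ValidationResult = {
  ok: boolean;
  error?: string;
  args?: Record<string, unknown>;
};

function validateToolArgs(raw: string, schema: ToolSchema): ValidationResult {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw); // catches unquoted values like { "priority": high }
  } catch (e) {
    return { ok: false, error: `invalid JSON: ${(e as Error).message}` };
  }
  if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) {
    return { ok: false, error: "expected a JSON object" };
  }
  const args = parsed as Record<string, unknown>;
  const missing = schema.required.filter((key) => !(key in args));
  if (missing.length > 0) {
    return { ok: false, error: `missing required keys: ${missing.join(", ")}` };
  }
  return { ok: true, args };
}
```

Failing before the tool executes gives you a precise log line instead of a generic parsing error deep inside the framework.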
### 2) Schema mismatch between prompt and parser
Sometimes your prompt says one thing and your parser expects another.
```ts
// Parser expects:
// { id: string; score: number }

// But prompt asks for:
// { ticketId: string; confidence: string }
```
That mismatch produces errors like:
```
output parsing error: Unexpected property 'ticketId'
output parsing error: Expected number for field 'score'
```
Fix both sides so they agree on field names and types.
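One way to keep the two sides in sync is to drive both the prompt text and the runtime check from a single definition, so field names and types cannot drift apart. A minimal hand-rolled sketch (the names are illustrative; a schema library would work equally well):

```typescript
// Single source of truth: the same shape drives the prompt and the parser.
interface TicketResult {
  id: string;
  score: number;
}

// The prompt text names exactly the fields the parser below checks.
const schemaPrompt =
  'Return ONLY raw JSON matching: { "id": string, "score": number }';

function parseTicketResult(raw: string): TicketResult {
  const data = JSON.parse(raw) as Record<string, unknown>;
  if (typeof data.id !== "string") {
    throw new Error("Expected string for field 'id'");
  }
  if (typeof data.score !== "number") {
    throw new Error("Expected number for field 'score'");
  }
  return { id: data.id, score: data.score };
}
```

If you later rename a field, you only have one place to change, and the error messages point at the exact field that diverged.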
### 3) Markdown fences around JSON
A lot of models wrap structured output in triple backticks. Some parsers accept that, many do not.
````
// BAD OUTPUT FROM MODEL
```json
{
  "status": "ok"
}
```
````
If your parser expects raw JSON, this fails. Tell the model to omit fences entirely.
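A pragmatic fallback (an assumption on my part, not an AutoGen feature) is a small normalizer that strips the fences before strict parsing, so an occasional fenced reply degrades gracefully instead of crashing:

```typescript
// Defensive helper: remove a ```json ... ``` (or plain ``` ... ```) wrapper
// before handing the payload to a strict JSON parser.
function stripCodeFences(raw: string): string {
  const trimmed = raw.trim();
  const match = trimmed.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  return match ? match[1] : trimmed; // unfenced input passes through untouched
}
```

Use it as a last line of defense only; the prompt should still forbid fences, since silent normalization can hide a drifting prompt.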
### 4) Streaming partial output being parsed too early
If you’re consuming streamed tokens and parsing before completion, you’ll see intermittent parse failures.
```ts
// Pseudocode
for await (const chunk of stream) {
  parse(chunk); // wrong - chunk is incomplete
}
```
Only parse after the full assistant message has been assembled.
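The fix can be sketched as a buffer-then-parse helper. This assumes the stream yields plain string chunks; adapt the chunk type to whatever your model client actually emits:

```typescript
// Accumulate the full assistant message, then parse exactly once.
async function collectAndParse(
  stream: AsyncIterable<string>
): Promise<unknown> {
  let buffer = "";
  for await (const chunk of stream) {
    buffer += chunk; // never parse here; each chunk is partial JSON
  }
  return JSON.parse(buffer); // parse only the completed message
}
```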
## How to Debug It
1. **Log the raw assistant output.**
   - Don’t inspect only the parsed object.
   - Print exactly what came back before AutoGen tries to interpret it.
   - If you see commentary, markdown fences, or trailing commas, you found the issue.
2. **Check whether tools are involved.**
   - If you’re using function calling or typed tools, inspect the emitted arguments.
   - A broken tool payload often looks like valid English but invalid JSON.
   - Verify both the tool schema and the actual response shape.
3. **Compare the expected schema against the actual payload.**
   - Write down what your parser expects.
   - Compare field names, types, required keys, and nesting.
   - Watch for subtle mismatches like `string` vs `number`, or `user_id` vs `userId`.
4. **Disable streaming and simplify the prompt.**
   - Temporarily run non-streaming.
   - Remove all extra instructions except “return only valid JSON.”
   - If it starts working, reintroduce complexity until it breaks again.
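The first debugging step can be baked into a tiny wrapper so the raw payload is always captured before any parsing happens. This is a generic sketch, not an AutoGen hook:

```typescript
// Log exactly what came back before parsing, so commentary, fences,
// or trailing commas are visible in the logs when parsing fails.
function parseWithLogging<T>(raw: string, parse: (s: string) => T): T {
  console.log("RAW MODEL OUTPUT >>>", JSON.stringify(raw));
  try {
    return parse(raw);
  } catch (err) {
    console.error("PARSE FAILED for the raw output above:", err);
    throw err; // rethrow so the caller still fails fast
  }
}
```

Logging via `JSON.stringify(raw)` makes invisible characters (newlines, stray backticks) show up explicitly in the log line.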
## Prevention
- Use strict schemas for anything machine-parsed.
- Tell agents to return raw JSON only, with no markdown or explanation.
- Add a small validation layer before handing results to downstream code.
A good production pattern is to treat LLM output like untrusted input. Parse it once, validate it immediately, and fail fast with enough logging to see exactly what AutoGen received versus what your app expected.
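As a concrete sketch of that pattern, a parse-validate-fail-fast layer for the status/message schema used earlier might look like this (hand-rolled; the names are illustrative, not an AutoGen API):

```typescript
// Treat model output as untrusted input: parse once, validate immediately,
// fail fast with a message naming the exact field that broke the contract.
type AgentStatus = "ok" | "error";

interface AgentResult {
  status: AgentStatus;
  message: string;
}

function validateAgentResult(raw: string): AgentResult {
  const data = JSON.parse(raw) as Record<string, unknown>;
  if (data.status !== "ok" && data.status !== "error") {
    throw new Error(`Unexpected value for 'status': ${JSON.stringify(data.status)}`);
  }
  if (typeof data.message !== "string") {
    throw new Error("Expected string for field 'message'");
  }
  return { status: data.status, message: data.message };
}
```

Everything downstream can then rely on the `AgentResult` type, and any contract violation surfaces at the boundary rather than deep in your application code.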
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.