How to Fix 'output parsing error when scaling' in CrewAI (TypeScript)
When CrewAI throws an output parsing error when scaling, it usually means one of your agents returned text that the framework could not convert into the structured output it expected. In TypeScript, this shows up most often when you scale from a single happy-path task to multiple agents, tools, or nested tasks and the model starts returning extra prose, malformed JSON, or a shape that does not match your schema.
The key point: this is usually not a “CrewAI is broken” problem. It is almost always a contract mismatch between what your agent was instructed to return and what your code expects to parse.
The Most Common Cause
The #1 cause is asking an agent for structured data but not enforcing a strict output format. In practice, the model returns something like Markdown, a bullet list, or JSON with trailing commentary, and CrewAI fails during parsing with errors such as:
- `OutputParserException`
- `Error: Failed to parse agent output`
- `output parsing error when scaling`
Here is the broken pattern I see most often in TypeScript.
| Broken | Fixed |
|---|---|
| Agent is told to “return JSON” but no schema is enforced | Agent is constrained with a clear format and validated before use |
| Code assumes raw text can be parsed later | Output is explicitly structured at the task boundary |
```ts
// BROKEN
import { Agent, Task, Crew } from "crewai";

const analyst = new Agent({
  role: "Analyst",
  goal: "Summarize customer complaints as JSON",
  backstory: "You are precise and concise.",
});

const task = new Task({
  description: `
    Analyze the complaints and return JSON with keys:
    summary, severity, categories
  `,
  agent: analyst,
});

const crew = new Crew({
  agents: [analyst],
  tasks: [task],
});

const result = await crew.kickoff();

// Later:
const parsed = JSON.parse(result.output); // often blows up
```
```ts
// FIXED
import { Agent, Task, Crew } from "crewai";

const analyst = new Agent({
  role: "Analyst",
  goal: "Return only valid JSON matching the required schema",
  backstory: "You never add commentary outside the JSON object.",
});

const task = new Task({
  description: `
    Return ONLY valid JSON.
    Schema:
    {
      "summary": string,
      "severity": "low" | "medium" | "high",
      "categories": string[]
    }
    No markdown. No explanation. No extra keys.
  `,
  agent: analyst,
});

const crew = new Crew({
  agents: [analyst],
  tasks: [task],
});

const result = await crew.kickoff();

// Validate before trusting it
const output = typeof result.output === "string"
  ? JSON.parse(result.output)
  : result.output;
```
If your version of CrewAI supports structured output or schema binding in TypeScript, use that instead of free-form text plus JSON.parse. Free-form parsing is where most scaling failures start.
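If schema binding is not available in your setup, you can still enforce the contract yourself at the task boundary. Here is a minimal sketch of a hand-rolled type guard for the schema above; it mirrors what a `zod` or `valibot` schema would enforce, and the `AnalysisResult` and `parseAnalysis` names are illustrative, not part of CrewAI's API.

```ts
// Shape we told the agent to return. Illustrative, not a CrewAI type.
interface AnalysisResult {
  summary: string;
  severity: "low" | "medium" | "high";
  categories: string[];
}

// Parse (if needed) and validate in one place, so a contract violation
// fails loudly at the boundary instead of deep inside your pipeline.
function parseAnalysis(raw: unknown): AnalysisResult {
  const value = typeof raw === "string" ? JSON.parse(raw) : raw;
  if (
    typeof value !== "object" || value === null ||
    typeof (value as any).summary !== "string" ||
    !["low", "medium", "high"].includes((value as any).severity) ||
    !Array.isArray((value as any).categories) ||
    !(value as any).categories.every((c: unknown) => typeof c === "string")
  ) {
    throw new Error(`Agent output violated the contract: ${JSON.stringify(value)}`);
  }
  return value as AnalysisResult;
}

// Usage:
// const parsed = parseAnalysis(result.output);
```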
Other Possible Causes
1) Tool output is leaking into the final answer
A tool may return logs, stack traces, or non-serializable objects that get mixed into the agent response.
```ts
// Problematic tool output
return {
  data,
  debug: `Fetched ${items.length} items`,
};
```
Fix it by returning only machine-readable data.
```ts
return {
  data,
};
```
2) Nested tasks are returning inconsistent shapes
One task returns a string, another returns an object. When downstream code assumes one shape, parsing fails.
```ts
// Inconsistent
taskA.expectedOutput = "string";
taskB.expectedOutput = "{ summary: string }";
```
Make every handoff explicit.
```ts
taskA.expectedOutput = `{ summary: string }`;
taskB.expectedOutput = `{ summary: string; nextAction: string }`;
```
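To make that contract visible to the compiler, you can mirror each expected output as a TypeScript type and guard the handoff. A small sketch with illustrative names (`TaskAOutput` and `isTaskAOutput` are not CrewAI APIs):

```ts
// One type per handoff, kept next to the task definitions, so shape
// drift becomes a reviewable code change instead of a runtime surprise.
interface TaskAOutput {
  summary: string;
}

interface TaskBOutput {
  summary: string;
  nextAction: string;
}

// Guard for the boundary between task A and task B.
function isTaskAOutput(v: unknown): v is TaskAOutput {
  return typeof v === "object" && v !== null &&
    typeof (v as { summary?: unknown }).summary === "string";
}
```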
3) The model is producing markdown fences around JSON
This is common when prompts say “return JSON” but do not forbid formatting.
```json
{ "summary": "...", "severity": "high" }
```

Strip fences if you must, but it is better to prevent them.

```ts
description: `
  Return ONLY raw JSON.
  Do not wrap in markdown fences.
  Do not include \`\`\`json.
`
```
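If you still want a defensive fallback, a small helper can strip fences before parsing. This is a generic sketch, not a CrewAI utility:

```ts
// Remove a leading ```json (or bare ```) fence and a trailing ``` fence,
// leaving plain JSON untouched.
function stripJsonFences(text: string): string {
  return text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "")
    .trim();
}

// Usage:
// const parsed = JSON.parse(stripJsonFences(result.output));
```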
4) Context overflow during scaling
When you scale up task history or pass too much conversation context between agents, the model starts truncating or drifting from schema.
```ts
const crew = new Crew({
  agents,
  tasks,
  memory: true,
});
```
If memory makes outputs unstable, turn it off for structured extraction tasks or trim context aggressively before handoff.
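Trimming can be as simple as a character budget applied to the history you pass forward. A sketch, assuming context is a plain array of strings and `maxChars` is a budget you choose; none of this is a CrewAI setting:

```ts
// Keep the most recent history entries that fit within a character budget,
// dropping the oldest context first.
function trimContext(history: string[], maxChars: number): string[] {
  const kept: string[] = [];
  let used = 0;
  // Walk from most recent to oldest, keeping what fits in the budget.
  for (let i = history.length - 1; i >= 0; i--) {
    if (used + history[i].length > maxChars) break;
    kept.unshift(history[i]);
    used += history[i].length;
  }
  return kept;
}
```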
How to Debug It
- Inspect the raw agent output
  - Log `result.output` before any parsing.
  - Look for markdown fences, extra prose, truncated JSON, or unexpected keys.
- Reproduce with one task and one agent
  - Remove scaling variables first: no memory, no nested delegation, no tools.
  - If the error disappears, reintroduce pieces until it breaks again.
- Check every expected output boundary
  - Verify each task’s prompt matches what downstream code expects.
  - Confirm whether a task should return plain text, valid JSON, or a specific schema object.
- Validate tool responses separately
  - Call each tool directly.
  - If a tool returns noisy text or mixed types, sanitize it before passing it back to the agent.
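Sanitizing can be as simple as whitelisting keys and dropping anything that does not survive JSON serialization. A sketch with illustrative names (`sanitizeToolOutput` is not a CrewAI helper):

```ts
// Reduce a tool result to a JSON-safe whitelist of keys before it
// re-enters the agent loop.
function sanitizeToolOutput<T extends Record<string, unknown>>(
  raw: T,
  allowedKeys: (keyof T)[],
): Partial<T> {
  const clean: Partial<T> = {};
  for (const key of allowedKeys) {
    try {
      // JSON.stringify returns undefined for functions, symbols, and
      // undefined values, and throws on circular structures: drop all of those.
      if (typeof JSON.stringify(raw[key]) === "string") {
        clean[key] = raw[key];
      }
    } catch {
      // Circular reference or other serialization failure: skip the key.
    }
  }
  return clean;
}
```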
Prevention
- Use strict schemas for anything you plan to parse later.
- Keep prompts explicit: “Return ONLY valid JSON”, “No markdown”, “No explanation”.
- Add runtime validation at task boundaries with `zod`, `valibot`, or equivalent.
- Treat every inter-agent handoff as an API contract, not free-form chat.
- Test with real long-context inputs before scaling to production workloads.
If you are seeing `output parsing error when scaling` in CrewAI with TypeScript, especially under load or multi-agent orchestration, start by assuming one agent violated its output contract. Fix the contract first; everything else becomes easier after that.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.