# How to Fix 'output parsing error' in CrewAI (TypeScript)
If you’re seeing `output parsing error` in CrewAI TypeScript, it usually means the agent returned text that did not match the structured shape your task expected. In practice, this shows up when you ask for JSON, a typed object, or a schema-backed response, and the model adds extra prose, markdown, or malformed JSON.

This is common when using `Task.output`, `expectedOutput`, Pydantic-style schemas, or any downstream parser that expects strict structure. The failure often bubbles up as something like `Error: Output parsing error` or `Could not parse LLM output into the expected format`.
## The Most Common Cause
The #1 cause is asking for structured output without enforcing a strict format in the prompt and task config.
CrewAI does not magically coerce free-form LLM text into your TypeScript type. If your agent says “Sure, here’s the JSON:” and then wraps the payload in markdown fences or adds commentary, parsing fails.
### Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Agent returns prose + JSON | Agent returns raw JSON only |
| Task expects structure but prompt is vague | Prompt explicitly forbids extra text |
| No parser-safe instructions | Clear schema and format constraints |
```ts
// ❌ Broken
import { Agent, Task } from "crewai";

const agent = new Agent({
  role: "Support Analyst",
  goal: "Summarize customer complaints",
  backstory: "You are precise and concise.",
});

const task = new Task({
  description: "Return customer complaint data as JSON.",
  expectedOutput: "Valid JSON with fields: issue, severity, summary",
  agent,
});

// Model may return:
// "Sure — here's the JSON:\n```json\n{...}\n```"
```
```ts
// ✅ Fixed
import { Agent, Task } from "crewai";

const agent = new Agent({
  role: "Support Analyst",
  goal: "Summarize customer complaints",
  backstory: "You are precise and concise.",
});

const task = new Task({
  description: `
    Return ONLY valid JSON.
    No markdown.
    No explanation.
    No code fences.

    Schema:
    {
      "issue": string,
      "severity": "low" | "medium" | "high",
      "summary": string
    }
  `,
  expectedOutput:
    "Raw JSON object with keys issue, severity, summary. No extra text.",
  agent,
});
```
The fix is not just “be more specific.” It is to remove any ambiguity about whether the model should produce natural language or machine-readable output.
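Even with a strict prompt, it is worth validating the shape at runtime before downstream code trusts it. Here is a minimal sketch, assuming the task result arrives as a string (`parseComplaint` and the `Complaint` type are illustrative, not part of CrewAI's API):

```ts
type Complaint = {
  issue: string;
  severity: "low" | "medium" | "high";
  summary: string;
};

// Parse the raw task output and verify it matches the schema
// promised in the prompt before handing it to downstream code.
function parseComplaint(raw: string): Complaint {
  const data = JSON.parse(raw) as Record<string, unknown>;
  const severities = ["low", "medium", "high"];
  if (
    typeof data.issue !== "string" ||
    typeof data.summary !== "string" ||
    !severities.includes(data.severity as string)
  ) {
    throw new Error(`Output does not match Complaint schema: ${raw}`);
  }
  return data as unknown as Complaint;
}
```

A bare type assertion (`as Complaint`) skips this check entirely; an error thrown here is far easier to debug than a silent shape mismatch later.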
## Other Possible Causes
### 1. Your schema is stricter than your prompt
If your TypeScript interface expects a number but the model returns "3" as a string, parsing can fail.
```ts
type ComplaintResult = {
  severity: number; // expects a number
};

// Model returns:
// { "severity": "3" }
```
Fix by aligning prompt and schema:
```ts
description: `
  Return valid JSON.
  severity must be a number from 1 to 5.
`
```
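If you cannot fully trust the model to emit a number, you can also coerce defensively at the boundary instead of letting `JSON.parse` output flow straight into typed code. A sketch (`coerceSeverity` is a hypothetical helper, not a CrewAI API):

```ts
// Accept either a number or a numeric string from the model
// and normalize it to the number the schema expects.
function coerceSeverity(value: unknown): number {
  const n = typeof value === "string" ? Number(value) : value;
  if (typeof n !== "number" || Number.isNaN(n) || n < 1 || n > 5) {
    throw new Error(
      `severity must be a number from 1 to 5, got: ${JSON.stringify(value)}`
    );
  }
  return n;
}
```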
### 2. Markdown fences are contaminating the payload
Many models default to wrapping JSON in triple backticks. That looks fine to humans but breaks parsers expecting raw JSON.
❌ Bad output:

````text
```json
{ "issue": "Login failure" }
```
````

Force raw output:

```ts
description: `
  Return only raw JSON.
  Do not use markdown fences.
  Do not prefix with 'Here is'.
`
```
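Prompt instructions usually fix this, but a pre-parse cleanup step makes the pipeline tolerant either way. A sketch (`stripFences` is a hypothetical helper, not part of CrewAI):

```ts
// Strip a leading/trailing markdown code fence (``` or ```json)
// so the remaining text can be fed to JSON.parse.
function stripFences(raw: string): string {
  return raw
    .trim()
    .replace(/^```[a-zA-Z]*\s*/, "")
    .replace(/```\s*$/, "")
    .trim();
}
```

Output that was never fenced passes through unchanged, so the helper is safe to apply unconditionally.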
### 3. Tool output is being mixed into the final answer

If an agent uses tools and then summarizes the result in prose, the final response may no longer match your parser.
```ts
const agent = new Agent({
  role: "Claims Assistant",
  goal: "Extract claim details",
  tools: [claimsLookupTool],
});
```
Fix by separating tool use from final formatting:
```ts
description: `
  Use tools if needed.
  Final answer must be raw JSON only.
`
```
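While you tighten the prompt, a blunt fallback is to slice out the first JSON object from a mixed response. A sketch (`extractJson` is a hypothetical helper; brace-slicing will not handle multiple top-level objects or nested prose containing braces):

```ts
// Pull the first top-level JSON object out of a response that
// mixes prose (e.g. tool summaries) with the payload.
function extractJson(raw: string): unknown {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(raw.slice(start, end + 1));
}
```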
### 4. The model hit a truncation or token limit
If the response gets cut off mid-object, parsers fail with malformed JSON errors that surface as parsing issues.
Watch for outputs ending like this:
```
{
  "issue": "Policy renewal failed",
  "severity": "
```
Fix by increasing max tokens or reducing output size:
```ts
const agent = new Agent({
  role: "Ops Analyst",
  goal: "Return compact structured summaries",
  llmConfig: {
    maxTokens: 800,
    temperature: 0,
  },
});
```
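When a parse fails, it can help to distinguish a cut-off response from other malformed output, since the fixes differ. A heuristic sketch (`looksTruncated` is a hypothetical helper, not a CrewAI API):

```ts
// Distinguish "truncated mid-object" from other parse failures by
// checking whether braces and string quotes are balanced.
function looksTruncated(raw: string): boolean {
  try {
    JSON.parse(raw);
    return false; // parsed fine, not truncated
  } catch {
    let depth = 0;
    let inString = false;
    for (let i = 0; i < raw.length; i++) {
      const ch = raw[i];
      if (inString) {
        if (ch === "\\") i++; // skip escaped character
        else if (ch === '"') inString = false;
      } else if (ch === '"') inString = true;
      else if (ch === "{") depth++;
      else if (ch === "}") depth--;
    }
    // Unclosed braces or an unterminated string suggest a cut-off response.
    return depth > 0 || inString;
  }
}
```

If this returns `true`, raise `maxTokens` or shrink the schema; if it returns `false`, the problem is more likely extra text or a schema mismatch.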
## How to Debug It
- **Log the raw LLM output.** Don’t inspect only the parsed result. Print exactly what CrewAI received before parsing.
- **Check whether the response contains extra text.** Look for phrases like “Here’s the JSON”, markdown fences, or bullet points before/after the object.
- **Validate against your expected schema manually.** Copy the raw output into a JSON validator. If it fails there, it will fail in CrewAI too.
- **Reduce the task to one field.** Start with something trivial: `description: 'Return {"ok": true} only.'` If that works, add fields back one at a time until it breaks.
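The first two steps can be combined into a tiny wrapper that guarantees the raw payload is in your logs whenever parsing fails. A sketch (`debugParse` is a hypothetical helper, not part of CrewAI):

```ts
// Log exactly what the parser will see before attempting to parse,
// so a failure leaves the raw payload in your logs.
function debugParse(raw: string): unknown {
  console.error("RAW LLM OUTPUT >>>", JSON.stringify(raw));
  try {
    return JSON.parse(raw);
  } catch (err) {
    throw new Error(
      `Parsing failed on the raw output above: ${(err as Error).message}`
    );
  }
}
```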
## Prevention
- **Use explicit formatting rules** in every structured-output task: “Return only raw JSON”, “No markdown”, “No explanation”.
- **Keep temperature low** for parser-sensitive tasks: `temperature: 0` reduces creative drift.
- **Prefer small schemas over large ones.** The more fields you ask for, the more likely one comes back malformed.
If you’re still hitting `output parsing error`, assume one of two things first: either the model added extra text, or your schema doesn’t match what you asked for. In CrewAI TypeScript, that’s usually where the bug lives.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.