# How to Fix 'JSON parsing error in production' in CrewAI (TypeScript)
If you’re seeing a JSON parsing error in production with CrewAI in TypeScript, it usually means one of your agents returned text that looked human-readable but was supposed to be machine-readable JSON. The failure typically appears when a task expects structured output, tool output, or an LLM response to be parsed into an object, but the model adds markdown, commentary, or invalid JSON syntax.

In practice, this happens after a prompt change, a model swap, or when a tool returns loosely formatted text. The stack trace usually points at a parser in `CrewAgentExecutor`, `TaskOutput`, or a downstream `JSON.parse()` call.
## The Most Common Cause
The #1 cause is asking the model for JSON but not forcing strict JSON-only output.
Here’s the broken pattern next to its fix:
| Broken | Fixed |
|---|---|
| Model is told “return JSON” but can still add prose | Prompt explicitly demands raw JSON only |
| Parsing happens on free-form text | Parsing happens on validated structured output |
| No schema guardrails | Output shape is constrained |
```ts
// BROKEN
import { Agent, Task, Crew } from "crewai";

const analyst = new Agent({
  role: "Analyst",
  goal: "Summarize customer risk",
  backstory: "You produce concise reports.",
});

const task = new Task({
  description: `
    Return JSON with fields:
    - riskLevel
    - reason

    Analyze this customer:
    ${JSON.stringify(customer)}
  `,
  agent: analyst,
});

const crew = new Crew({
  agents: [analyst],
  tasks: [task],
});

const result = await crew.kickoff();

// This often fails if the model returns:
// "Here is the JSON:\n{ ... }"
const parsed = JSON.parse(result.toString());
```
```ts
// FIXED
import { Agent, Task, Crew } from "crewai";

const analyst = new Agent({
  role: "Analyst",
  goal: "Summarize customer risk",
  backstory: "You produce concise reports.",
});

const task = new Task({
  description: `
    Return ONLY valid JSON.
    No markdown.
    No explanation.
    No code fences.

    Schema:
    {
      "riskLevel": "low" | "medium" | "high",
      "reason": string
    }

    Analyze this customer:
    ${JSON.stringify(customer)}
  `,
  agent: analyst,
});

const crew = new Crew({
  agents: [analyst],
  tasks: [task],
});

const result = await crew.kickoff();
const output = typeof result === "string" ? result : result.toString();
const parsed = JSON.parse(output);
```
If you’re using structured outputs in your own wrapper, validate the parsed object before it enters your business logic:

```ts
import { z } from "zod";

const RiskSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  reason: z.string(),
});

const parsed = RiskSchema.parse(JSON.parse(output));
```
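If you’d rather not add zod as a dependency, a hand-rolled type guard gives the same fail-fast behavior. This is a sketch under my own naming; `parseRiskOutput` and `isRisk` are hypothetical helpers, not CrewAI APIs:

```typescript
type Risk = { riskLevel: "low" | "medium" | "high"; reason: string };

// Type guard: narrows unknown parsed JSON to the Risk shape.
function isRisk(value: unknown): value is Risk {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    (v.riskLevel === "low" || v.riskLevel === "medium" || v.riskLevel === "high") &&
    typeof v.reason === "string"
  );
}

// Parse and validate in one step; include the raw text in the error
// so production logs show exactly what the model returned.
function parseRiskOutput(raw: string): Risk {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    throw new Error(`Output is not valid JSON: ${raw.slice(0, 200)}`);
  }
  if (!isRisk(data)) {
    throw new Error("JSON did not match the expected Risk shape");
  }
  return data;
}
```

The win over bare `JSON.parse` is that schema violations fail at the boundary with a descriptive error, instead of surfacing later as `undefined` deep in business logic.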
## Other Possible Causes

### 1) Tool output is not valid JSON

A tool may return logs, stack traces, or plain text instead of structured data.
```ts
// BROKEN TOOL OUTPUT
return `status=ok; score=0.92`;

// FIXED TOOL OUTPUT
return JSON.stringify({
  status: "ok",
  score: 0.92,
});
```
If the agent passes tool output directly into a parser, one bad delimiter is enough to trigger:

- `Unexpected token s in JSON at position 0`
- `CrewAgentExecutor failed to parse task output`
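One defensive option is to normalize every tool return into valid JSON before it reaches any parser. A sketch; `toJsonToolOutput` is a hypothetical helper name, not part of CrewAI:

```typescript
// Guarantee a tool always emits a JSON string, even when the underlying
// function returns free-form text like "status=ok; score=0.92".
function toJsonToolOutput(raw: string): string {
  try {
    JSON.parse(raw); // already valid JSON? pass it through untouched
    return raw;
  } catch {
    return JSON.stringify({ raw }); // wrap plain text so downstream parsing never fails
  }
}
```

Downstream code then always sees parseable JSON, and the original text survives under the `raw` key for logging.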
### 2) You are wrapping JSON in markdown fences

LLMs love returning:

````
```json
{
  "riskLevel": "high"
}
```
````

That is not raw JSON if your parser expects the fences removed first.
**Broken prompt:** "Return your answer as `` ```json `` blocks."

**Fixed prompt:** "Return raw JSON only. Do not use markdown code fences."
If you can’t control the model perfectly, strip fences before parsing:

```ts
function stripCodeFences(text: string): string {
  return text
    .replace(/^```(?:json)?\s*/i, "") // leading ``` or ```json
    .replace(/\s*```\s*$/, "")        // trailing ```, with any whitespace
    .trim();
}
```
### 3) Your model/provider is truncating the response

This happens when `maxTokens` is too low or the provider cuts off mid-object.
```ts
const agent = new Agent({
  role: "Extractor",
  goal: "Return structured customer data",
  model: "gpt-4o-mini",
  maxTokens: 100, // too small for a full JSON object
});
```
Symptoms:

- Output ends mid-object, e.g. with a dangling `{`
- Missing closing braces
- Parser errors like `Unexpected end of JSON input`

Fix by increasing the token budget and reducing prompt size.
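You can also detect likely truncation cheaply before calling `JSON.parse`, which makes production logs much clearer. A sketch; `looksTruncated` is a hypothetical helper:

```typescript
// A complete JSON payload has balanced braces/brackets and no unclosed
// string. This scan ignores braces inside string literals, so it is a
// cheap pre-check before JSON.parse, not a full validator.
function looksTruncated(text: string): boolean {
  let depth = 0;
  let inString = false;
  let escaped = false;
  for (const ch of text) {
    if (escaped) { escaped = false; continue; }
    if (ch === "\\") { escaped = true; continue; }
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue;
    if (ch === "{" || ch === "[") depth++;
    if (ch === "}" || ch === "]") depth--;
  }
  return depth !== 0 || inString;
}
```

When this returns `true`, log "likely truncated at maxTokens" instead of a generic parse error, and you save yourself a debugging round-trip.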
### 4) Mixed content from retries or streaming

If you concatenate partial chunks from streaming responses, you can end up with malformed payloads.
```ts
// BROKEN
let fullText = "";
for await (const chunk of stream) {
  fullText += chunk; // may include partial tokens or duplicate segments
}
JSON.parse(fullText);
```
Use the final assembled message from the SDK rather than raw chunks unless you control chunk boundaries carefully.
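If you must assemble chunks yourself, parse only once the stream has finished, and fail loudly with the payload in the error. A sketch; `collectAndParse` is a hypothetical helper and `stream` stands in for whatever `AsyncIterable<string>` your SDK exposes:

```typescript
// Buffer all chunks, join once, parse once. Parsing per-chunk or on a
// partially assembled string is what produces "Unexpected end of JSON
// input" under streaming.
async function collectAndParse(stream: AsyncIterable<string>): Promise<unknown> {
  const chunks: string[] = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }
  const fullText = chunks.join("");
  try {
    return JSON.parse(fullText);
  } catch {
    throw new Error(`Stream produced invalid JSON: ${fullText.slice(0, 200)}`);
  }
}
```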
## How to Debug It
- **Log the exact raw output before parsing.** Don’t inspect the object after transformation. Print the string that reaches `JSON.parse()` or your CrewAI parser: `console.log("RAW OUTPUT:", output);`
- **Check whether the failure is in the agent or in your wrapper.** If CrewAI returns a string like `"Here’s the analysis..."`, your prompt is weak. If CrewAI returns valid text but your code parses it incorrectly, fix your wrapper.
- **Validate for common corruption patterns.** Look for:
  - leading/trailing markdown fences
  - extra prose before `{`
  - single quotes instead of double quotes
  - trailing commas
  - truncated objects
- **Reproduce with a minimal prompt.** Remove tools, memory, and multi-step tasks. Keep only one agent and one task with a tiny schema. If it works there, the issue is usually tool output or context contamination.
## Prevention
- Use strict schemas for every task that expects structured data.
- Tell agents to return raw JSON only, not “JSON-formatted text.”
- Validate outputs with `zod` before they enter business logic.
- Keep tool outputs machine-readable too; don’t mix logs and payloads.
- Raise token limits when responses are near truncation thresholds.
If you want this class of bug to disappear in production, treat every LLM boundary like an untrusted API boundary. Parse late, validate early enough to fail fast, and never assume “looks like JSON” means “is valid JSON.”
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.