# How to Fix 'JSON parsing error during development' in CrewAI (Python)
When CrewAI throws a JSON parsing error during development, it usually means one of your agents, tools, or task outputs returned text that was supposed to be JSON but wasn’t valid JSON. In practice, this shows up when you ask an LLM to return structured data, then CrewAI tries to parse the response into a Python object and fails.
This is common during local development because prompts are still changing, tool outputs are inconsistent, or the model is returning extra prose around the JSON payload.
## The Most Common Cause
The #1 cause is asking an agent to return JSON without enforcing a strict schema, then letting the model add commentary, markdown fences, or trailing commas.
CrewAI often surfaces this as something like:
- `JSONDecodeError: Expecting value`
- `ValueError: Failed to parse JSON output`
- `crewai.utilities.converter.ConverterError`
- `Task output parsing failed`
Here’s the broken pattern versus the fixed pattern.
| Broken | Fixed |
|---|---|
| Loose prompt, no schema enforcement | Explicit format instructions + Pydantic model |
| Model may return markdown or prose | Model returns only valid JSON |
| Parsing happens after the fact | Validation happens at task boundary |
```python
# BROKEN
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Return customer risk data as JSON",
    backstory="You analyze claims data.",
)

task = Task(
    description="""
    Analyze the claim and return JSON with fields:
    name, risk_score, reason
    """,
    expected_output="JSON object",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```
```python
# FIXED
from pydantic import BaseModel, Field
from crewai import Agent, Task, Crew

class RiskReport(BaseModel):
    name: str = Field(..., description="Customer name")
    risk_score: int = Field(..., ge=0, le=100)
    reason: str

researcher = Agent(
    role="Researcher",
    goal="Return customer risk data in valid JSON only",
    backstory="You analyze claims data.",
)

task = Task(
    description="""
    Analyze the claim and return ONLY valid JSON.
    Do not include markdown fences or extra text.
    """,
    expected_output="A JSON object with fields: name, risk_score, reason",
    output_pydantic=RiskReport,  # CrewAI validates the output against this model
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```
The key difference is that the second version gives CrewAI a schema it can validate against. Without that, the model often returns something like:
```
Here is the result:
{
  "name": "John Doe",
  "risk_score": 82,
  "reason": "Multiple prior claims"
}
```
That first line is enough to break parsing.
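If you need a stopgap while prompts are still in flux, you can strip prose and fences before parsing. A minimal sketch; the `extract_json` helper is my own name, not a CrewAI API:

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Strip markdown fences and surrounding prose, then parse the
    first JSON object found in the text."""
    # Remove ```json ... ``` fences if present
    cleaned = re.sub(r"```(?:json)?", "", raw)
    # Grab from the first { to the last } (greedy, so nested objects survive)
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match is None:
        raise ValueError(f"No JSON object found in: {raw!r}")
    return json.loads(match.group(0))

raw = 'Here is the result:\n{"name": "John Doe", "risk_score": 82}'
print(extract_json(raw))  # {'name': 'John Doe', 'risk_score': 82}
```

Treat this as a development crutch, not a fix: the real solution is still schema enforcement at the task boundary.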
## Other Possible Causes

### 1. Markdown fences around JSON
Some models love wrapping output in triple backticks. That looks fine to humans and fails for strict parsing.
````
# BAD OUTPUT
```json
{"status": "approved"}
```
````

Fix by telling the agent to return raw JSON only.

```python
task = Task(
    description="Return ONLY raw JSON. No ```json fences.",
    expected_output='{"status": "approved"}',
)
```
### 2. Trailing commas or invalid quoting
Python dict syntax is not JSON syntax. If your tool returns Python-style strings or dangling commas, parsing fails.
```python
# BAD TOOL OUTPUT
"{'name': 'Alice', 'score': 91,}"
```

Valid JSON must use double quotes and no trailing comma.

```python
# GOOD TOOL OUTPUT
'{"name": "Alice", "score": 91}'
```

If you build tool output manually, use `json.dumps()`.

```python
import json

def build_payload():
    payload = {"name": "Alice", "score": 91}
    return json.dumps(payload)
```
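The reason `json.dumps()` matters: Python’s `str()` rendering of a dict looks close to JSON but is not parseable as JSON.

```python
import json

payload = {"name": "Alice", "score": 91}

print(str(payload))         # {'name': 'Alice', 'score': 91}  <- single quotes, not JSON
print(json.dumps(payload))  # {"name": "Alice", "score": 91}  <- valid JSON

# json.dumps() round-trips cleanly; str() does not
json.loads(json.dumps(payload))  # OK
# json.loads(str(payload))       # raises json.JSONDecodeError
```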
### 3. Tool returns non-JSON text before/after payload

A custom tool may prepend logs or status text.

```python
def fetch_claim_data():
    return "Fetched successfully\n{\"claim_id\": \"C123\", \"amount\": 4500}"
```

That will break downstream parsing. Return only structured output.

```python
def fetch_claim_data():
    return {"claim_id": "C123", "amount": 4500}
```

If CrewAI expects a string response from the tool interface, serialize only the payload:

```python
import json

def fetch_claim_data():
    return json.dumps({"claim_id": "C123", "amount": 4500})
```
### 4. LLM temperature too high for structured output

High temperature increases formatting drift. For structured tasks, keep it low.

```python
from crewai.llm import LLM

llm = LLM(
    model="gpt-4o-mini",
    temperature=0.7,  # more likely to drift from strict JSON
)
```

Use a lower temperature for deterministic formatting.

```python
llm = LLM(
    model="gpt-4o-mini",
    temperature=0.0,
)
```
### 5. Schema mismatch between prompt and validator

If your prompt asks for `riskScore` but your Pydantic model expects `risk_score`, validation fails even if the output is valid JSON.

```python
class RiskReport(BaseModel):
    risk_score: int

# Prompt asks for:
# { "riskScore": 80 }
```

Make field names match exactly across prompt, schema, and consuming code.
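If you can’t change the prompt, Pydantic can bridge the naming gap with a field alias. A sketch, assuming Pydantic v2:

```python
from pydantic import BaseModel, ConfigDict, Field

class RiskReport(BaseModel):
    # populate_by_name lets both the field name and the alias validate
    model_config = ConfigDict(populate_by_name=True)
    risk_score: int = Field(alias="riskScore")

# Both spellings now validate
print(RiskReport.model_validate({"riskScore": 80}).risk_score)   # 80
print(RiskReport.model_validate({"risk_score": 80}).risk_score)  # 80
```

Matching names exactly is still the cleaner fix; aliases are a fallback when the prompt is out of your control.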
## How to Debug It

- **Print the raw agent output before parsing.** Don’t inspect only the final exception. Log the exact string returned by the agent or tool: `print(repr(raw_output))`.
- **Validate with Python’s `json` module first.** If `json.loads(raw_output)` fails locally, CrewAI will fail too.
- **Check for fences, prose, and hidden characters.** Look for:
  - leading explanations like “Sure, here’s the result”
  - trailing commas
  - single quotes instead of double quotes
- **Reduce to one task and one tool.** Strip your crew down to a minimal reproduction: one agent, one task, one known-good input. If it works there but fails in your full flow, the problem is usually a tool output or a downstream transform.
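The first two steps can be combined into a small pre-parse check for your dev loop. A minimal sketch; `debug_parse` and `raw_output` are names of my own, not CrewAI APIs:

```python
import json

def debug_parse(raw_output: str):
    """Log the exact string, then attempt a plain json.loads
    before handing anything to CrewAI consumers."""
    print(repr(raw_output))  # repr() exposes hidden newlines, quotes, and BOMs
    try:
        return json.loads(raw_output)
    except json.JSONDecodeError as e:
        # Show the characters around the exact position that broke parsing
        window = raw_output[max(0, e.pos - 20):e.pos + 20]
        print(f"Invalid JSON at position {e.pos}: {window!r}")
        raise

debug_parse('{"claim_id": "C123", "amount": 4500}')  # parses fine
```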
## Prevention

- Use Pydantic models or explicit schemas for every structured task.
- Tell agents to return only raw JSON, no markdown fences and no explanation text.
- Keep `temperature=0` for tasks that must be machine-readable.
- Make custom tools return Python dicts or `json.dumps()` output only.
- Add a pre-parse validation step in tests with `json.loads()` before handing results to CrewAI consumers.
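That last prevention step can live in your test suite as an ordinary assertion. A minimal sketch, where the `fetch_claim_data` stub stands in for your real tool import:

```python
import json

def fetch_claim_data():
    # Stand-in for the real tool; returns a serialized payload
    return json.dumps({"claim_id": "C123", "amount": 4500})

def test_tool_output_is_valid_json():
    raw = fetch_claim_data()
    parsed = json.loads(raw)  # fails loudly if the tool drifts from strict JSON
    assert parsed["claim_id"] == "C123"

test_tool_output_is_valid_json()
```

Running this on every tool in CI catches formatting drift before CrewAI ever sees the output.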
If you’re building agents for banking or insurance workflows, treat JSON formatting as part of your contract boundary. Loose prompting works until it hits production data and breaks on one stray sentence from the model.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.