How to Fix 'JSON parsing error in production' in CrewAI (Python)

By Cyprian Aarons · Updated 2026-04-21

## What the error means

A JSON parsing error in production in CrewAI usually means one of your agents returned text that looked like JSON but wasn’t valid JSON when CrewAI tried to parse it into structured output. This typically shows up when you use `output_pydantic`, `output_json`, or any downstream code that expects a strict JSON object.

In practice, it happens when the model adds markdown fences, commentary, trailing commas, single quotes, or plain English around the payload.

## The Most Common Cause

The #1 cause is asking the LLM for JSON but not constraining the output tightly enough. CrewAI then receives a response wrapped in markdown code fences or padded with extra text, and `json.loads()` blows up.

Here’s the broken pattern followed by the fixed pattern:

**Broken**

```python
from crewai import Agent, Task, Crew
from pydantic import BaseModel

class LeadScore(BaseModel):
    score: int
    reason: str

agent = Agent(
    role="Analyst",
    goal="Score leads",
    backstory="You are precise."
)

task = Task(
    description="Return JSON with score and reason for this lead.",
    agent=agent,
    output_json=LeadScore
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)
```

**Fixed**

```python
from crewai import Agent, Task, Crew
from pydantic import BaseModel, Field

class LeadScore(BaseModel):
    score: int = Field(..., ge=0, le=100)
    reason: str

agent = Agent(
    role="Analyst",
    goal="Score leads",
    backstory="You are precise and only return valid JSON."
)

task = Task(
    description=(
        "Return ONLY valid JSON matching this schema:\n"
        '{"score": 0-100, "reason": "short explanation"}\n'
        "No markdown fences. No extra text."
    ),
    agent=agent,
    output_pydantic=LeadScore
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result.pydantic)
```

Why this works:

- `output_pydantic` gives CrewAI a schema to validate against.
- The prompt explicitly says “ONLY valid JSON”.
- The Pydantic model enforces types and bounds.

If you’re seeing errors like:

- `JSONDecodeError: Expecting value: line 1 column 1 (char 0)`
- `pydantic_core._pydantic_core.ValidationError`
- `crewai.utilities.json_utils` failures during parsing

then your model output is not clean JSON.
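
For example, a fenced response reproduces the first error immediately:

```python
import json

fenced = '```json\n{"score": 87, "reason": "Strong fit"}\n```'

# The first character is a backtick, not a JSON value, so parsing fails:
json.loads(fenced)
# json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```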

## Other Possible Causes

### 1. Markdown fences in the response

The model returns fenced JSON instead of raw JSON.

```python
# Bad prompt
description = "Return your answer as JSON."

# Typical bad model output: the JSON arrives wrapped in markdown fences,
# so it is not directly parseable.
bad_output = '```json\n{"score": 87, "reason": "Strong fit"}\n```'
```


Fix it by forbidding fences in the prompt and using structured output.

```python
description = (
    "Return ONLY raw JSON. "
    "Do not wrap it in markdown code fences."
)
```

### 2. Trailing commas or invalid quotes

This is common when the model imitates JavaScript object syntax instead of strict JSON.

For example, this output is invalid JSON:

```json
{
  "score": 87,
  "reason": "Strong fit",
}
```

JSON does not allow trailing commas. It also requires double quotes for keys and strings.
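
You can reproduce both failures directly with Python’s standard-library parser:

```python
import json

# Both strings raise json.JSONDecodeError
# ("Expecting property name enclosed in double quotes"):
for bad in (
    '{"score": 87, "reason": "Strong fit",}',   # trailing comma
    "{'score': 87, 'reason': 'Strong fit'}",    # single quotes
):
    try:
        json.loads(bad)
    except json.JSONDecodeError as exc:
        print(exc)
```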

### 3. You’re parsing free-text from an agent tool result

If a tool returns plain text and you pass that directly into a parser expecting JSON, CrewAI will fail later in the pipeline.

tool_output = "Lead scored as strong fit"

# Bad: assuming this is JSON
data = json.loads(tool_output)

Fix by returning a dict from tools or normalizing before parsing.

tool_output = {"score": 87, "reason": "Strong fit"}
data = tool_output  # no json.loads needed

### 4. Schema mismatch between prompt and model

Your prompt asks for one shape, but your Pydantic model expects another.

```python
from pydantic import BaseModel

class LeadScore(BaseModel):
    score: int
    rationale: str   # the schema expects "rationale"

# But the prompt shows:
# {"score": 87, "reason": "..."}  <-- field name mismatch
```

Keep field names identical across prompt examples and schema definitions.
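
One way to enforce that is to derive the prompt’s field list from the Pydantic model itself, so the two can’t drift apart. A minimal sketch, assuming Pydantic v2’s `model_json_schema()`; the prompt wording is illustrative:

```python
from pydantic import BaseModel

class LeadScore(BaseModel):
    score: int
    rationale: str

# Build the field list from the schema so the prompt always matches the model.
field_names = list(LeadScore.model_json_schema()["properties"].keys())

description = (
    "Return ONLY valid JSON with exactly these fields: "
    + ", ".join(field_names)   # "score, rationale"
)
```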

## How to Debug It

1. Print the raw agent output before parsing.
   - Don’t inspect only `result.pydantic`.
   - Check `result.raw` or intermediate task output if available.
   - Look for fences, extra prose, or malformed braces.

2. Validate the exact string with Python’s parser.

   ```python
   import json

   raw = result.raw  # or captured task output
   print(repr(raw))
   json.loads(raw)
   ```

   If this fails locally, CrewAI isn’t the problem. The payload is invalid JSON.

3. Compare output against your schema.
   - Confirm every field exists.
   - Confirm types match.
   - Confirm optional fields are actually optional in Pydantic.
   - Watch for enum values that don’t match exactly.

4. Turn on stricter prompting.
   - Add “ONLY valid JSON”.
   - Add one concrete example.
   - Remove vague language like “format nicely” or “structured response”.
   - If needed, reduce temperature to make outputs less creative (see the sketch after this list).
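
If you do lower the temperature, recent CrewAI releases expose it on the `LLM` object passed to the agent. Treat this as a sketch: the model name is illustrative, and older versions may configure this differently.

```python
from crewai import LLM, Agent

# Lower temperature -> fewer creative deviations from the requested JSON shape.
strict_llm = LLM(model="gpt-4o-mini", temperature=0.1)

agent = Agent(
    role="Analyst",
    goal="Score leads",
    backstory="You are precise and only return valid JSON.",
    llm=strict_llm,
)
```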

## Prevention

- Use `output_pydantic` instead of hand-parsing free-form text whenever possible.
- Put a strict contract in both places:
  - the Pydantic schema
  - the prompt instructions
- Add a small validation layer before downstream consumers:
  - reject fenced code blocks
  - reject non-JSON text
  - log raw outputs on failure

A good production pattern is to treat every LLM response as untrusted input until it passes schema validation. That’s how you keep JSON parsing error in production from becoming a recurring incident instead of a one-off bug.
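
Here is a minimal sketch of such a validation layer, reusing the `LeadScore` model from above. The helper name and fence-stripping regex are illustrative, not part of CrewAI:

```python
import json
import re

from pydantic import BaseModel, ValidationError

class LeadScore(BaseModel):
    score: int
    reason: str

# Strips a leading/trailing markdown fence if the model added one anyway.
FENCE_RE = re.compile(r"^\s*```(?:json)?\s*|\s*```\s*$")

def parse_lead_score(raw: str) -> LeadScore:
    """Treat the LLM response as untrusted until it passes schema validation."""
    cleaned = FENCE_RE.sub("", raw).strip()
    try:
        return LeadScore.model_validate(json.loads(cleaned))
    except (json.JSONDecodeError, ValidationError):
        # Log the raw payload so production failures stay diagnosable.
        print(f"Rejected LLM output: {raw!r}")
        raise
```

You would call this on `result.raw` (or the captured task output string) before handing data to anything downstream.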

