How to Fix 'JSON parsing error' in CrewAI (Python)

By Cyprian Aarons · Updated 2026-04-21
Tags: json-parsing-error, crewai, python

What the error means

JSON parsing error in CrewAI usually means one of your agents or tools returned text that CrewAI expected to be valid JSON, but it wasn’t. You’ll typically hit this when using structured outputs, tool calls, or an LLM response that includes extra prose around the payload.

The failure often shows up as a ValueError, a Pydantic validation issue, or a tool execution error inside crewai when the framework tries to parse a model response into a Python object.

The Most Common Cause

The #1 cause is asking the LLM for JSON but not forcing a strict JSON-only response. The model returns something like:

  • markdown fences
  • explanatory text
  • trailing commas
  • single quotes instead of double quotes

CrewAI then tries to parse it and blows up.
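Each of those variants fails the moment it reaches a parser. A stdlib-only sketch of what happens under the hood when the framework calls `json.loads` on the raw response:

```python
import json

# Typical "almost JSON" responses an LLM produces
bad_outputs = [
    '```json\n{"name": "John"}\n```',        # markdown fences
    'Here is the JSON: {"name": "John"}',    # explanatory text
    '{"name": "John",}',                     # trailing comma
    "{'name': 'John'}",                      # single quotes
]

for text in bad_outputs:
    try:
        json.loads(text)
    except json.JSONDecodeError as exc:
        print(f"{text[:25]!r} -> {exc.msg}")
```

All four raise json.JSONDecodeError, which is exactly the failure CrewAI surfaces.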

Broken vs fixed pattern

| Broken | Fixed |
| --- | --- |
| LLM returns “Here is the JSON:” plus payload | LLM returns raw JSON only |
| Prompt is vague about format | Prompt explicitly says “return valid JSON only” |
| No output schema validation | Structured output with output_json or a Pydantic model |
```python
# BROKEN
from crewai import Agent, Task, Crew

agent = Agent(
    role="Data Extractor",
    goal="Extract customer info as JSON",
    backstory="You extract structured data from text.",
)

task = Task(
    description="""
    Extract the customer's name and account number.
    Return JSON.
    """,
    expected_output="JSON",  # vague; recent CrewAI versions require this field
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)
```
```python
# FIXED
from pydantic import BaseModel
from crewai import Agent, Task, Crew

class CustomerInfo(BaseModel):
    name: str
    account_number: str

agent = Agent(
    role="Data Extractor",
    goal="Extract customer info as strict JSON",
    backstory="You extract structured data from text.",
)

task = Task(
    description="""
    Extract the customer's name and account number.

    Return ONLY valid JSON matching this schema:
    {
      "name": "string",
      "account_number": "string"
    }

    No markdown, no explanation, no code fences.
    """,
    expected_output="A JSON object with name and account_number fields",
    agent=agent,
    output_json=CustomerInfo,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result.json)  # CrewOutput exposes the JSON string as a property, not a method
```

If you’re using output_json, make sure the model actually follows it. If the prompt is loose, you’ll still get errors like:

  • ValueError: Invalid JSON
  • json.decoder.JSONDecodeError: Expecting value
  • pydantic_core._pydantic_core.ValidationError
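When you can't fully trust the model to comply, a defensive pre-parse step helps. This is a hypothetical helper (not part of CrewAI) that strips fences and surrounding prose before handing the string to json.loads:

```python
import json
import re

def parse_llm_json(text: str) -> dict:
    """Best-effort JSON extraction from a loosely formatted LLM response."""
    # Prefer the contents of a ```json ... ``` fence if one is present
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    # Otherwise fall back to the outermost {...} span
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        text = text[start:end + 1]
    return json.loads(text)

print(parse_llm_json('Here is the JSON:\n```json\n{"name": "John"}\n```'))  # -> {'name': 'John'}
```

This is a fallback, not a fix; the real fix is tightening the prompt and schema as above.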

Other Possible Causes

1) Your tool returns non-JSON text

If you built a custom tool and it returns plain text while CrewAI expects structured data, parsing fails.

```python
# BAD TOOL OUTPUT
def lookup_customer():
    return "Customer found: John Doe, account 12345"
```

```python
# GOOD TOOL OUTPUT
def lookup_customer():
    return {
        "name": "John Doe",
        "account_number": "12345"
    }
```

If your tool is meant to feed structured output into downstream tasks, return dictionaries or properly serialized JSON.
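If your tool interface requires returning a string, serialize the dict with json.dumps rather than formatting text by hand. A minimal sketch with a hypothetical lookup_customer:

```python
import json

def lookup_customer() -> str:
    """Hypothetical tool: return serialized JSON, never prose."""
    record = {"name": "John Doe", "account_number": "12345"}
    return json.dumps(record)

# The round trip proves the output is parseable downstream
assert json.loads(lookup_customer()) == {"name": "John Doe", "account_number": "12345"}
```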

2) You wrapped JSON in markdown fences

This is common with LLMs. The response looks valid to humans but not to parsers.

````
```json
{"name": "John", "account_number": "12345"}
```
````

Fix it by telling the agent not to use fences:

```python
description = """
Return ONLY raw JSON.
Do not include markdown code fences.
Do not add any commentary.
"""
```

3) Single quotes instead of double quotes

Python dict syntax is not JSON syntax. This breaks parsing immediately.

```python
# INVALID JSON (Python dict syntax, not JSON)
{'name': 'John', 'account_number': '12345'}
```

Valid JSON must use double quotes:

```json
{
  "name": "John",
  "account_number": "12345"
}
```

This often happens when a model imitates Python syntax instead of strict JSON.
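If you need to salvage such output rather than re-prompt, Python's ast.literal_eval can safely read Python-literal syntax and let you re-serialize it as strict JSON. A recovery sketch:

```python
import ast
import json

python_style = "{'name': 'John', 'account_number': '12345'}"

try:
    data = json.loads(python_style)        # rejects single quotes
except json.JSONDecodeError:
    data = ast.literal_eval(python_style)  # safely parses Python literals (no code execution)

print(json.dumps(data))  # -> {"name": "John", "account_number": "12345"}
```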

4) A downstream task expects one shape, but gets another

If one task outputs a list and another expects an object, you’ll get parsing or validation errors.

```python
# EXPECTED: object
class CustomerInfo(BaseModel):
    name: str
    account_number: str
```

The actual output is a list of objects:

```json
[
  {"name": "John", "account_number": "12345"},
  {"name": "Jane", "account_number": "67890"}
]
```

Align your schema with the actual output shape. If you need multiple records, define:

```python
from pydantic import BaseModel
from typing import List

class CustomerInfo(BaseModel):
    name: str
    account_number: str

class CustomerBatch(BaseModel):
    customers: List[CustomerInfo]
```
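With a batch schema, the list output validates cleanly once it is wrapped in an object. A self-contained check (repeating the models so it runs on its own), assuming Pydantic v2:

```python
from typing import List
from pydantic import BaseModel

class CustomerInfo(BaseModel):
    name: str
    account_number: str

class CustomerBatch(BaseModel):
    customers: List[CustomerInfo]

raw = [
    {"name": "John", "account_number": "12345"},
    {"name": "Jane", "account_number": "67890"},
]

# Wrap the list so it matches the object-shaped schema
batch = CustomerBatch(customers=raw)
print(batch.customers[1].name)  # -> Jane
```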

How to Debug It

  1. Print the raw model output before parsing

    Don’t inspect the parsed object first. Inspect the exact string returned by the agent or tool.

    raw = result.raw if hasattr(result, "raw") else str(result)
    print(repr(raw))
    
  2. Validate the payload outside CrewAI

    Copy the raw output into a local Python script and test it with json.loads().

    import json
    
    payload = '{"name":"John","account_number":"12345"}'
    print(json.loads(payload))
    

    If this fails locally, CrewAI will fail too.

  3. Check whether your prompt allows extra text

    Look for phrases like:

    • “Explain your answer”
    • “Provide context”
    • “Use markdown if helpful”

    Those are poison for strict JSON tasks. Remove them and demand raw JSON only.

  4. Verify schema alignment

    If you’re using Pydantic models or output_json, confirm field names and types match exactly.

    • accountNumber vs account_number
    • int vs str
    • object vs list
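Field-name drift like accountNumber vs account_number can also be absorbed in the schema itself, instead of relying on the prompt. A sketch assuming Pydantic v2:

```python
from pydantic import BaseModel, Field

class CustomerInfo(BaseModel):
    # Accept the camelCase spelling the model sometimes emits
    account_number: str = Field(alias="accountNumber")
    model_config = {"populate_by_name": True}  # still allow snake_case input

info = CustomerInfo.model_validate({"accountNumber": "12345"})
print(info.account_number)  # -> 12345
```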

Prevention

  • Use output_json or Pydantic models for anything that must be machine-readable.
  • Add explicit prompt constraints:
    • “Return ONLY valid JSON”
    • “No markdown”
    • “No explanation”
  • Test every custom tool return value before wiring it into a CrewAI workflow.
  • Keep schemas narrow. The more fields you ask for, the more likely the model drifts from valid JSON.
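The tool-testing point above can be a one-line pre-flight check: round-trip every tool return value through the json module before wiring the tool into a crew.

```python
import json

def assert_json_safe(value):
    """Fail fast if a tool's return value can't round-trip through JSON."""
    json.loads(json.dumps(value))  # raises TypeError / ValueError if not JSON-safe

assert_json_safe({"name": "John", "account_number": "12345"})  # passes
assert_json_safe(["a", "b"])                                   # passes
```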

If you keep seeing JSON parsing error, treat it as an interface contract problem, not an LLM problem. Somewhere in your pipeline, something promised structured data and delivered prose instead.


By Cyprian Aarons, AI Consultant at Topiax.
