How to Fix 'JSON parsing error during development' in CrewAI (TypeScript)

By Cyprian AaronsUpdated 2026-04-21
json-parsing-error-during-developmentcrewaitypescript

Opening

JSON parsing error during development in CrewAI usually means one thing: the model returned text that CrewAI expected to be valid JSON, but it wasn’t. In TypeScript projects, this shows up most often when you wire an agent or task to a structured output contract and the LLM responds with markdown, commentary, or malformed JSON.

You’ll typically hit this during local development when testing Task.output_json, Task.output_pydantic, or any flow that expects strict structured output from a CrewAgentExecutor run.

The Most Common Cause

The #1 cause is asking the model for JSON without enforcing a strict schema or parser-safe prompt. CrewAI then tries to parse something like:

  • fenced code blocks
  • trailing commas
  • single quotes
  • explanatory text before/after the JSON

Here’s the broken pattern versus the fixed pattern.

BrokenFixed
output_json is set, but the prompt allows free-form prosePrompt explicitly says “return only valid JSON” and schema is defined
Model returns ```json fences or extra textOutput is constrained to a Pydantic schema or strict object shape
// ❌ Broken: prompt is vague, model often returns prose + JSON
import { Agent, Task, Crew } from "crewai";
import { z } from "zod";

const agent = new Agent({
  role: "Support Analyst",
  goal: "Summarize incidents",
  backstory: "You write concise incident summaries.",
});

const task = new Task({
  description: `
    Summarize this ticket and return JSON with status and summary.
    Ticket: ${ticketText}
  `,
  agent,
  outputJson: true,
});

// CrewAI may throw something like:
// Error: JSON parsing error during development
// Failed to parse LLM response as JSON
// ✅ Fixed: strict schema + explicit instruction to return raw JSON only
import { Agent, Task, Crew } from "crewai";
import { z } from "zod";

const IncidentSchema = z.object({
  status: z.enum(["open", "closed", "pending"]),
  summary: z.string(),
});

const agent = new Agent({
  role: "Support Analyst",
  goal: "Summarize incidents",
  backstory: "You write concise incident summaries.",
});

const task = new Task({
  description: `
Return ONLY valid JSON.
No markdown.
No code fences.
No explanation.

Schema:
{
  "status": "open" | "closed" | "pending",
  "summary": string
}

Ticket:
${ticketText}
`,
  agent,
  outputPydantic: IncidentSchema,
});

The important part is not just “ask for JSON.” It’s making the contract impossible to misread. If your output needs structure, use a schema-backed path instead of hoping the model behaves.

Other Possible Causes

1) You are using the wrong output mode for the data shape

If you use outputJson for nested or typed data but don’t validate it properly, CrewAI may accept the response format at runtime and fail during parsing.

// Risky
new Task({
  description: "Return user profile as JSON",
  agent,
  outputJson: true,
});

// Better
new Task({
  description: "Return user profile following this schema only",
  agent,
  outputPydantic: UserProfileSchema,
});

2) The model is returning markdown fences

This happens a lot with GPT-style models. They answer with:

{
  "summary": "..."
}

That is not always accepted by parsers expecting raw JSON.

// Prompt fix
description: `
Return raw JSON only.
Do not wrap in \`\`\`json.
Do not add commentary.
`

3) Your prompt includes unescaped user input

If you inject ticket text, emails, or logs directly into the prompt and they contain braces, quotes, or partial JSON, you can confuse the model enough to break parsing.

const safeTicket = JSON.stringify(ticketText);

description: `
Analyze this input:
${safeTicket}
`

If the payload is large or messy, wrap it in delimiters and tell the model not to interpret it as instructions.

4) You’re hitting a provider-side formatting issue

Some models are simply worse at strict structured output. If your OpenAI-compatible endpoint or local model drifts from schema compliance under temperature > 0, parsing breaks.

const agent = new Agent({
  role: "Extractor",
  goal: "Extract fields exactly",
  backstory: "...",
  llm: {
    model: "gpt-4o-mini",
    temperature: 0,
  },
});

Lower temperature first. If that fixes it, your issue is generation variability, not CrewAI itself.

5) Your post-processing assumes valid JSON before checking raw output

A lot of TypeScript code does this:

const result = await crew.kickoff();
const parsed = JSON.parse(result.raw); // crashes immediately if raw isn't pure JSON

Inspect result.raw before parsing. In many cases the LLM already returned a near-miss response like extra text plus valid object content.

How to Debug It

  1. Log the raw LLM output before any parsing

    • Print result.raw, not just transformed objects.
    • Look for markdown fences, prefixes like “Sure — here’s the JSON”, or trailing text.
  2. Check whether you’re using outputJson or outputPydantic

    • If you need typed data, prefer schema enforcement.
    • If you only want plain text with manual parsing later, don’t pretend it’s structured output.
  3. Set temperature to zero

    • This removes a lot of formatting drift.
    • If errors disappear at temperature: 0, your prompt is too loose or your model too creative for structured extraction.
  4. Reduce the prompt to a minimal reproduction

    • Remove tool calls, memory, multi-agent handoffs, and long context.
    • Keep only one task returning one small object.
    • If it works in isolation, another layer is corrupting the format.

Prevention

  • Use schema-backed outputs for anything machine-read later:
    • outputPydantic over free-form text plus JSON.parse.
  • Tell the model exactly what not to do:
    • “Return raw JSON only.”
    • “No markdown.”
    • “No explanation.”
  • Keep generation deterministic for extraction tasks:
    • low temperature
    • short prompts
    • narrow schemas

If you want fewer parse failures in CrewAI TypeScript projects, treat structured output as an interface contract, not a suggestion. The moment you rely on “the model should know what JSON looks like,” you’re already building a parser bug.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

Get the Starter Kit

Related Guides