How to Fix 'JSON parsing error' in AutoGen (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
Tags: json-parsing-error, autogen, typescript

What the error means

If you’re seeing JSON parsing error in AutoGen TypeScript, it usually means one of the agents tried to parse model output as JSON and got back something that was not valid JSON. In practice, this shows up when you use structured output, tool calling, or message transforms that expect strict JSON, but the LLM returns extra text, malformed quotes, trailing commas, or a plain-English answer.

The stack trace often points at JSON.parse(...), response_format, or an AutoGen runtime class like AssistantAgent, OpenAIChatCompletionClient, or a tool executor expecting structured arguments.
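You can reproduce the failure outside AutoGen with nothing but `JSON.parse` (a minimal sketch; the reply string mimics a typical fenced model response):

```typescript
// A typical model reply: prose plus a fenced JSON block.
// The fence is built with repeat() so this snippet can itself live
// inside a markdown code block.
const fence = "`".repeat(3);
const reply = `Sure, here's the result:\n${fence}json\n{ "status": "ok" }\n${fence}`;

let failed = false;
try {
  JSON.parse(reply); // throws SyntaxError: the prose and fences are not JSON
} catch {
  failed = true;
}
```

If `failed` is true here, the same string will break any AutoGen code path that calls `JSON.parse` on the raw content.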

The Most Common Cause

The #1 cause is asking the model for JSON but not forcing a strict JSON-only response. The model adds markdown fences, commentary, or partially formatted objects, and AutoGen tries to parse it anyway.

Here’s the broken pattern versus the fixed pattern.

Broken: Model instructed to “return JSON” in plain text
Fixed: Use structured output / strict schema expectations

Broken: Response contains markdown fences or prose
Fixed: Response is raw JSON only

Broken: Parser fails with JSON parsing error
Fixed: Parser receives valid JSON every time
// BROKEN: the model may return prose + JSON, which breaks parsing
import { AssistantAgent } from "@autogen/core";

const agent = new AssistantAgent({
  name: "support-agent",
  systemMessage: `
Return a JSON object with keys:
- status
- reason

Example:
{ "status": "ok", "reason": "matched" }
`,
});

// Somewhere later:
const result = await agent.run("Check if customer is eligible.");
// If the model replies with:
// "Sure — here's the result:\n```json\n{ ... }\n```"
// AutoGen may throw: JSON parsing error

// FIXED: force structured output with a schema-aware client or strict tool contract
import { AssistantAgent } from "@autogen/core";
import { z } from "zod";

const EligibilitySchema = z.object({
  status: z.enum(["ok", "not_ok"]),
  reason: z.string(),
});

const agent = new AssistantAgent({
  name: "support-agent",
  systemMessage:
    "Return only valid JSON matching the provided schema. No markdown, no explanation.",
  // depending on your AutoGen setup, wire schema/structured output through the model client
});

const raw = await agent.run("Check if customer is eligible.");

// validate before using it
const parsed = EligibilitySchema.parse(JSON.parse(raw.content));

The important part is not just “tell it to output JSON.” You need some combination of:

  • a strict structured-output path supported by your model client
  • validation before consumption
  • retries when the response is malformed
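Those three layers can be combined in a small retry loop. A minimal sketch, assuming `callModel` is a hypothetical stand-in for however you invoke your agent; validation is hand-rolled here so the snippet is dependency-free, but a zod schema works the same way:

```typescript
interface Eligibility {
  status: "ok" | "not_ok";
  reason: string;
}

// Hand-rolled type guard; swap in a zod schema in real code.
function isEligibility(value: unknown): value is Eligibility {
  const v = value as Eligibility;
  return (
    typeof v === "object" &&
    v !== null &&
    (v.status === "ok" || v.status === "not_ok") &&
    typeof v.reason === "string"
  );
}

// Parse, validate, and retry with an error hint when the reply is malformed.
async function runWithRetries(
  callModel: (prompt: string) => Promise<string>,
  prompt: string,
  maxAttempts = 3
): Promise<Eligibility> {
  let lastError = "";
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const fullPrompt = lastError
      ? `${prompt}\n\nYour last reply was invalid (${lastError}). Return raw JSON only.`
      : prompt;
    const raw = await callModel(fullPrompt);
    try {
      const parsed: unknown = JSON.parse(raw);
      if (isEligibility(parsed)) return parsed;
      lastError = "JSON did not match the expected schema";
    } catch {
      lastError = "output was not valid JSON";
    }
  }
  throw new Error(`No valid JSON after ${maxAttempts} attempts`);
}
```

The feedback string in the retry prompt matters: telling the model what was wrong usually fixes the output on the second attempt.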

If you’re using OpenAI-compatible clients in AutoGen TS, make sure your client config actually supports structured responses. A prompt alone is not enough.
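For OpenAI-compatible endpoints, the structured-output knob is the `response_format` field on the chat completions request. The sketch below shows only the request body shape; the field names follow the OpenAI API itself, not any AutoGen-specific wrapper, and how you thread this through your model client depends on your AutoGen version:

```typescript
// Request body for an OpenAI-compatible /chat/completions call.
// With "strict: true" the server constrains decoding to the schema,
// so the reply is guaranteed-parseable JSON.
const requestBody = {
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "Return only valid JSON matching the schema." },
    { role: "user", content: "Check if customer is eligible." },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "eligibility",
      strict: true,
      schema: {
        type: "object",
        properties: {
          status: { type: "string", enum: ["ok", "not_ok"] },
          reason: { type: "string" },
        },
        required: ["status", "reason"],
        additionalProperties: false,
      },
    },
  },
};
```

If your provider silently ignores `response_format`, you are back to prompt-only JSON and need the validation/retry layer regardless.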

Other Possible Causes

1) Tool arguments are malformed

When an agent calls a tool, AutoGen parses arguments as JSON. If the model emits invalid tool args, you’ll see failures like Failed to parse function call arguments or a generic JSON parsing error.

// BROKEN tool definition expectation:
// model sends: { query: customer id }   // invalid JSON

// FIXED: tool schema should be explicit and narrow
const tools = [
  {
    name: "lookupCustomer",
    description: "Look up customer by ID",
    parameters: {
      type: "object",
      properties: {
        customerId: { type: "string" },
      },
      required: ["customerId"],
      additionalProperties: false,
    },
  },
];
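On top of a narrow schema, it helps to parse tool arguments defensively instead of letting `JSON.parse` throw deep inside the runtime. A sketch, assuming `rawArgs` is the argument string the model emitted for the tool call (the guard is specific to the hypothetical `lookupCustomer` tool above):

```typescript
type ToolArgsResult =
  | { ok: true; args: { customerId: string } }
  | { ok: false; error: string };

// Parse and validate tool-call arguments, returning a structured error
// instead of throwing. Feed the error string back to the model as the
// tool result so it can retry with corrected arguments.
function parseToolArgs(rawArgs: string): ToolArgsResult {
  let parsed: unknown;
  try {
    parsed = JSON.parse(rawArgs);
  } catch {
    return { ok: false, error: "arguments were not valid JSON" };
  }
  const args = parsed as { customerId?: unknown };
  if (typeof args !== "object" || args === null || typeof args.customerId !== "string") {
    return { ok: false, error: "expected { customerId: string }" };
  }
  return { ok: true, args: { customerId: args.customerId } };
}
```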

2) You’re double-stringifying JSON

This happens when you call JSON.stringify() on something that is already a stringified payload. The parser then receives escaped quotes instead of an object.

// BROKEN
const payload = JSON.stringify(JSON.stringify({ status: "ok" }));

// FIXED
const payload = JSON.stringify({ status: "ok" });

In agent code, this often appears when building messages:

// BROKEN
messages.push({
  role: "assistant",
  content: JSON.stringify(responseText), // responseText is already a string
});
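The fix there is simply `content: responseText`. If you are consuming payloads you do not control and suspect double encoding, you can unwrap defensively. A sketch; note the heuristic mis-handles payloads whose top-level JSON value is legitimately a bare string:

```typescript
// Unwrap a payload that may have been stringified twice: if the first
// JSON.parse yields a string, parse once more.
function parsePossiblyDoubleEncoded(payload: string): unknown {
  let value: unknown = JSON.parse(payload);
  if (typeof value === "string") {
    value = JSON.parse(value);
  }
  return value;
}
```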

3) The response includes markdown fences

A lot of models return the JSON wrapped in markdown fences, so the raw string actually looks like this:

```json
{
  "status": "ok"
}
```

That looks fine to humans, but the fence lines are part of the string, so a parser expecting raw JSON text fails immediately.

// BROKEN prompt instruction:
// "Respond with JSON"

// BETTER prompt instruction:
// "Respond with raw JSON only. Do not wrap in ```json fences."

If you control post-processing, strip fences before parsing:

function stripCodeFences(text: string) {
  // Trim first so the closing fence, if any, sits at the very end of
  // the string; JavaScript's $ does not match before a trailing newline.
  return text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```$/, "")
    .trim();
}
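When the model also adds prose around the fences, stripping fences alone is not enough. A last-resort extraction sketch that pulls the first balanced `{…}` out of the text; it handles surrounding prose but not multiple top-level objects, and structured output is still the better fix:

```typescript
// Scan from the first "{" and track brace depth, skipping braces that
// appear inside JSON strings, until the matching "}" closes the object.
function extractFirstJsonObject(text: string): unknown {
  const start = text.indexOf("{");
  if (start === -1) throw new Error("no JSON object found");
  let depth = 0;
  let inString = false;
  for (let i = start; i < text.length; i++) {
    const ch = text[i];
    if (inString) {
      if (ch === "\\") i++; // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') {
      inString = true;
    } else if (ch === "{") {
      depth++;
    } else if (ch === "}") {
      depth--;
      if (depth === 0) return JSON.parse(text.slice(start, i + 1));
    }
  }
  throw new Error("unbalanced JSON object");
}
```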

4) Your provider returned non-JSON due to temperature or fallback behavior

Higher temperature increases formatting drift. Some providers also fall back to plain text when structured output isn’t supported for that endpoint.

// Safer config for structured responses
const llmConfig = {
  model: "gpt-4o-mini",
  temperature: 0,
};

Also verify you’re not hitting a cheaper endpoint that ignores schema constraints.

How to Debug It

  1. Log the exact raw model output

    • Don’t log the parsed object.
    • Log message.content or tool-call arguments before any transformation.
    • If you see prose, fences, or trailing commas, you found the issue.
  2. Check whether the failure happens in normal chat or tool calling

    • If it fails during assistant response handling, inspect prompt/output formatting.
    • If it fails during function execution, inspect tool argument schemas and emitted args.
  3. Validate with a strict parser locally

    • Take the raw text and run JSON.parse(...) yourself.
    • If that fails outside AutoGen too, the issue is definitely malformed output.
  4. Reduce to one agent and one turn

    • Remove memory, transforms, multiple agents, and tools.
    • Reproduce with a single AssistantAgent call.
    • If it works there but fails in your full flow, one of your middleware layers is mutating content.
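Steps 1 and 3 can be combined into a tiny harness you drop in front of any parse site (a sketch):

```typescript
// Log the exact raw text (JSON.stringify makes whitespace and fence
// characters visible), then attempt a strict parse and surface the
// parser's own error message before rethrowing.
function debugParse(raw: string): unknown {
  console.log("RAW MODEL OUTPUT:", JSON.stringify(raw));
  try {
    return JSON.parse(raw);
  } catch (err) {
    console.error("JSON.parse failed:", (err as Error).message);
    throw err;
  }
}
```

Seeing the stringified raw output next to the parse error usually identifies the culprit (fences, prose, trailing commas) in one glance.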

Prevention

  • Use schema validation on every structured response.

    • In TypeScript, pair AutoGen output with zod or another validator before business logic touches it.
  • Keep temperature at 0 for any response that must be machine-readable.

    • Save creative sampling for free-form text tasks only.
  • Make prompts explicit about format constraints.

    • Say “raw JSON only” and ban markdown fences if your downstream parser expects plain JSON.

If you’re building production agents in banking or insurance workflows, treat LLM output like untrusted input. Parse defensively, validate aggressively, and never assume “looks like JSON” means “is valid JSON.”


By Cyprian Aarons, AI Consultant at Topiax.