# How to Fix 'JSON parsing error in production' in AutoGen (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

If you’re seeing a “JSON parsing error in production” in AutoGen TypeScript, it usually means the model returned text that your code tried to parse as JSON, but the output was not valid JSON. This shows up most often when you ask an agent to return structured data, then feed the response into `JSON.parse()` or a schema validator without enforcing strict JSON-only output.

In practice, this happens in production when prompts drift, tool calls are mixed with free-form text, or the model is allowed to “explain” instead of returning machine-readable output.

## The Most Common Cause

The #1 cause is simple: you asked for JSON, but did not constrain the model tightly enough, then parsed the raw assistant message directly.

This is especially common with AssistantAgent responses in AutoGen TypeScript. The model returns something like:

```text
Sure — here's the result:
{ "status": "approved", "score": 0.92 }
```

That is not valid JSON because of the extra prose before the object.

### Broken pattern vs fixed pattern

| Broken | Fixed |
| --- | --- |
| Parse raw assistant text | Extract only the JSON payload |
| Allow free-form natural language | Force strict JSON output in prompt and validation |
| Assume every response is clean | Validate and retry on malformed output |

```ts
// ❌ Broken
import { AssistantAgent } from "@autogen/agentchat";

const agent = new AssistantAgent({
  name: "parser",
  systemMessage: "Return a JSON object with status and score.",
});

const result = await agent.run("Evaluate this case.");
const text = result.messages.at(-1)?.content as string;

// This blows up when the model adds any extra text.
const data = JSON.parse(text);
console.log(data.status);
```

```ts
// ✅ Fixed
import { AssistantAgent } from "@autogen/agentchat";

const agent = new AssistantAgent({
  name: "parser",
  systemMessage: [
    "Return ONLY valid JSON.",
    "No markdown, no explanation, no code fences.",
    'Schema: {"status":"approved"|"rejected","score":number}',
  ].join(" "),
});

const result = await agent.run("Evaluate this case.");
const text = String(result.messages.at(-1)?.content ?? "").trim();

// Optional: strip accidental code fences if your model still emits them.
const cleaned = text
  .replace(/^```json\s*/i, "")
  .replace(/^```\s*/i, "")
  .replace(/\s*```$/i, "");

const data = JSON.parse(cleaned);
console.log(data.status);
```

The important change is not the `JSON.parse()` call itself. It’s forcing the assistant to return one machine-readable object and nothing else.
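If you cannot fully guarantee a clean reply, a defensive fallback is to pull the first balanced JSON object out of whatever the model sent before parsing. A minimal sketch; the `extractJsonObject` helper below is our own illustration, not an AutoGen API:

```ts
// Extract the first balanced {...} object from mixed model output.
// Tracks brace depth while skipping braces that appear inside JSON strings.
function extractJsonObject(text: string): string | null {
  const start = text.indexOf("{");
  if (start === -1) return null;
  let depth = 0;
  let inString = false;
  for (let i = start; i < text.length; i++) {
    const ch = text[i];
    if (inString) {
      if (ch === "\\") i++; // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') {
      inString = true;
    } else if (ch === "{") {
      depth++;
    } else if (ch === "}") {
      depth--;
      if (depth === 0) return text.slice(start, i + 1);
    }
  }
  return null; // unbalanced braces — treat as a parse failure upstream
}

const raw = 'Sure, here is the result:\n{ "status": "approved", "score": 0.92 }';
const payload = extractJsonObject(raw);
console.log(payload && JSON.parse(payload).status); // → approved
```

Pair this with strict prompting rather than relying on it alone; extraction hides the drift that prompting should eliminate.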

## Other Possible Causes

### 1) Tool output got mixed into the assistant message

If you use tools, AutoGen may return a message stream containing tool calls plus natural language. If you parse the wrong message index, you’ll hit invalid JSON.

```ts
// ❌ Parsing the last message blindly
const last = result.messages.at(-1);
JSON.parse(String(last?.content));
```

Use only the specific assistant content you control, or inspect message roles first.

```ts
// ✅ Filter for assistant text content only
const assistantText = result.messages
  .filter((m) => m.role === "assistant")
  .at(-1)?.content;

const data = JSON.parse(String(assistantText));
```

### 2) The model returned fenced JSON

This is common with GPT-style models. They wrap output in Markdown fences even when prompted not to.

````text
```json
{"status":"approved","score":0.92}
```
````

Fix by stripping fences before parsing, or better, enforce stricter prompting and retry on invalid output.
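A retry loop that combines both ideas might look like this. The `generate` callback stands in for whatever call produces model text (for example, your agent's run method); `stripFences` and `parseWithRepair` are our own names, not AutoGen APIs:

```ts
// Remove a leading ```json / ``` fence and a trailing ``` fence, if present.
function stripFences(text: string): string {
  return text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "")
    .trim();
}

// Try to parse; on failure, re-prompt with the parse error up to `retries` times.
async function parseWithRepair(
  generate: (prompt: string) => Promise<string>,
  prompt: string,
  retries = 2
): Promise<unknown> {
  let ask = prompt;
  for (let attempt = 0; attempt <= retries; attempt++) {
    const text = stripFences(await generate(ask));
    try {
      return JSON.parse(text);
    } catch (err) {
      // Feed the parser's own complaint back to the model as a repair hint.
      ask = `${prompt}\nYour previous reply was not valid JSON ` +
            `(${(err as Error).message}). Return ONLY the corrected JSON.`;
    }
  }
  throw new Error("model never produced valid JSON");
}
```

Cap retries low; if a model fails twice with an explicit repair prompt, the prompt contract itself usually needs fixing.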

### 3) Temperature is too high

Higher temperature increases formatting drift. If your agent is generating structured output, keep it low.

```ts
const agent = new AssistantAgent({
  name: "classifier",
  llmConfig: {
    temperature: 0,
  },
});
```

If you’re using a wrapper around OpenAI-compatible config, make sure that setting actually reaches the underlying model client.

### 4) Your schema expects numbers but gets strings

The payload may be valid JSON but still fail downstream validation if types are wrong.

The model returns:

```json
{"score":"0.92"}
```

That parses fine, but a Zod schema like this fails:

```ts
import { z } from "zod";

const Schema = z.object({
  status: z.enum(["approved", "rejected"]),
  score: z.number(),
});
```

If you need strict typing, validate after parse and reject mismatches early.
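If string-typed numbers are a recurring pattern in your traffic, coercing before strict validation can be kinder than rejecting outright; Zod supports this with `z.coerce.number()`. A dependency-free sketch of the same idea, with `coerceScore` as our own illustrative name:

```ts
// Coerce a numeric string to a number; reject anything else loudly.
function coerceScore(value: unknown): number {
  if (typeof value === "number" && Number.isFinite(value)) return value;
  if (typeof value === "string" && value.trim() !== "") {
    const n = Number(value);
    if (Number.isFinite(n)) return n;
  }
  throw new TypeError(`score is not numeric: ${JSON.stringify(value)}`);
}

const payload = JSON.parse('{"score":"0.92"}');
console.log(coerceScore(payload.score)); // → 0.92
```

Note the empty-string guard: `Number("")` is `0` in JavaScript, which would silently turn a missing value into a passing score.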

## How to Debug It

1. **Log the exact raw content before parsing.** Do not log only `JSON.stringify(error)`. Log the raw assistant content exactly as received.

   ```ts
   console.log("RAW OUTPUT:", JSON.stringify(text));
   ```

2. **Check message roles and indexes.** Confirm you are parsing an assistant message and not a tool call, system message, or user echo.

   ```ts
   for (const m of result.messages) {
     console.log(m.role, JSON.stringify(m.content));
   }
   ```

3. **Validate against a schema before business logic.** Use Zod or similar to catch type drift separately from syntax errors.

   ```ts
   const parsed = JSON.parse(cleaned);
   const checked = Schema.safeParse(parsed);
   if (!checked.success) throw checked.error;
   ```

4. **Reproduce with temperature set to zero.** If the error disappears at `temperature: 0`, you’re dealing with generation drift rather than a parser bug.

## Prevention

- Never parse raw assistant text without cleaning and validating it first.
- Force strict output contracts in your system message: “Return ONLY valid JSON.”
- Add retries with a repair step when parsing fails instead of failing the request immediately.
- Keep structured-output agents separate from conversational agents so free-form chat doesn’t contaminate machine-readable responses.

If you want this to stay stable in production, treat LLM output like untrusted input. Parse defensively, validate aggressively, and don’t assume AutoGen will always hand you clean JSON just because your prompt asked nicely.



By Cyprian Aarons, AI Consultant at Topiax.
