How to Fix 'JSON parsing error' in LangGraph (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

What the error means

JSON parsing error in LangGraph usually means one node returned data that LangGraph tried to treat as structured state, but the value was not valid JSON-compatible output. In TypeScript, this shows up a lot when a model response is passed through a parser, or when you return a string where the graph expects an object.

The failure typically happens at node boundaries: after an LLM call, during JsonOutputParser parsing, or when StateGraph merges node outputs into typed state.

The Most Common Cause

The #1 cause is returning raw text from a node that should return an object matching your graph state.

If your state is typed as an object, LangGraph expects the node to return a partial state object. Returning a bare string such as "approved", or a stringified JSON payload such as '{"status":"approved"}', often triggers errors such as:

  • Error: Failed to parse JSON
  • SyntaxError: Unexpected token ... in JSON at position ...
  • InvalidUpdateError: Expected object, got string

Broken vs fixed pattern

  • Broken: the node returns a raw string, so the parser gets text it cannot safely merge. Common when you return the result of await model.invoke() directly.
  • Fixed: the node returns a structured object, so the graph receives a valid partial state. Achieved by explicitly mapping model output to the state shape.
import { StateGraph } from "@langchain/langgraph";
import { z } from "zod";

const StateSchema = z.object({
  status: z.string().optional(),
  reason: z.string().optional(),
});

type State = z.infer<typeof StateSchema>;

// ❌ Broken: returns a string instead of a partial state object
async function classifyNode(_: State) {
  const result = "approved";
  return result;
}

// ✅ Fixed: return an object that matches the graph state
async function classifyNodeFixed(_: State) {
  const result = "approved";
  return { status: result };
}
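To catch this class of bug at runtime rather than deep inside graph execution, you can wrap every node return in a small guard. The helper below (a hypothetical assertPartialState, not part of LangGraph) is a minimal sketch:

```typescript
// Hypothetical runtime guard: throws if a node returns anything other than a
// plain object, so shape bugs surface at the node that caused them instead of
// as an opaque parsing error downstream.
function assertPartialState(
  value: unknown,
  nodeName: string
): Record<string, unknown> {
  if (typeof value !== "object" || value === null || Array.isArray(value)) {
    throw new Error(
      `Node "${nodeName}" returned ${JSON.stringify(value)} instead of a partial state object`
    );
  }
  return value as Record<string, unknown>;
}
```

Usage: return assertPartialState({ status: result }, "classify"); inside each node, so a stray string fails immediately with the node name attached.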

If you are using an LLM, make sure you parse its output before returning it to the graph:

import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const parser = new JsonOutputParser<{ status: string }>();

// ❌ Broken: model output is plain text, not guaranteed JSON
async function nodeBroken() {
  const msg = await model.invoke("Return approved or rejected");
  return msg.content; // often a string, not valid JSON for downstream parsing
}

// ✅ Fixed: force structured output or parse explicitly
async function nodeFixed() {
  const chain = model.pipe(parser);
  const parsed = await chain.invoke("Return {\"status\":\"approved\"}");
  return parsed; // { status: "approved" }
}

Other Possible Causes

1. You are parsing model output that includes markdown fences

LLMs love wrapping JSON in ```json fences. That is not valid JSON for JSON.parse().

// ❌ Broken
const text = "```json\n{\"status\":\"approved\"}\n```";
JSON.parse(text); // SyntaxError

// ✅ Fixed
const cleaned = text.replace(/^```json\s*|\s*```$/g, "");
JSON.parse(cleaned);
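In practice models also wrap JSON in prose ("Sure, here you go: ..."), so a slightly more defensive version helps. This sketch (a hypothetical extractJson helper) strips fences and surrounding text, assuming the payload contains exactly one top-level object:

```typescript
// Hypothetical helper: removes optional markdown fences and any leading or
// trailing prose, then parses the first {...} span. A sketch, not a full JSON
// scanner -- it assumes exactly one top-level object in the payload.
function extractJson<T = unknown>(raw: string): T {
  const withoutFences = raw.replace(/```(?:json)?/g, "").trim();
  const start = withoutFences.indexOf("{");
  const end = withoutFences.lastIndexOf("}");
  if (start === -1 || end === -1) {
    throw new Error(`No JSON object found in: ${raw}`);
  }
  return JSON.parse(withoutFences.slice(start, end + 1)) as T;
}
```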

2. Your prompt does not force strict JSON

If the prompt says “respond in JSON” but does not constrain format, the model may return prose.

// ❌ Broken prompt
const prompt = "Classify this request and respond in JSON.";

// ✅ Better prompt
const prompt =
  'Return ONLY valid JSON with this schema: {"status":"approved"|"rejected","reason":"string"}';

With LangChain/LangGraph, prefer structured output over prompt-only discipline.
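One way to keep a strict prompt from drifting out of sync with your validator is to generate the format instruction from a single source of truth. The helper below (a hypothetical formatInstruction, built on plain string templating rather than any LangChain API) is a sketch of that idea:

```typescript
// Single source of truth for allowed values.
const STATUSES = ["approved", "rejected"] as const;

// Hypothetical helper: builds a strict format instruction from the allowed
// values, so the prompt and the validator never disagree about the enum.
function formatInstruction(statuses: readonly string[]): string {
  const union = statuses.map((s) => JSON.stringify(s)).join("|");
  return (
    `Return ONLY valid JSON with this schema: ` +
    `{"status":${union},"reason":"string"}. No markdown fences, no commentary.`
  );
}
```

If you later add a status, the prompt updates automatically and your validator (Zod enum or type guard) can be derived from the same STATUSES array.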

3. Your Zod schema and returned object do not match

A mismatch between your schema and actual output can surface as parsing errors or invalid updates.

const Schema = z.object({
  status: z.enum(["approved", "rejected"]),
});

// ❌ Broken
const output = { status: "pending" }; // fails schema validation

// ✅ Fixed
const output = { status: "approved" };

If you use Annotation.Root or typed state reducers in LangGraph, the same rule applies: keys and types must line up exactly.
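If you want the same check without pulling in Zod, a plain TypeScript type guard works too. This sketch (a hypothetical isClassification) mirrors the status enum above plus the optional reason field from the earlier state schema:

```typescript
type Classification = { status: "approved" | "rejected"; reason?: string };

// Hypothetical type guard: narrows unknown LLM output to the exact state
// shape before it reaches the graph, rejecting wrong enums and wrong types.
function isClassification(value: unknown): value is Classification {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  const statusOk = v.status === "approved" || v.status === "rejected";
  const reasonOk = v.reason === undefined || typeof v.reason === "string";
  return statusOk && reasonOk;
}
```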

4. You are returning AIMessage.content without checking its shape

Depending on the provider and settings, .content can be a string, array of blocks, or tool-call payload.

// ❌ Broken assumption
const msg = await model.invoke("...");
return JSON.parse(msg.content as string);

// ✅ Safer handling
if (typeof msg.content !== "string") {
  throw new Error(`Unexpected content type: ${typeof msg.content}`);
}
return JSON.parse(msg.content);
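If you need to tolerate block arrays rather than reject them, you can flatten the text blocks first. This is a sketch; the { type: "text", text } block shape is an assumption, so check your provider's actual content format:

```typescript
// Hypothetical normalizer: accepts either a plain string or an array of
// content blocks and concatenates the text parts, ignoring non-text blocks
// such as tool calls. The block shape here is an assumption.
function contentToText(content: unknown): string {
  if (typeof content === "string") return content;
  if (Array.isArray(content)) {
    return content
      .filter(
        (b): b is { type: string; text: string } =>
          typeof b === "object" && b !== null && (b as any).type === "text"
      )
      .map((b) => b.text)
      .join("");
  }
  throw new Error(`Unsupported content type: ${typeof content}`);
}
```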

How to Debug It

  1. Log the exact value leaving each node

    • Print the raw return value before it enters the next node.
    • If you see strings like "approved" where an object is expected, you found the issue.
  2. Check whether the failure happens before or after parsing

    • If it fails inside JSON.parse(), your model output is malformed.
    • If it fails during graph execution with something like InvalidUpdateError, your node returned the wrong shape.
  3. Validate against your schema outside LangGraph

    • Run the same payload through Zod manually.
    • Example:
      const parsed = Schema.safeParse(output);
      console.log(parsed.success ? parsed.data : parsed.error.flatten());
      
  4. Inspect prompts and model settings

    • Look for missing format instructions.
    • Check whether temperature is set high; higher values can introduce formatting drift.
    • For production flows, set temperature to 0 for structured extraction.
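Steps 1 and 2 above can be automated with a thin wrapper around each node function. This sketch (a hypothetical withLogging helper) only logs and re-throws, so graph behavior is unchanged:

```typescript
// Hypothetical wrapper: logs each node's exact return value (step 1) and tags
// failures with the node name (step 2), making it easy to tell a malformed
// model output from a wrong-shaped node return.
function withLogging<S, R>(name: string, node: (state: S) => Promise<R>) {
  return async (state: S): Promise<R> => {
    try {
      const out = await node(state);
      console.log(`[${name}] returned:`, JSON.stringify(out));
      return out;
    } catch (err) {
      console.error(`[${name}] failed:`, err);
      throw err;
    }
  };
}
```

Wrap nodes at registration time, e.g. graph.addNode("classify", withLogging("classify", classifyNodeFixed)), and remove the wrapper once the offending node is found.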

Prevention

  • Use structured outputs instead of free-form text whenever possible.
  • Make every LangGraph node return a partial state object, never raw strings.
  • Validate LLM responses with Zod before merging them into graph state.
  • Keep prompts strict:
    • specify exact keys
    • specify allowed values
    • forbid markdown fences and extra commentary

By Cyprian Aarons, AI Consultant at Topiax.
