How to Fix 'JSON parsing error in production' in LangGraph (TypeScript)
A JSON parsing error in production in LangGraph usually means one of your nodes, tools, or model outputs returned text that was supposed to be structured JSON but wasn't valid JSON when LangGraph tried to read it.
In TypeScript, it typically surfaces when you use JsonOutputParser, structured tool calls, checkpointing state, or any node that expects a serializable object and instead receives markdown, code fences, trailing commas, or an empty string.
The Most Common Cause
The #1 cause is this: you told the model to return JSON, but you still let it return free-form text. In production, the model eventually emits something like:
```json
{
  "name": "Alice",
  "age": 32,
}
```
That trailing comma is enough to blow up parsing.
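You can reproduce the failure directly in Node, independent of LangGraph:

```ts
// Trailing commas are legal in JavaScript object literals, but not in JSON.
const jsonIsh = '{ "name": "Alice", "age": 32, }';

try {
  JSON.parse(jsonIsh);
} catch (err) {
  console.error(err); // SyntaxError (exact message varies by runtime)
}
```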
Here’s the broken pattern versus the fixed one.
| Broken | Fixed |
|---|---|
| Model returns "JSON-ish" text | Force structured output or strict parsing |
| Parses raw `AIMessage.content` | Parses validated object output |
| Fails at runtime with `Unexpected token` | Returns typed data consistently |
```ts
// BROKEN
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const parser = new JsonOutputParser();

const prompt = `
Return JSON with keys: name, age.
`;

const result = await llm.invoke(prompt);
const parsed = await parser.parse(result.content as string);
// Runtime error:
// SyntaxError: Unexpected token ` in JSON at position 0
```
```ts
// FIXED
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const structured = llm.withStructuredOutput(schema);

const result = await structured.invoke(
  "Return a person with name Alice and age 32."
);

// result is typed and validated
console.log(result.name, result.age);
```
If you’re using LangGraph state updates, the same rule applies. Don’t pass raw assistant text into state if the state field expects an object.
```ts
// BAD: storing a raw string where state expects a JSON-like object
return {
  profile: aiMessage.content,
};

// GOOD: store the parsed object
return {
  profile: parsedProfile,
};
```
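Putting the two together, a LangGraph node that produces structured data can validate with Zod before the update ever touches state. The state shape and node name below are illustrative assumptions, not LangGraph APIs:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const personSchema = z.object({ name: z.string(), age: z.number() });

// Illustrative state shape; adjust to match your graph's annotations.
type PersonState = { input: string; profile?: z.infer<typeof personSchema> };

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const structured = llm.withStructuredOutput(personSchema);

// Nodes return partial state updates; store the validated object, not raw text.
async function extractProfile(state: PersonState): Promise<Partial<PersonState>> {
  const profile = await structured.invoke(state.input);
  return { profile };
}
```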
Other Possible Causes
1) Markdown code fences around JSON
Models often wrap output in triple backticks. That is not valid JSON.
```ts
const bad = "```json\n{\"ok\": true}\n```";
JSON.parse(bad); // SyntaxError
```
Fix this by stripping fences before parsing, or, better, avoid free-form generation entirely.

```ts
// Strips a leading ```json (or bare ```) fence and a trailing ``` fence.
function stripFences(text: string): string {
  return text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "")
    .trim();
}
```
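With that in place, the fenced string from above parses cleanly:

```ts
const cleaned = stripFences(bad); // '{"ok": true}'
console.log(JSON.parse(cleaned).ok); // true
```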
2) Empty or partial output from a node
In LangGraph, a node can return undefined, an empty string, or a truncated payload if a step times out mid-stream or your code exits early.
```ts
// BROKEN: throws if rawResponse is empty, undefined, or truncated
export async function parseNode(state: State) {
  const raw = state.rawResponse; // could be ""
  return { parsed: JSON.parse(raw) };
}
```
Guard before parsing:
```ts
if (!state.rawResponse?.trim()) {
  throw new Error("parseNode received empty rawResponse");
}
```
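Combining the guard with the `stripFences` helper from section 1, a safer version of the node might look like this (the `State` type is whatever your graph already defines):

```ts
export async function parseNode(state: State) {
  const raw = state.rawResponse;
  if (!raw?.trim()) {
    throw new Error("parseNode received empty rawResponse");
  }
  return { parsed: JSON.parse(stripFences(raw)) };
}
```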
3) Tool output is not serializable
LangGraph state and checkpointing expect serializable data. If you return a class instance, Map, Set, circular object, or Date without normalization, serialization can break later.
```ts
return {
  toolResult: new Map([["a", 1]]), // bad for JSON persistence
};
```
Normalize first:
```ts
return {
  toolResult: Object.fromEntries(new Map([["a", 1]])),
};
```
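If a tool returns a mix of these types, one option is a small recursive normalizer before writing to state. This is an illustrative sketch, not a LangGraph API, and it does not handle circular references:

```ts
// Recursively converts Maps, Sets, and Dates into JSON-safe values.
function toJsonSafe(value: unknown): unknown {
  if (value instanceof Date) return value.toISOString();
  if (value instanceof Set) return [...value].map(toJsonSafe);
  if (value instanceof Map) {
    return Object.fromEntries(
      [...value.entries()].map(([k, v]) => [String(k), toJsonSafe(v)])
    );
  }
  if (Array.isArray(value)) return value.map(toJsonSafe);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, toJsonSafe(v)])
    );
  }
  return value;
}
```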
4) Schema drift between node versions
This happens when your graph writes one shape of data and another node reads a different shape after a deploy.
```ts
// old writer
return { customerId: "123" };

// new reader expects:
state.customer.id;
```
Use versioned state fields during rollout:
```ts
type StateV2 = {
  customerId?: string;
  customer?: { id: string };
};
```
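A reader that tolerates both shapes during the rollout window keeps old checkpoints loadable:

```ts
// Works whether the checkpoint was written by the old or the new node version.
function getCustomerId(state: StateV2): string | undefined {
  return state.customer?.id ?? state.customerId;
}
```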
How to Debug It
1. Log the exact raw payload before parsing
   - Print the exact string being passed into `JSON.parse()`.
   - Look for backticks, commentary, prefixes like `Here is the JSON`, or trailing commas.

2. Find the failing boundary
   - Check whether the failure happens in LLM output parsing (`JsonOutputParser`), tool execution result handling, or graph state update / checkpoint serialization.
   - The stack trace usually points to either `SyntaxError`, `ZodError`, or a LangChain/LangGraph serialization path.

3. Validate against schema before writing to state
   - If you use Zod, validate immediately after model output, and don't let invalid data flow into later nodes:

   ```ts
   const parsed = schema.safeParse(candidate);
   if (!parsed.success) {
     console.error(parsed.error.flatten());
     throw new Error("Invalid structured output");
   }
   ```

4. Reproduce with production-like inputs
   - The bug often only appears with long prompts, multilingual responses, or edge-case user input.
   - Test with the same model config used in prod: temperature, max tokens, streaming on/off, and the same prompt template.
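A small helper at every parse boundary makes step 1 automatic. The helper below is an illustrative utility (the name `parseWithLogging` is my own), not part of LangGraph:

```ts
// Logs the exact raw payload whenever parsing fails, then rethrows.
function parseWithLogging(raw: string, label: string): unknown {
  try {
    return JSON.parse(raw);
  } catch (err) {
    // JSON.stringify makes hidden whitespace, fences, and prefixes visible.
    console.error(`[${label}] failed to parse:`, JSON.stringify(raw));
    throw err;
  }
}
```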
Prevention
- Use `withStructuredOutput()` or strict schema validation instead of asking for "JSON" in plain English.
- Never persist raw model text into graph state if another node expects parsed objects.
- Add a small guardrail utility for every parse boundary (a sketch follows this list): trim whitespace, strip code fences if needed, and validate with Zod before storing.
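A minimal sketch of that guardrail, combining the `stripFences` helper from earlier with a Zod schema (the function name `parseAtBoundary` is an illustrative choice):

```ts
import { z } from "zod";

// Trim, strip fences, parse, and validate in one place.
function parseAtBoundary<T>(raw: string, schema: z.ZodType<T>, label: string): T {
  const cleaned = stripFences(raw.trim());

  let candidate: unknown;
  try {
    candidate = JSON.parse(cleaned);
  } catch (err) {
    throw new Error(`[${label}] invalid JSON: ${(err as Error).message}`);
  }

  const result = schema.safeParse(candidate);
  if (!result.success) {
    throw new Error(`[${label}] schema validation failed: ${result.error.message}`);
  }
  return result.data;
}
```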
If you’re seeing SyntaxError: Unexpected token inside a LangGraph run, assume the problem is not “LangGraph can’t parse JSON.” The real issue is almost always that some upstream step produced non-JSON content and your graph trusted it too early.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.