How to Fix 'JSON parsing error in production' in LangChain (TypeScript)
When LangChain says JSON parsing error in production, it usually means one thing: some part of your chain expected valid JSON, but the model returned text that wasn’t strict JSON. In TypeScript, this often shows up when using StructuredOutputParser, JsonOutputParser, function/tool calling, or a custom parser behind an agent.
This is usually not a LangChain bug. It’s a contract mismatch between your prompt, model output, and parser.
The Most Common Cause
The #1 cause is asking the model for JSON without enforcing a strict schema or output format, then parsing the response as if it were guaranteed to be valid JSON.
Here’s the broken pattern:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const parser = new JsonOutputParser();

const prompt = PromptTemplate.fromTemplate(`
Return user data as JSON:
Name: {name}
Age: {age}
`);

const chain = prompt.pipe(model).pipe(parser);

const result = await chain.invoke({
  name: "Amina",
  age: 32,
});
```
If the model returns:
```text
Sure — here is the JSON:
{"name":"Amina","age":32}
```
you’ll get errors like:
- `SyntaxError: Unexpected token S in JSON at position 0`
- `Error: Failed to parse. Text returned was not valid JSON`
- `OutputParserException: Failed to parse JSON`
The fixed pattern is to use a structured parser and inject formatting instructions into the prompt:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { z } from "zod";
import { PromptTemplate } from "@langchain/core/prompts";

const schema = z.object({
  name: z.string(),
  age: z.number(),
});

const parser = StructuredOutputParser.fromZodSchema(schema);
const formatInstructions = parser.getFormatInstructions();

const prompt = PromptTemplate.fromTemplate(`
You must follow these instructions exactly:
{format_instructions}

Name: {name}
Age: {age}
`);

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const chain = prompt.pipe(model).pipe(parser);

const result = await chain.invoke({
  name: "Amina",
  age: 32,
  format_instructions: formatInstructions,
});
```
The key difference:
- The broken version assumes valid JSON will happen naturally.
- The fixed version tells the model exactly what shape to return.
Other Possible Causes
| Cause | What it looks like | Fix |
|---|---|---|
| Model adds prose around JSON | Here you go: before {...} | Use strict format instructions or tool/function calling |
| Temperature too high | Random formatting, trailing commas, extra text | Set temperature: 0 |
| Wrong parser for the output type | Parsing markdown/text with JsonOutputParser | Match parser to actual output |
| Streaming partial output into JSON parser | Parser sees incomplete JSON chunks | Parse only after full completion |
1) Model adds extra text before or after JSON
This is common with prompts like “return JSON only” that are too weak.
```typescript
// Broken
const prompt = `Give me a JSON object with fields name and age.`;

// Fixed
const prompt = `
Return ONLY valid JSON.
No markdown.
No explanation.
No backticks.
`;
```
If you’re using OpenAI tool calling, prefer that over raw text parsing. It gives you much cleaner structure.
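In LangChain JS, the most direct route to tool-calling-backed structure is `withStructuredOutput()`. A minimal sketch (model name, schema fields, and the input sentence are illustrative, not from the original article):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// Define the shape you want back. withStructuredOutput() binds this schema
// via tool/function calling, so the model fills in structured arguments
// instead of emitting free-form text you then have to JSON.parse.
const schema = z.object({
  name: z.string(),
  age: z.number(),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const structuredModel = model.withStructuredOutput(schema);

// The result is already a parsed, schema-shaped object; no manual parsing.
const result = await structuredModel.invoke("Extract: Amina is 32 years old.");
```

Because the structure travels through the tool-calling channel rather than the text channel, prose like "Sure, here is the JSON" can no longer break your parser.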
2) Temperature is not zero
At higher temperatures, models get creative with punctuation and formatting.
```typescript
// Broken
new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0.7 });

// Fixed
new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
```
For production parsers, keep temperature at zero unless you have a strong reason not to.
3) Parser does not match the output contract
If you use JsonOutputParser but your prompt returns free-form text, it will fail every time.
```typescript
// Broken
import { JsonOutputParser } from "@langchain/core/output_parsers";

// This expects raw JSON only.
const parser = new JsonOutputParser();
```
If you actually want typed validation, use Zod-backed structured parsing:
```typescript
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { z } from "zod";

const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    decision: z.enum(["approve", "reject"]),
    reason: z.string(),
  })
);
```
4) Streaming response parsed too early
If you pipe partial chunks into a parser, you’ll see invalid JSON errors even when the final answer would have been valid.
```typescript
// Broken idea: parse each streamed chunk as if it were complete JSON.
// Fixed: collect the full content first, then parse once at the end.
let fullText = "";
for await (const chunk of stream) {
  fullText += chunk.content ?? "";
}
const parsed = JSON.parse(fullText);
```
Streaming is fine. Parsing incomplete streamed content is not.
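When you cannot change the prompt and must parse raw text, a defensive extraction step can recover JSON that is wrapped in fences or prose. This is a sketch of my own; `extractJson` is a hypothetical helper, not a LangChain API:

```typescript
// Tolerant last-resort parser: strips markdown code fences and surrounding
// prose before calling JSON.parse. Only use this as a fallback; fixing the
// prompt or using tool calling is the real solution.
function extractJson(raw: string): unknown {
  // Remove markdown fences like ```json ... ```
  const unfenced = raw.replace(/```(?:json)?/gi, "");
  // Keep only the span from the first "{" to the last "}"
  const start = unfenced.indexOf("{");
  const end = unfenced.lastIndexOf("}");
  if (start === -1 || end === -1 || end < start) {
    throw new Error(`No JSON object found in model output: ${raw}`);
  }
  return JSON.parse(unfenced.slice(start, end + 1));
}
```

Note this still throws on genuinely malformed JSON (trailing commas, single quotes), so it complements, rather than replaces, strict prompting.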
How to Debug It
1) Log the raw LLM output before parsing

- Don’t inspect only the exception.
- Print the exact string returned by the model.
- If you see prose, markdown fences, or trailing commas, that’s your issue.

2) Check which class is throwing

- Common offenders: `JsonOutputParser`, `StructuredOutputParser`, `OutputParserException`.
- If the stack trace points into `parse()` or `parseResult()`, this is almost always an output-format problem.

3) Run with temperature set to zero

- If the error disappears, your prompt was too loose.
- Keep config deterministic while debugging:

```typescript
new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
```

4) Test the exact response against your schema

- Copy the raw text into a local script and run `JSON.parse()`.
- If that fails locally, LangChain isn’t the root cause.
- Then validate against Zod if you’re using structured outputs.
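The first debugging step can be wired directly into your code. This is a small wrapper of my own (`parseWithLogging` is a hypothetical helper, not a LangChain API) that captures the exact raw string whenever parsing fails:

```typescript
// Wrap JSON.parse so that any failure logs the raw model output verbatim.
// JSON.stringify on the raw string makes hidden characters (newlines,
// backticks, BOMs) visible in the log line.
function parseWithLogging(raw: string): unknown {
  try {
    return JSON.parse(raw);
  } catch (err) {
    console.error("Raw model output before parsing:", JSON.stringify(raw));
    throw err;
  }
}
```

With this in place, production logs show exactly what the model returned, not just the parser's complaint about it.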
Prevention
- Use structured outputs for anything production-critical. Prefer `StructuredOutputParser.fromZodSchema()` or tool/function calling over plain-text “JSON please” prompts.
- Keep prompts strict and explicit:

```text
Return ONLY valid JSON matching this schema.
No markdown.
No commentary.
No code fences.
```

- Add a pre-prod test that asserts raw model output can be parsed. Feed known inputs through your chain in CI, and fail fast if parsing breaks before deployment.
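A framework-free sketch of such a pre-deploy check, with an assumed `name`/`age` shape (in a real pipeline the raw string would come from invoking your chain against fixture inputs):

```typescript
// Minimal smoke check: parse a known raw output and assert the expected
// shape. Throwing here fails the CI job before a bad prompt reaches prod.
function assertUserShape(raw: string): void {
  const data = JSON.parse(raw) as { name?: unknown; age?: unknown };
  if (typeof data.name !== "string" || typeof data.age !== "number") {
    throw new Error(`Output does not match schema: ${raw}`);
  }
}
```

The same idea scales up cleanly: swap the hand-rolled checks for `schema.parse(data)` with the Zod schema you already use at runtime.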
If you’re seeing JSON parsing error in production in LangChain TypeScript, start with the raw output. In most cases, the fix is not in the parser itself — it’s in how tightly you constrained the model to produce valid structured data.
Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.