How to Fix 'JSON parsing error during development' in LangChain (TypeScript)
When you see a "JSON parsing error" during development in LangChain TypeScript, it usually means the model returned text that LangChain tried to parse as structured JSON, but the output was not valid JSON. This shows up most often when using `StructuredOutputParser`, `JsonOutputParser`, tool-calling wrappers, or any chain that expects a strict schema.
In practice, this is rarely a “LangChain bug”. It’s usually prompt formatting, model behavior, or a mismatch between what your code expects and what the model actually returns.
The Most Common Cause
The #1 cause is asking the model for JSON without forcing a strict output contract, then parsing the response as if it were guaranteed to be valid JSON.
This pattern breaks because the model may return:
- Markdown fences like `` ```json ``
- Extra commentary before or after the object
- Trailing commas
- Single quotes instead of double quotes
Here's the broken pattern and the fixed pattern.

**Broken:**

```ts
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";

const parser = StructuredOutputParser.fromNamesAndDescriptions({
  name: "person name",
  age: "person age",
});

const prompt = `Return JSON for this person: John is 32 years old.`;

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
const result = await llm.invoke(prompt);

// Fails if result.content contains extra text or invalid JSON
const parsed = await parser.parse(result.content as string);
```

**Fixed:**

```ts
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";

const parser = StructuredOutputParser.fromNamesAndDescriptions({
  name: "person name",
  age: "person age",
});

// Inject the instructions at invoke time. Format instructions contain
// literal braces, which the template would otherwise treat as variables.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You must return valid JSON only.\n{format_instructions}"],
  ["user", "Return data for: John is 32 years old."],
]);

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const chain = prompt.pipe(llm).pipe(parser);

const parsed = await chain.invoke({
  format_instructions: parser.getFormatInstructions(),
});
```
Why this works:
- The parser instructions are injected into the prompt
- The system message tells the model to return JSON only
- `temperature: 0` reduces random formatting drift
- The chain pipes directly into the parser instead of manually handling raw text
If you’re using JsonOutputParser, the same rule applies. Don’t just “ask for JSON”; make JSON part of the contract.
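The "contract" idea can be sketched without any LangChain machinery: derive the format instructions from a single schema object, so the prompt and whatever validates the output can never drift apart. The helper below is a hypothetical illustration, not a LangChain API:

```typescript
// Hypothetical helper (not part of LangChain): one schema object
// drives both the prompt contract and, later, runtime validation.
type FieldType = "string" | "number";

function formatInstructions(schema: Record<string, FieldType>): string {
  const shape = Object.entries(schema)
    .map(([key, type]) => `"${key}": ${type}`)
    .join(", ");
  return `Return ONLY valid JSON matching this shape, with no markdown fences: { ${shape} }`;
}

const personSchema: Record<string, FieldType> = {
  name: "string",
  age: "number",
};

const instructions = formatInstructions(personSchema);
console.log(instructions);
```

Because the field names and types come from one place, adding a field to the schema updates the prompt automatically.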
Other Possible Causes
1. You are parsing markdown-fenced JSON
A lot of models return the JSON wrapped in markdown fences:

````
```json
{
  "name": "John",
  "age": 32
}
```
````

That looks fine to humans, but parsers choke when the fences arrive as part of the string.
Fix by stripping fences before parsing, or better, prevent them in the prompt.
```ts
const raw = result.content as string;
const cleaned = raw.replace(/```json|```/g, "").trim();
const parsed = JSON.parse(cleaned);
```
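If you cannot prevent the fences, a slightly more defensive cleanup also tolerates leading commentary by slicing from the first `{` to the last `}`. This is a sketch that assumes the payload is a single top-level JSON object:

```typescript
// Defensive extraction: strip markdown fences, then slice from the
// first "{" to the last "}" to drop surrounding commentary.
// Assumes a single top-level JSON object in the output.
const FENCE = new RegExp("`{3}(?:json)?", "g");

function extractJsonObject(raw: string): unknown {
  const withoutFences = raw.replace(FENCE, "");
  const start = withoutFences.indexOf("{");
  const end = withoutFences.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(withoutFences.slice(start, end + 1));
}

// Simulated messy model output: fences plus chatty preamble.
const fence = "`".repeat(3);
const messy = `Here you go: ${fence}json\n{ "name": "John", "age": 32 }\n${fence}`;
const person = extractJsonObject(messy) as { name: string; age: number };
console.log(person.name, person.age); // John 32
```

This still fails loudly on truly broken JSON, which is what you want: a clear error beats silently parsing garbage.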
2. Your schema and prompt do not match
If your schema says age must be a number, but your prompt encourages free-form prose, you’ll get invalid output or type mismatches.
```ts
// Schema expects:
{
  name: "string",
  age: "number"
}

// But prompt says:
"Describe John naturally and include his age."
```

Be explicit:

```
Return ONLY this shape:
{ "name": string, "age": number }
```
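Being explicit in the prompt is half of the fix; validating the parsed value at runtime catches the rest. Here is a minimal sketch for the `{ name, age }` shape (in a real project a schema library such as Zod typically does this job):

```typescript
// Minimal runtime check for the { name: string, age: number } shape.
interface Person {
  name: string;
  age: number;
}

function assertPerson(value: unknown): Person {
  const obj = (value ?? {}) as Record<string, unknown>;
  const { name, age } = obj;
  if (typeof name !== "string" || typeof age !== "number") {
    throw new Error(`Output does not match schema: ${JSON.stringify(value)}`);
  }
  return { name, age };
}

const ok = assertPerson(JSON.parse('{ "name": "John", "age": 32 }'));
console.log(ok); // { name: 'John', age: 32 }
```

Now a model that returns `"age": "32"` (a string) fails immediately at the boundary instead of corrupting downstream logic.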
3. You are using a model that does not reliably follow structured output
Some models are better at tool calling and schema adherence than others. If you’re using a smaller model or an older deployment, structured output failures become common.
Typical symptoms:
- `OutputParserException`
- `SyntaxError: Unexpected token ... in JSON at position ...`
Try switching to a stronger model or use provider-native structured output if available.
```ts
const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});
```
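One way to make the "stronger model" advice operational is a fallback chain: try the cheaper model first and escalate only when parsing fails. The sketch below uses stand-in async functions for the models; in LangChain each would be an `llm.invoke(...)` call:

```typescript
// Fallback chain: try each model in order, escalating when the
// response fails to parse. The models here are stand-in functions.
type Model = (prompt: string) => Promise<string>;

async function invokeWithFallback(
  models: Model[],
  prompt: string
): Promise<unknown> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return JSON.parse(await model(prompt));
    } catch (err) {
      lastError = err; // invalid JSON: escalate to the next model
    }
  }
  throw lastError;
}

// Simulate a small model that rambles and a larger one that behaves.
const flaky: Model = async () => "Sure, here you go: { name: John }";
const strict: Model = async () => '{ "name": "John", "age": 32 }';

invokeWithFallback([flaky, strict], "Return JSON for John, 32").then(
  (parsed) => console.log(parsed) // { name: 'John', age: 32 }
);
```

This keeps the common case cheap while giving you a deterministic escape hatch when the small model drifts.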
4. You are reading the wrong field from the LangChain response
With chat models, invoke() may return an AI message object, not plain text. If you cast blindly and parse the wrong property, you’ll get garbage input.
```ts
// Wrong
const text = result as unknown as string;

// Right
const text =
  typeof result.content === "string"
    ? result.content
    : JSON.stringify(result.content);
```
If you’re working with tool calls or message arrays, inspect the actual structure before parsing.
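Chat message content is a union: a plain string or an array of typed parts. The helper below mirrors that shape in simplified form (the types here are illustrative, not imported from LangChain) and always yields text you can safely hand to a parser:

```typescript
// Simplified mirror of a chat message content union:
// either a plain string or an array of typed parts.
type ContentPart = { type: string; text?: string };
type MessageContent = string | ContentPart[];

function contentToText(content: MessageContent): string {
  if (typeof content === "string") return content;
  // Keep only text parts; tool calls, images, etc. carry no parseable JSON.
  return content
    .filter((part) => part.type === "text" && typeof part.text === "string")
    .map((part) => part.text)
    .join("");
}

console.log(contentToText('{ "ok": true }')); // { "ok": true }
console.log(contentToText([{ type: "text", text: "hello" }])); // hello
```

The key point: extract text deliberately based on the actual structure, instead of casting the whole response object and hoping.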
How to Debug It
1. **Log the raw model output before parsing.**
   - Print `result.content`, not just parsed data.
   - Look for markdown fences, leading commentary, or truncated JSON.
2. **Check whether LangChain is throwing an `OutputParserException`.**
   - That usually means a formatting mismatch.
   - A plain `SyntaxError` often means raw invalid JSON reached `JSON.parse()`.
3. **Verify your prompt includes format instructions.**
   - If you use `StructuredOutputParser`, call `parser.getFormatInstructions()`.
   - If those instructions are missing from the system/user prompt, expect failures.
4. **Reduce variables.**
   - Set `temperature: 0`.
   - Swap to a stronger model.
   - Remove extra few-shot examples temporarily.
   - Test with one minimal input until parsing succeeds.
A good debugging loop looks like this:
```ts
try {
  const result = await chain.invoke({});
  console.log("RAW:", result);
} catch (err) {
  console.error("PARSE ERROR:", err);
}
```
Prevention
- **Use a parser-driven prompt flow:**
  - Build prompts with `StructuredOutputParser` or `JsonOutputParser`.
  - Pipe LLM output directly into the parser.
- **Keep prompts strict:**
  - Say "return valid JSON only".
  - Include exact field names and types.
  - Avoid asking for explanations alongside structured output.
- **Prefer deterministic settings in production:**
  - Use `temperature: 0`.
  - Validate outputs before downstream use.
  - Add retries with repair logic if your workflow depends on strict schemas.
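The "retries with repair logic" point can be sketched as a loop that feeds the invalid output and the parse error back to the model. `callModel` below is a stand-in for a real `llm.invoke(...)` call:

```typescript
// Retry-with-repair sketch: on parse failure, ask the model to fix
// its own output, bounded by a maximum number of attempts.
type CallModel = (prompt: string) => Promise<string>;

async function parseWithRepair(
  callModel: CallModel,
  prompt: string,
  maxAttempts = 3
): Promise<unknown> {
  let current = prompt;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await callModel(current);
    try {
      return JSON.parse(raw);
    } catch (err) {
      // Feed the broken output and the error back for the next attempt.
      current = `Fix this so it is valid JSON only:\n${raw}\nError: ${String(err)}`;
    }
  }
  throw new Error(`No valid JSON after ${maxAttempts} attempts`);
}

// Simulated model: fails once, then returns clean JSON.
let calls = 0;
const fakeModel: CallModel = async () =>
  ++calls === 1 ? "Sorry, { broken" : '{ "name": "John", "age": 32 }';

parseWithRepair(fakeModel, "Return JSON for John, 32").then((parsed) =>
  console.log(parsed) // { name: 'John', age: 32 }
);
```

The bounded attempt count matters: without it, a model that never converges turns a parsing bug into an infinite loop and an unbounded API bill.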
If you're still seeing this JSON parsing error during development, inspect one real raw response first. In most LangChain TypeScript setups, that single log line will tell you exactly whether you have a prompt problem, a schema problem, or a model-output problem.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.