How to Fix 'output parsing error' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

Tags: output-parsing-error, langchain, typescript

What the error means

An output parsing error in LangChain usually means the model returned text that did not match the structure your parser expected. In TypeScript, this shows up most often with StructuredOutputParser, JsonOutputParser, or agents that expect a very specific output format.

The usual trigger is simple: your prompt asked for JSON, but the model returned prose, markdown, extra text, or malformed JSON.

The Most Common Cause

The #1 cause is a prompt/parser mismatch. You tell LangChain to parse structured output, but you do not give the model strict formatting instructions, or you let it wrap the answer in commentary.

Here is the broken pattern:

Broken | Fixed
prompt says “return JSON” in plain English | format instructions are injected into the prompt
parser expects a strict schema | model is told exactly what to output
output contains extra text | output is parseable JSON only
// ❌ Broken: parser expects structured JSON, prompt is too loose
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { z } from "zod";
import { PromptTemplate } from "@langchain/core/prompts";

const schema = z.object({
  name: z.string(),
  age: z.number(),
});

const parser = StructuredOutputParser.fromZodSchema(schema);

const prompt = PromptTemplate.fromTemplate(`
Extract the person details from this text:
{text}

Return JSON.
`);

const chain = prompt.pipe(new ChatOpenAI({ model: "gpt-4o-mini" })).pipe(parser);

await chain.invoke({
  text: "John Doe is 32 years old.",
});

// ✅ Fixed: format instructions are included explicitly
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { z } from "zod";
import { PromptTemplate } from "@langchain/core/prompts";

const schema = z.object({
  name: z.string(),
  age: z.number(),
});

const parser = StructuredOutputParser.fromZodSchema(schema);

const prompt = PromptTemplate.fromTemplate(`
Extract the person details from this text.

{format_instructions}

Text:
{text}
`);

const chain = prompt.pipe(new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 })).pipe(parser);

await chain.invoke({
  text: "John Doe is 32 years old.",
  format_instructions: parser.getFormatInstructions(),
});

If you see errors like these, this is usually the issue:

  • OutputParserException: Failed to parse. Text does not match expected format
  • Error parsing JSON
  • Could not parse LLM output
  • Failed to parse structured output

The fix is not “retry harder”. The fix is to make the contract between prompt and parser explicit.

Other Possible Causes

1. The model returned markdown fences

A lot of models wrap JSON in triple backticks even when you ask them not to.

// Model returns:
// ```json
// {"name":"John","age":32}
// ```

That breaks parsers expecting raw JSON. If needed, strip fences before parsing, or better, tighten your instructions and use a lower temperature.
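If you do need to strip fences, a minimal helper looks like this. The function name is my own and this is not a LangChain API; run it on the raw model output before handing the string to your parser:

```typescript
// Hypothetical helper: remove markdown code fences wrapped around a JSON payload.
// If no fence is found, the input is returned trimmed but otherwise untouched.
function stripCodeFences(text: string): string {
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
  return (fenced ? fenced[1] : text).trim();
}
```

This only handles the common single-fence case; a response with multiple fenced blocks would need more care.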

2. Your schema and prompt disagree

If your Zod schema says age is a number but the model returns "32" as a string, parsing fails.

const schema = z.object({
  name: z.string(),
  age: z.number(), // model returns "32"
});

Fix it by either adjusting the schema or making the prompt enforce numeric output.
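A third option is to coerce before validating. zod supports this declaratively with `z.coerce.number()`; the manual equivalent, sketched here without any dependency (the helper and interface names are my own), looks like this:

```typescript
// Hypothetical normalizer: accept `age` as a number or a numeric string,
// so model output like {"name":"John","age":"32"} still validates.
interface Person {
  name: string;
  age: number;
}

function normalizePerson(raw: { name: string; age: number | string }): Person {
  const age = typeof raw.age === "string" ? Number(raw.age) : raw.age;
  if (Number.isNaN(age)) {
    throw new Error(`age is not numeric: ${raw.age}`);
  }
  return { name: raw.name, age };
}
```

With zod, `z.object({ name: z.string(), age: z.coerce.number() })` expresses the same tolerance in the schema itself.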

3. You are using an agent that can emit tool chatter

Agents often produce intermediate reasoning or tool-call noise that breaks downstream parsers.

// Example pattern that can fail if final output isn't clean
const result = await agentExecutor.invoke({ input: "..." });

If your agent is meant to return structured data, use an output parser on the final response only, or switch to function/tool calling where supported.
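If you cannot switch to tool calling, one pragmatic (and admittedly fragile) fallback is to pull the JSON object out of the noisy final answer. This helper is my own sketch, not a LangChain utility, and it assumes the surrounding chatter contains no braces of its own:

```typescript
// Hypothetical extractor: take the first "{" through the last "}" and parse it.
// Fragile by design; prefer tool/function calling for guaranteed structure.
function extractFinalJson(output: string): unknown {
  const start = output.indexOf("{");
  const end = output.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in agent output");
  }
  return JSON.parse(output.slice(start, end + 1));
}
```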

4. Streaming partial output into a parser

Parsing partial tokens while streaming can trigger parse failures before completion.

// ❌ Don't parse incomplete chunks as if they were final output
for await (const chunk of stream) {
  parser.parse(chunk); // likely to fail mid-stream
}

Buffer first, then parse once the full response arrives.
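A sketch of the buffer-then-parse pattern, using a fake string stream in place of a real LangChain stream so it stands alone:

```typescript
// Buffer every chunk, then parse once the stream is exhausted.
async function collectAndParse(stream: AsyncIterable<string>): Promise<unknown> {
  let buffer = "";
  for await (const chunk of stream) {
    buffer += chunk; // accumulate; do NOT parse partial JSON here
  }
  return JSON.parse(buffer);
}

// Stand-in for a real model stream that emits JSON in fragments.
async function* fakeStream(): AsyncGenerator<string> {
  yield '{"name":';
  yield '"John",';
  yield '"age":32}';
}
```

With a real chat-model stream you would typically accumulate each chunk's `content` field rather than the chunk object itself.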

How to Debug It

  1. Log the raw LLM output before parsing
    • Do not inspect only the parsed object.
    • Print exactly what came back from the model.
const raw = await llm.invoke(promptValue);
console.log(raw.content);
  2. Check whether the failure is format-related or schema-related

    • If raw output looks like valid JSON but still fails, compare types against your Zod schema.
    • If raw output includes extra prose or code fences, it is a formatting issue.
  3. Reduce variables

    • Set temperature: 0.
    • Remove agent wrappers.
    • Use a tiny schema with two fields.
    • Re-run until you isolate whether it’s prompt, model behavior, or schema mismatch.
  4. Test parsing directly

    • Take the exact raw string and run it through your parser in isolation.
    • This tells you whether LangChain or your upstream chain step introduced the problem.
try {
  const parsed = await parser.parse(raw.content as string);
  console.log(parsed);
} catch (e) {
  console.error("Parse failed:", e);
}

Prevention

  • Use explicit format instructions every time you expect structured output.
  • Keep temperature at 0 for extraction tasks.
  • Prefer schemas that match real model behavior; do not over-constrain fields unless you need to.
  • If you need guaranteed structure, use tool/function calling instead of free-form text parsing.

By Cyprian Aarons, AI Consultant at Topiax.