How to Fix 'output parsing error in production' in LangChain (TypeScript)

By Cyprian Aarons. Updated 2026-04-21.

When LangChain throws OutputParserException: Could not parse LLM output, or a production log says output parsing error, it usually means the model returned text that does not match the structure your parser expects. In TypeScript, this shows up most often when you use StructuredOutputParser, JsonOutputParser, or a JSON-based chain and the model adds extra prose, emits malformed JSON, or omits required fields.

This is rarely a LangChain bug. It’s usually a prompt, schema, or model-output mismatch.

The Most Common Cause

The #1 cause is asking the model for structured output, then not forcing it hard enough to return only that structure.

A common broken pattern is: you tell the model to “return JSON,” but your prompt still leaves room for explanation. The parser then receives something like Sure, here’s the JSON: and fails.

Broken pattern: a loose prompt while the parser expects strict JSON.
Fixed pattern: strict format instructions plus an explicit "JSON only" constraint.
// BROKEN
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "langchain/output_parsers";
import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
});

const parser = StructuredOutputParser.fromZodSchema(schema);

const prompt = `
Extract the user's details and return JSON.
User: My name is Alice and I'm 32.
`;

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const result = await llm.invoke(prompt);
const parsed = await parser.parse(result.content as string);
// FIXED
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "langchain/output_parsers";
import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
});

const parser = StructuredOutputParser.fromZodSchema(schema);

const formatInstructions = parser.getFormatInstructions();

const prompt = `
You are an extraction engine.

${formatInstructions}

Return only valid JSON that matches the schema.
No markdown. No explanation.

User: My name is Alice and I'm 32.
`;

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const result = await llm.invoke(prompt);
const parsed = await parser.parse(result.content as string);

The important fix is not just “use JSON.” It’s to inject the parser’s own format instructions into the prompt and remove any ambiguity. If you skip that, the model will often prepend text, wrap JSON in markdown fences, or omit required keys.

Other Possible Causes

1) Model returns markdown fences

Some models love wrapping structured output in triple backticks. That breaks strict parsers unless you strip it first.

// Example output that breaks parsing:
// ```json
// {"name":"Alice","age":32}
// ```

const raw = result.content as string;

If your chain does not clean this up, JSON.parse() or LangChain’s parser can fail with:

  • OutputParserException
  • Unexpected token ` in JSON at position 0

Fix by either tightening the prompt or adding a cleanup step before parsing.
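A cleanup step can be a small pure function that runs before the parser. This is a minimal sketch, assuming the only wrapper you need to remove is a leading/trailing markdown fence; extend the regex if your model emits other decorations.

```typescript
// Strip an optional markdown code fence (with or without a "json" tag)
// from the start and end of the model output before parsing.
function stripCodeFences(raw: string): string {
  return raw
    .replace(/^\s*```(?:json)?\s*/i, "")
    .replace(/\s*```\s*$/, "")
    .trim();
}

const cleaned = stripCodeFences('```json\n{"name":"Alice","age":32}\n```');
// JSON.parse(cleaned) now succeeds, because the fences are gone.
```

Run this on the raw string before handing it to JSON.parse() or a LangChain parser; output that never had fences passes through unchanged.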

2) Schema mismatch

Your parser expects one type, but the model returns another.

// Parser expects age as number
const schema = z.object({
  name: z.string(),
  age: z.number(),
});

// Model returns:
// {"name":"Alice","age":"32"}

That will fail validation even though it looks close. In production, this happens a lot when fields come back as strings instead of numbers, booleans as "true", or arrays as comma-separated text.

3) Wrong message content passed to parser

With chat models, it's easy to pass the full AIMessage object straight into a string parser by mistake.


// BROKEN
const aiMessage = await llm.invoke(prompt);
await parser.parse(aiMessage); // wrong type

// FIXED
await parser.parse(aiMessage.content as string);

LangChain parsers generally expect plain text content. If you feed them objects, tool-call payloads, or metadata wrappers, parsing can fail before validation even starts.

4) Tool calling vs text parsing confusion

If you’re using OpenAI tool calling or LangChain agent tools, don’t also parse the assistant text as if it were raw JSON unless that’s really what you configured.

// If you're using tools:
const llmWithTools = llm.bindTools([myTool]);

// Don't assume assistant content contains your final JSON payload.
// Inspect tool calls instead of parsing plain text.

This is a common source of production errors because tool-call responses are not meant to be consumed by StructuredOutputParser directly.
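The structured payload lives on the message's tool calls, not in its text. This sketch uses a minimal local shape mirroring the tool_calls field on recent @langchain/core AIMessage versions (an assumption; check your installed version), so the idea is clear without a live API call.

```typescript
// Minimal local shapes standing in for an AIMessage from a
// tool-calling model (assumed field names: content, tool_calls).
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

interface AssistantMessage {
  content: string;
  tool_calls?: ToolCall[];
}

// Read the structured arguments from the first tool call;
// never JSON.parse the assistant's plain-text content here.
function extractToolArgs(msg: AssistantMessage): Record<string, unknown> | null {
  const call = msg.tool_calls?.[0];
  return call ? call.args : null;
}
```

If extractToolArgs returns null, the model answered in plain text instead of calling your tool, which is a separate failure mode from malformed JSON.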

How to Debug It

  1. Log the raw model output before parsing

    • Print result.content exactly as received.
    • Look for markdown fences, extra prose, truncated JSON, or wrong field names.
  2. Compare raw output against your schema

    • Check whether every required field exists.
    • Check types carefully: "42" vs 42, "false" vs false, arrays vs strings.
  3. Confirm which parser you are using

    • StructuredOutputParser
    • JsonOutputParser
    • Zod-based validation through LangChain helpers
    • Tool-calling output handling

    Each has different expectations. Don’t debug them like they’re interchangeable.

  4. Temporarily simplify the prompt

    • Remove all business logic.
    • Ask for one tiny object:
      Return only:
      {"name":"string"}
      
    • If that works, reintroduce fields until it breaks again.
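Step 1 above is worth making permanent. This is a small wrapper sketch, assuming your parser exposes a parse(text) method (as LangChain parsers do); it logs the exact raw string whenever parsing fails, then rethrows.

```typescript
// Wrap any parse(text) call so failures log the exact raw string
// that broke it, then rethrow the original error.
async function parseWithLogging<T>(
  parser: { parse: (text: string) => Promise<T> | T },
  raw: string
): Promise<T> {
  try {
    return await parser.parse(raw);
  } catch (err) {
    // JSON.stringify makes fences, newlines, and stray characters visible.
    console.error("Parse failed. Raw model output:", JSON.stringify(raw));
    throw err;
  }
}
```

With this in place, an intermittent production failure leaves behind the one artifact you need: the verbatim output that did not match the contract.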

Prevention

  • Use the parser’s format instructions in every structured-output prompt.
  • Keep temperature: 0 for extraction chains in production.
  • Add a post-processing guard:
    • strip code fences
    • reject non-JSON early
    • validate with Zod before downstream use

If this error appears intermittently in production, treat it as an output-contract problem, not a runtime fluke. The fix is usually to make the contract stricter than whatever you wrote in plain English.



By Cyprian Aarons, AI Consultant at Topiax.
