How to Fix 'output parsing error during development' in LangChain (TypeScript)

By Cyprian Aarons. Updated 2026-04-21.

What this error means

An "output parsing error during development" in LangChain usually means the model returned text that does not match the schema your chain expects. In TypeScript, this shows up most often when you use StructuredOutputParser, JsonOutputParser, or an agent with a strict tool/output format. (PydanticOutputParser is Python-only; the TypeScript equivalent is StructuredOutputParser with a Zod schema.)

You’ll typically hit it when the LLM adds extra prose, returns invalid JSON, misses a required field, or ignores the format instructions entirely.

The Most Common Cause

The #1 cause is simple: your prompt does not strongly enforce the output format, so the model returns human-friendly text instead of machine-readable structure.

Here’s the broken pattern I see most often with StructuredOutputParser.

Broken: the parser is created, but its format instructions are never injected into the prompt.
Fixed: the parser's format instructions are embedded in the prompt, steering the model toward valid output.
// ❌ Broken
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { z } from "zod";

const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    name: z.string(),
    age: z.number(),
  })
);

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const prompt = `Extract name and age from this text:
John is 34 years old.`;

const res = await llm.invoke(prompt);
const parsed = await parser.parse(res.content as string);
// ✅ Fixed
import { ChatOpenAI } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    name: z.string(),
    age: z.number(),
  })
);

const formatInstructions = parser.getFormatInstructions();

const prompt = PromptTemplate.fromTemplate(`
Extract name and age from this text.

{format_instructions}

Text:
{input}
`);

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const formattedPrompt = await prompt.format({
  input: "John is 34 years old.",
  format_instructions: formatInstructions,
});

const res = await llm.invoke(formattedPrompt);
const parsed = await parser.parse(res.content as string);

The key difference is that the model now sees explicit format instructions. Without them, you’re asking for structured data but not actually constraining the response.
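You can catch this bug before the model is ever involved by formatting the prompt and asserting that the instructions actually landed in it. A minimal sketch using plain string templating (no LangChain), where `formatInstructions` is a stand-in for what `parser.getFormatInstructions()` would return:

```typescript
// Stand-in for parser.getFormatInstructions().
const formatInstructions =
  "Respond with a JSON object matching this schema: " +
  '{"name": "string", "age": "number"}';

function buildPrompt(input: string): string {
  return [
    "Extract name and age from this text.",
    "",
    formatInstructions,
    "",
    "Text:",
    input,
  ].join("\n");
}

const prompt = buildPrompt("John is 34 years old.");

// Guard against the most common bug: a prompt that never mentions the format.
if (!prompt.includes(formatInstructions)) {
  throw new Error("Format instructions missing from prompt");
}
```

The same check works with a real `PromptTemplate`: call `prompt.format(...)`, then assert the instructions appear in the result before invoking the model.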

Other Possible Causes

1. The model adds extra text around JSON

This is common when you ask for JSON but don’t use a parser that tolerates non-JSON wrapper text.

// ❌ Broken
const prompt = `Return JSON only:
Name: Alice, Role: Analyst`;


// Model responds:
// Sure — here is the JSON:
// {"name":"Alice","role":"Analyst"}

Fix it by using stronger instructions or a parser that extracts fenced JSON only.

// ✅ Better
const prompt = `
Return ONLY valid JSON with keys name and role.
No markdown, no explanation.

Text:
Name: Alice, Role: Analyst
`;
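If you cannot fully stop the model from adding wrapper prose, a tolerant fallback is to extract the first JSON object from the response yourself before parsing. This helper is an assumption of mine, not a LangChain API, but the pattern is common:

```typescript
// Pull the first top-level JSON object out of a response that may contain
// surrounding prose or a ```json fence.
function extractJson(text: string): unknown {
  // Strip a markdown fence if present.
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = fenced ? fenced[1] : text;
  // Take the outermost {...} span and try to parse it.
  const start = candidate.indexOf("{");
  const end = candidate.lastIndexOf("}");
  if (start === -1 || end === -1 || end < start) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(candidate.slice(start, end + 1));
}

const raw = 'Sure, here is the JSON:\n{"name":"Alice","role":"Analyst"}';
const parsed = extractJson(raw) as { name: string; role: string };
// parsed.name === "Alice"
```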

2. Schema mismatch between what you asked for and what you validate

If your Zod schema says age: number, but the model returns "34" as a string, parsing fails.

const schema = z.object({
  age: z.number(), // strict
});

// Model returns:
// { "age": "34" }

Fix by either tightening the prompt or accepting coercion if that fits your use case.

const schema = z.object({
  age: z.coerce.number(),
});
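If you are not using Zod, the same repair can be done by hand before validation. A minimal sketch of what `z.coerce.number()` does for this case:

```typescript
// Hand-rolled equivalent of z.coerce.number() for this case: accept "34"
// as well as 34, but reject values that are not numeric at all.
function coerceNumber(value: unknown): number {
  const n = typeof value === "string" ? Number(value.trim()) : value;
  if (typeof n !== "number" || Number.isNaN(n)) {
    throw new Error(`Expected a number, got: ${JSON.stringify(value)}`);
  }
  return n;
}

// The model returned age as a string; coercion repairs it.
const payload = JSON.parse('{"age":"34"}') as { age: unknown };
const age = coerceNumber(payload.age); // 34
```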

3. You are parsing an agent response instead of tool output

With agents, AgentExecutor responses are not always directly parseable. If you attach an output parser to the wrong layer, you’ll get errors like:

  • Failed to parse output
  • Could not parse LLM output
  • OutputParserException

Use parsers on deterministic steps, not on final agent chatter unless you fully control the agent response shape.

// ❌ Don't assume agent final text is strict JSON
const result = await agentExecutor.invoke({ input: "Summarize this case" });
await parser.parse(result.output);
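A defensive pattern is to treat agent output as untrusted text: attempt the structured parse, and keep the raw string as a fallback instead of crashing the chain. A sketch with `JSON.parse` standing in for your real parser and `agentOutput` standing in for `result.output`:

```typescript
// Treat agent output as untrusted: attempt structured parsing, but keep the
// raw text as a fallback instead of throwing.
type ParseOutcome =
  | { ok: true; value: { name: string; role: string } }
  | { ok: false; raw: string };

function tryParseAgentOutput(text: string): ParseOutcome {
  try {
    const value = JSON.parse(text);
    if (typeof value?.name === "string" && typeof value?.role === "string") {
      return { ok: true, value };
    }
    return { ok: false, raw: text };
  } catch {
    return { ok: false, raw: text };
  }
}

// Agents often answer in prose, not JSON; handle both branches downstream.
const agentOutput = "The case is about Alice, an analyst.";
const outcome = tryParseAgentOutput(agentOutput);
// outcome.ok === false here; the caller decides what to do with the raw text
```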

4. Streaming or truncation cut off valid structure

If generation stops early because of token limits, your JSON will be incomplete and parsing will fail.

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  maxTokens: 80, // too low for long structured output
});

Raise token limits or reduce output size. If you see half-written JSON like:

{"name":"Alice","role":

that’s truncation, not a parser bug.
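You can detect truncation cheaply before blaming the parser: incomplete JSON fails `JSON.parse`, and unbalanced braces are a strong hint that generation was cut off. A rough heuristic (it can be fooled by braces inside strings, so treat it as a debugging aid):

```typescript
// Heuristic: JSON that fails to parse AND has more opening than closing
// braces was most likely truncated by a token limit.
function looksTruncated(text: string): boolean {
  try {
    JSON.parse(text);
    return false; // parses fine, not truncated
  } catch {
    let depth = 0;
    for (const ch of text) {
      if (ch === "{") depth++;
      if (ch === "}") depth--;
    }
    return depth > 0; // cut off mid-object
  }
}

looksTruncated('{"name":"Alice","role":'); // true
looksTruncated('{"name":"Alice"}');        // false
```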

How to Debug It

  1. Log raw model output before parsing

    • Don’t inspect only the exception.
    • Print res.content and confirm whether it is valid JSON or just natural language.
  2. Check whether format instructions are actually in the prompt

    • If you use StructuredOutputParser, verify parser.getFormatInstructions() is included.
    • In many cases, the bug is just a missing placeholder in PromptTemplate.
  3. Compare schema vs returned fields

    • Look for type mismatches:
      • string vs number
      • missing required keys
      • nested object shape differences
    • If needed, temporarily relax validation with Zod coercion to isolate formatting issues.
  4. Reduce complexity until it parses

    • Remove agents, tools, memory, and retries.
    • Test with a single prompt + single parser.
    • Once it works, reintroduce components one at a time.
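Steps 1 and 4 combine naturally into a tiny harness: log the raw text, then attempt the parse, so every failure arrives with the exact output that caused it. A sketch with `JSON.parse` standing in for your real parser:

```typescript
// Always capture the raw model output alongside the parse failure, so you
// see what the model actually returned instead of only the exception.
function parseWithLogging<T>(raw: string, parse: (s: string) => T): T {
  console.log("RAW MODEL OUTPUT >>>", raw, "<<<");
  try {
    return parse(raw);
  } catch (err) {
    // Re-throw with a sample of the raw output attached for debugging.
    throw new Error(
      `Parse failed on: ${raw.slice(0, 200)}\nOriginal error: ${String(err)}`
    );
  }
}

const result = parseWithLogging('{"name":"John","age":34}', (s) =>
  JSON.parse(s)
);
// result.age === 34
```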

Prevention

  • Use explicit structured output prompts every time you expect machine-readable data.
  • Prefer Zod schemas with coercion only where it makes sense; don’t let everything become any.
  • Keep temperature at 0 for extraction flows where formatting matters more than creativity.
  • Treat agents as orchestration layers, not as guaranteed structured-output producers unless you’ve built strict constraints around them.

If you want stable parsing in production, design for failure up front:

  • validate raw output,
  • retry once with stricter instructions,
  • and fall back to a repair step only when necessary.
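That validate-then-retry flow can be sketched as a plain function. `callModel` here is a hypothetical stand-in for your actual LLM invocation; the mock below returns prose on the first call and valid JSON on the stricter retry:

```typescript
// Sketch of the retry flow: one normal attempt, one retry with stricter
// instructions, then give up. `callModel` is a hypothetical stand-in for
// your real LLM call.
async function parseWithRetry(
  callModel: (prompt: string) => Promise<string>,
  basePrompt: string
): Promise<unknown> {
  const stricter =
    basePrompt + "\n\nReturn ONLY valid JSON. No prose, no markdown.";
  for (const prompt of [basePrompt, stricter]) {
    const raw = await callModel(prompt);
    try {
      return JSON.parse(raw);
    } catch {
      // fall through to the stricter retry
    }
  }
  throw new Error("Model never produced valid JSON after retry");
}

// Mock model: prose on the first call, valid JSON on the stricter retry.
let calls = 0;
const mockModel = async (_prompt: string) =>
  ++calls === 1 ? "Here you go!" : '{"ok":true}';

parseWithRetry(mockModel, "Extract the fields.").then((parsed) => {
  console.log(parsed); // succeeds on the second, stricter attempt
});
```

LangChain also ships an OutputFixingParser that wraps this idea, passing the bad output back to the model for repair; the hand-rolled version above just makes the control flow explicit.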

By Cyprian Aarons, AI Consultant at Topiax.