# How to Fix 'output parsing error in production' in LangGraph (TypeScript)
If you're seeing an output parsing error in production in LangGraph, the graph itself is usually fine: the model output does not match the shape your node or parser expects. In TypeScript, this often shows up when an LLM node returns plain text but the next step expects structured JSON or a typed object.
The error typically appears after a deploy, when prompts drift, model versions change, or a schema becomes stricter than the actual output.
## The Most Common Cause
The #1 cause is this: you told LangGraph to expect structured output, but the model returned unstructured text or malformed JSON.
A common pattern is using `JsonOutputParser`, Zod validation, or a typed state update without forcing the model to emit valid JSON every time.
| Broken pattern | Fixed pattern |
|---|---|
| Model returns free-form text | Model is constrained to JSON |
| Parser fails on extra commentary | Output format is explicit |
| Node throws during state update | Node validates before returning |
```ts
// BROKEN: the model can return anything
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
const parser = new JsonOutputParser();

const prompt = `
Return a JSON object with:
{ "risk": "low" | "medium" | "high" }
`;

const res = await llm.invoke(prompt);
const parsed = await parser.parse(res.content as string);

// Runtime error in production:
// OutputParserException: Failed to parse. Text: "The risk is high."
```
```ts
// FIXED: force structured output and validate it
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

const RiskSchema = z.object({
  risk: z.enum(["low", "medium", "high"]),
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const structured = llm.withStructuredOutput(RiskSchema);

const result = await structured.invoke([
  { role: "user", content: "Classify this claim for fraud risk." },
]);

// result is typed and validated
console.log(result.risk);
```
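Even with structured output, a model can occasionally emit something the schema rejects. A common mitigation is to retry, feeding the validation error back into the next call. `invokeWithRetry` below is a hypothetical helper sketch, not a LangChain API; the generic shape works with any invoke/validate pair:

```ts
// Hypothetical helper: retry the model call, feeding the validation
// error back as a hint for the next attempt.
async function invokeWithRetry<T>(
  call: (errorHint?: string) => Promise<unknown>,
  validate: (raw: unknown) => T,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await call(lastError ? String(lastError) : undefined);
    try {
      return validate(raw); // throws if the output is malformed
    } catch (err) {
      lastError = err;
    }
  }
  throw new Error(`still malformed after ${maxAttempts} attempts: ${String(lastError)}`);
}
```

In practice, `call` would re-prompt the model with the error hint appended, and `validate` would be your Zod `parse`.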
If you are using LangGraph state updates directly, the same issue applies. A node like this will fail if it returns the wrong shape:
```ts
return { messages: [aiMessage] }; // wrong if your state expects parsed fields too
```
Make sure the node returns exactly what your graph state schema expects.
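As a dependency-free sketch of that contract (`State` and `classifyNode` are hypothetical names here, not LangGraph APIs), each node should return a partial update whose fields match the declared state types exactly:

```ts
// Hypothetical graph state; nodes return Partial<State> updates.
type State = {
  messages: string[];
  risk: "low" | "medium" | "high" | null;
};

// Returns only the fields it updates, in exactly the declared shape.
// Returning { risk: "very high" } or { messages: "classified" }
// would fail to compile.
function classifyNode(state: State): Partial<State> {
  return {
    risk: "high",
    messages: [...state.messages, "claim classified"],
  };
}
```

The same idea applies to LangGraph's own state definitions: the compiler catches shape drift only if every node is typed against the one shared state.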
## Other Possible Causes
### 1. Your prompt allows extra prose around JSON
The model may produce valid-looking JSON plus commentary. That breaks parsers immediately.
```ts
// Bad prompt
"Return JSON with customer_id and score."

// Better prompt
"Return ONLY valid JSON. No markdown. No explanation."
```
If you use function calling or structured output, prefer that over prompt-only formatting.
### 2. Your TypeScript state schema does not match runtime data
LangGraph may compile fine, but runtime data can still violate your assumptions.
```ts
type State = {
  customerId: string;
  score: number;
};

// Broken return value
return {
  customerId: 123, // should be string
  score: "high", // should be number
};
```
Fix it by validating at the boundary:

```ts
import { z } from "zod";

const StateSchema = z.object({
  customerId: z.string(),
  score: z.number(),
});

// Reject bad updates before they enter the graph
const check = StateSchema.safeParse(update);
if (!check.success) {
  throw new Error(`Invalid state update: ${check.error.message}`);
}
```
### 3. You are parsing the wrong field from the LLM response
In LangChain/LangGraph integrations, `response.content` may be a string, an array of content blocks, or a tool call payload, depending on the model and provider.
```ts
// Broken assumption
const text = response.content as string;

// Safer handling
const text =
  typeof response.content === "string"
    ? response.content
    : JSON.stringify(response.content);
```
If you're using OpenAI tool calls, check `response.additional_kwargs.tool_calls` instead of parsing `content`.
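The safer handling above can be extended into a small normalization helper that keeps only text blocks. This is a sketch assuming a simplified `{ type, text }` block shape rather than the exact provider types:

```ts
// Content can be a plain string or an array of typed blocks,
// depending on the provider.
type ContentBlock = { type: string; text?: string };
type MessageContent = string | ContentBlock[];

// Collect only text blocks; ignore tool-use, image, or other blocks.
function extractText(content: MessageContent): string {
  if (typeof content === "string") return content;
  return content
    .filter((block) => block.type === "text" && typeof block.text === "string")
    .map((block) => block.text)
    .join("\n");
}
```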
### 4. Tool/function output is not being returned in the expected format
A tool node can emit data that looks correct to humans but not to your downstream parser.
```ts
// Broken tool output
return { result: "approved" };

// Expected by downstream code
return { result: { decision: "approved" } };
```
Make sure every tool and node agrees on one contract. In LangGraph, inconsistent contracts between nodes are a top source of `OutputParserException` and "parsing error" issues.
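A cheap way to enforce that agreement is to define the contract once as a shared type, so both producer and consumer fail to compile when they drift apart. The names here are illustrative:

```ts
// One shared contract, defined once and used on both sides.
type Decision = { decision: "approved" | "rejected" };
type ToolResult = { result: Decision };

// Producer: the tool must satisfy ToolResult to compile.
function runTool(): ToolResult {
  return { result: { decision: "approved" } };
}

// Consumer: downstream code reads the same shape.
function downstream(payload: ToolResult): Decision["decision"] {
  return payload.result.decision;
}
```

If the tool later returns `{ result: "approved" }`, the type error surfaces at compile time instead of as a runtime parsing failure.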
## How to Debug It

- **Log raw model output before parsing.**
  - Print `response.content`, `additional_kwargs`, and any tool call payloads.
  - You want the exact string that triggered `OutputParserException`.
- **Check whether the failure happens in the LLM node or in the next graph node.**
  - If the error appears inside `parser.parse(...)`, it's an output formatting problem.
  - If it appears during state merging, it's usually a schema mismatch.
- **Temporarily remove parsing and inspect raw values.**
  - Return raw content from the node.
  - If raw output looks like `"Here is your answer..."`, your prompt is too loose.
  - If raw output is JSON but still fails, your parser/schema is stricter than you think.
- **Validate with Zod at every boundary.**
  - Add schema checks for LLM outputs and graph state.
  - This isolates whether the bad value comes from the model, a tool, or a reducer.
Example debug wrapper:

```ts
try {
  const result = await structured.invoke(messages);
  console.log("structured result:", result);
} catch (err) {
  console.error("LLM parse failed:", err);
  console.error("raw messages:", messages);
}
```
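While debugging, it can also help to check whether valid JSON is hiding inside markdown fences or surrounding prose. `salvageJson` is a hypothetical debugging aid, not a production substitute for structured output, and it assumes a single top-level object:

```ts
// Debugging aid: pull the first {...} span out of fenced or
// prose-wrapped output. Breaks if prose after the JSON contains "}",
// so treat it as a diagnostic, not a parser.
function salvageJson(raw: string): unknown {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("no JSON object found in model output");
  }
  return JSON.parse(raw.slice(start, end + 1));
}
```

If this recovers your data, the model output was fine and your prompt or parser was too loose about wrappers.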
## Prevention

- Use `withStructuredOutput()` or tool calling instead of free-form "return JSON" prompts.
- Set `temperature: 0` for nodes that must produce deterministic structured data.
- Validate every external boundary with Zod:
  - LLM response
  - tool output
  - graph state updates
If you’re building production LangGraph workflows in TypeScript, treat parsing as an interface contract problem, not an LLM problem. Once you lock down schemas and stop relying on prompt obedience alone, this class of error drops fast.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.