How to Fix 'output parsing error' in LangGraph (TypeScript)
When LangGraph throws an output parsing error, it usually means a node returned data that does not match the schema or shape the graph expects. In TypeScript, this most often shows up when a model response is being parsed into a structured output, but the LLM returns extra text, invalid JSON, or the wrong field names.
You’ll typically hit this after adding Zod-based validation, `StructuredOutputParser`, or a reducer expecting a specific state shape. The stack trace often points at `OutputParserException`, `ZodError`, or a failed `RunnableSequence` inside a graph node.
The Most Common Cause
The #1 cause is this: your LLM node returns free-form text, but your graph expects structured state.
In LangGraph, nodes should return plain state updates, not raw assistant messages unless your state is designed for that. If you ask the model for JSON and then try to parse it manually, one stray sentence is enough to trigger:
- `OutputParserException: Failed to parse output`
- `ZodError: Expected object, received string`
- `InvalidUpdateError: Expected object with keys ...`
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Returns unstructured text from the model | Returns a typed object that matches graph state |
| Parses model output manually | Uses a structured output schema |
| Lets the LLM invent formatting | Forces schema-constrained output |
```typescript
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const State = Annotation.Root({
  summary: Annotation<string>(),
});

const SummarySchema = z.object({
  summary: z.string(),
});

// ❌ Broken
async function summarizeNode() {
  const response = await llm.invoke([
    ["system", "Return JSON with a summary field."],
    ["user", "Summarize this incident report..."],
  ]);
  // Fragile: response.content may include markdown fences or extra prose,
  // so JSON.parse throws on anything that isn't bare JSON
  const parsed = JSON.parse(response.content as string);
  return { summary: parsed.summary };
}

// ✅ Fixed
async function summarizeNodeFixed() {
  // withStructuredOutput constrains the model to the Zod schema,
  // so the result is already a typed object
  const structuredLlm = llm.withStructuredOutput(SummarySchema);
  const result = await structuredLlm.invoke([
    ["system", "You summarize incident reports."],
    ["user", "Summarize this incident report..."],
  ]);
  return { summary: result.summary };
}
```
The important part is that the fixed version makes the model produce data in the shape your code expects. That removes the fragile JSON.parse() step and avoids parser failures caused by formatting noise.
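To see the fixed node in context, here is a minimal sketch that wires `summarizeNodeFixed` into a one-node graph and runs it. It reuses the `State` annotation from the snippet above; `START` and `END` are the entry/exit markers exported by `@langchain/langgraph`.

```typescript
import { StateGraph, START, END } from "@langchain/langgraph";

// Sketch: compile the fixed node into a one-node graph and invoke it
const graph = new StateGraph(State)
  .addNode("summarize", summarizeNodeFixed)
  .addEdge(START, "summarize")
  .addEdge("summarize", END)
  .compile();

const finalState = await graph.invoke({ summary: "" });
console.log(finalState.summary); // a plain string, no parsing step involved
```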
Other Possible Causes
1) Your node returns the wrong state shape
LangGraph expects node outputs to merge cleanly into state. If your state says `{ messages: Message[] }` but you return `{ message: ... }`, you’ll get update or parsing errors.
```typescript
// ❌ Broken: key doesn't exist in state
return { message: "done" };

// ✅ Fixed: key matches the state annotation exactly
return { messages: [{ role: "assistant", content: "done" }] };
```
If you are using reducers or annotations, make sure the returned keys match exactly.
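One way to catch this class of bug at compile time instead of runtime is to type each node's return value against the graph's own state type. This is a sketch of that pattern, not something LangGraph requires, assuming a messages-style state:

```typescript
import { Annotation } from "@langchain/langgraph";
import { AIMessage, BaseMessage } from "@langchain/core/messages";

const ChatState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

// Typing the return as Partial<typeof ChatState.State> makes
// `return { message: ... }` a compile-time error instead of a runtime one
function doneNode(): Partial<typeof ChatState.State> {
  return { messages: [new AIMessage("done")] };
}
```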
2) Zod schema is stricter than your prompt
A common failure looks like this:
- Schema expects `priority` to be `"low" | "medium" | "high"`
- Model returns `"urgent"`

That turns into a `ZodError` during parsing.
```typescript
const TicketSchema = z.object({
  priority: z.enum(["low", "medium", "high"]),
});

// ❌ Model may return "urgent"
const badPrompt = "Return priority as urgent if serious.";

// ✅ Align prompt with schema
const goodPrompt = "Return priority as one of low, medium, high only.";
```
If you use enums, optional fields, or nested objects, keep the prompt aligned with exact schema constraints.
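Prompt alignment helps, but the stronger fix is to let the schema do the enforcement. A short sketch, reusing `llm` and `TicketSchema` from above:

```typescript
// The enum is enforced by the structured-output schema,
// not just requested in prose
const ticketLlm = llm.withStructuredOutput(TicketSchema);
const ticket = await ticketLlm.invoke(
  "Classify this ticket. Use exactly one of: low, medium, high."
);
// ticket.priority is typed as "low" | "medium" | "high"
```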
3) You are parsing assistant text instead of tool output
If your node calls tools and then tries to parse natural language from the assistant message, you can get malformed output. Tool calls are structured; plain chat content is not.
```typescript
// ❌ Broken: parsing natural-language content as JSON
const msg = await llm.invoke(prompt);
const data = JSON.parse(msg.content as string);

// ✅ Fixed: bind tools and read structured tool calls
const toolBound = llm.bindTools([myTool]);
const toolMsg = await toolBound.invoke(prompt);
// Inspect toolMsg.tool_calls instead of parsing content blindly
```
This matters in agent workflows where tool calls are mixed with final responses.
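For completeness, `myTool` above was left undefined. A hypothetical definition using the `tool()` helper from `@langchain/core/tools` (the name, description, and schema here are illustrative):

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical tool: the args arrive as a parsed, schema-validated object,
// so there is nothing to JSON.parse on the way out
const myTool = tool(
  async ({ query }) => `Results for ${query}`,
  {
    name: "search_incidents",
    description: "Search incident records by keyword.",
    schema: z.object({ query: z.string() }),
  }
);
```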
4) Your graph reducer cannot merge arrays/objects correctly
If two nodes update the same key and your reducer is missing or wrong, LangGraph may fail while applying updates.
```typescript
const State = Annotation.Root({
  messages: Annotation<any[]>({
    // The reducer merges updates from multiple nodes instead of overwriting
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});
// ❌ Without a reducer, multiple nodes writing `messages` will conflict
```
Without the right reducer, concurrent updates can produce inconsistent state and downstream parser failures.
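To illustrate why the reducer matters, here is a sketch of two nodes that run in parallel and both append to `messages`. With the reducer above, both updates merge; without it, LangGraph has no way to combine them:

```typescript
import { StateGraph, START, END } from "@langchain/langgraph";

// Sketch: fan-out where two nodes write the same key concurrently
const parallelGraph = new StateGraph(State)
  .addNode("a", () => ({ messages: [{ role: "assistant", content: "from a" }] }))
  .addNode("b", () => ({ messages: [{ role: "assistant", content: "from b" }] }))
  .addEdge(START, "a")
  .addEdge(START, "b")
  .addEdge("a", END)
  .addEdge("b", END)
  .compile();

// With the concat reducer, the final state contains both messages
const out = await parallelGraph.invoke({});
```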
How to Debug It
- Check the exact exception type.
  - If you see `OutputParserException`, focus on model output format.
  - If you see `ZodError`, focus on schema mismatch.
  - If you see `InvalidUpdateError`, focus on graph state shape.
- Log raw node output before parsing.
  - Print `response.content`, `tool_calls`, and the final object returned by each node.
  - Compare that against your expected TypeScript type and Zod schema.
- Remove parsing layers one by one.
  - Temporarily skip `JSON.parse()`.
  - Temporarily remove `.withStructuredOutput()`.
  - Temporarily return a hardcoded valid object from the node.
  - If hardcoded output works, your issue is upstream in prompting or model formatting.
- Validate every node return.
  - Add a runtime check before returning (see the sketch after this list): `const parsed = MySchema.safeParse(result); if (!parsed.success) throw new Error(parsed.error.message); return parsed.data;`
  - This tells you whether the failure happens inside LangGraph or before it reaches LangGraph.
Prevention
- Use `.withStructuredOutput()` or tool calling instead of asking for “JSON” in plain English.
- Keep your Zod schemas and prompts in sync; if the schema changes, update examples and instructions immediately.
- Return typed state objects from every node and define reducers for any shared array/object fields.
- Add integration tests that run one full graph path with mocked LLM responses (a sketch follows this list) that include:
  - valid JSON
  - malformed JSON
  - extra prose around JSON
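A sketch of what those mocked responses can look like, assuming `FakeListChatModel` from `@langchain/core/utils/testing`, which replays canned responses in order:

```typescript
import { FakeListChatModel } from "@langchain/core/utils/testing";

// Sketch: canned replies covering the three cases above
const fakeLlm = new FakeListChatModel({
  responses: [
    '{"summary": "ok"}',                          // valid JSON
    '{"summary": "ok"',                           // malformed JSON
    'Sure! Here is the JSON: {"summary": "ok"}',  // extra prose around JSON
  ],
});
// Swap fakeLlm in for the real model and assert that each graph run
// either succeeds or fails with a clear, expected error
```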
If you’re seeing an output parsing error in LangGraph with TypeScript, don’t start by blaming LangGraph. In most cases the problem is one of three things: unstructured model output, a schema mismatch, or an invalid graph update shape. Fix those first and the error usually disappears quickly.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.