How to Fix 'output parsing error during development' in LangGraph (TypeScript)
When you see output parsing error during development in LangGraph TypeScript, it usually means one of your nodes returned data that does not match the schema or shape the graph expects. In practice, this shows up when you wire an LLM node into a typed graph and the model returns free-form text, malformed JSON, or a partial object.
The error often appears during local development because your test inputs are smaller, your prompts are looser, or your node output type is stricter than you think. The fix is usually not in LangGraph itself — it’s in how you define state, parse model output, and return values from nodes.
The Most Common Cause
The #1 cause is returning raw LLM text where LangGraph expects a structured object.
This happens a lot when you use ChatOpenAI or another model directly inside a node and then return response.content instead of a parsed object that matches your state schema. If your graph state says answer: string, but the node returns { answer: "..." } inconsistently or returns plain text when a downstream parser expects JSON, LangGraph throws parsing-related errors.
Broken vs fixed pattern
| Broken pattern | Fixed pattern |
|---|---|
| Returns raw text from the model | Parses and returns a typed object |
| No schema validation | Schema matches graph state |
| Downstream node assumes structured data | Downstream node receives exact shape |
// ❌ Broken
import { StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
const graph = new StateGraph({
channels: {
question: null,
answer: null,
},
});
graph.addNode("generate", async (state) => {
const response = await llm.invoke([
{ role: "system", content: "Answer the question." },
{ role: "user", content: state.question },
]);
// Returns raw string content, not a validated object
return { answer: response.content };
});
// ✅ Fixed
import { z } from "zod";
import { StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
const OutputSchema = z.object({
answer: z.string(),
});
type GraphState = {
question: string;
answer?: string;
};
const graph = new StateGraph<GraphState>({
channels: {
question: null,
answer: null,
},
});
graph.addNode("generate", async (state) => {
const response = await llm.invoke([
{ role: "system", content: "Answer the question in JSON with key answer." },
{ role: "user", content: state.question },
]);
const parsed = OutputSchema.parse(JSON.parse(response.content as string));
return { answer: parsed.answer };
});
If you want fewer parsing failures, do not rely on “the model will probably format this correctly.” In LangGraph, “probably” becomes runtime pain fast.
Other Possible Causes
1. Your state type and returned object do not match
If your node returns { result: "x" } but the graph expects { answer: "x" }, you will get downstream failures that look like parsing issues.
// Wrong
return { result: "42" };
// Right
return { answer: "42" };
If you are using Zod or another schema layer, keep the keys identical across:
- prompt instructions
- parser output
- graph state fields
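One way to enforce that beyond discipline: derive the node's return type from the graph state type, so a renamed key fails at compile time instead of as a runtime parsing error. A minimal sketch (GraphState and generate are illustrative names):

```typescript
// Typing the node's return value against the graph state turns a key
// mismatch ({ result: ... } instead of { answer: ... }) into a
// compile-time error instead of a runtime parsing failure.
type GraphState = {
  question: string;
  answer?: string;
};

const generate = (state: GraphState): Partial<GraphState> => {
  // return { result: "42" }; // does not compile: 'result' is not in GraphState
  return { answer: "42" };
};

console.log(generate({ question: "What is 6 x 7?" })); // { answer: "42" }
```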
2. The LLM returns invalid JSON
This is common when you ask for JSON but don’t enforce it. A trailing comma or markdown fence is enough to break parsing.
// Problematic prompt
"You must return JSON."
// Better prompt
"You must return valid JSON only. No markdown fences. No explanation."
For OpenAI-style models, prefer structured output helpers where available:
const llmWithSchema = llm.withStructuredOutput(
z.object({
answer: z.string(),
confidence: z.number(),
})
);
3. You are mixing message formats incorrectly
LangGraph nodes often pass messages around as arrays of BaseMessage. If one node returns a plain string while another expects messages, parsing fails later.
// Wrong
return "Here is the final answer";
// Right
return {
messages: [{ role: "assistant", content: "Here is the final answer" }],
};
If your graph uses message-based state, always return message objects in the expected format.
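For context, message-based state usually accumulates updates through a reducer that appends each node's returned messages rather than overwriting them. A simplified sketch of that append behavior, where Message is a stand-in for LangChain's BaseMessage and the real graph wires the concat as the channel's reducer:

```typescript
// Simplified sketch of how a messages channel accumulates node returns.
// Message stands in for LangChain's BaseMessage type.
type Message = { role: "system" | "user" | "assistant"; content: string };
type MessagesState = { messages: Message[] };

const appendMessages = (prev: Message[], update: Message[]): Message[] =>
  prev.concat(update);

const state: MessagesState = {
  messages: [{ role: "user", content: "What is the answer?" }],
};
// A node returns { messages: [...] }, and the reducer appends it:
const nodeReturn = {
  messages: [{ role: "assistant" as const, content: "Here is the final answer" }],
};
const next = appendMessages(state.messages, nodeReturn.messages);
console.log(next.length); // 2
```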
4. Your tool output is not serializable
Tool calls can also trigger parsing issues if they return classes, circular objects, or undefined.
// Wrong
return new Date();
// Right
return new Date().toISOString();
Keep tool outputs:
- JSON-serializable
- deterministic
- free of undefined
- free of class instances unless explicitly handled
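These rules matter because JSON serialization silently mangles or rejects exactly these values. A quick demonstration of what actually happens:

```typescript
// What JSON serialization does to common problem values.

// A Date instance is serialized as an ISO string; the class is lost.
console.log(JSON.stringify({ when: new Date("2024-01-01T00:00:00Z") }));
// {"when":"2024-01-01T00:00:00.000Z"}

// A key holding undefined silently disappears.
console.log(JSON.stringify({ value: undefined }));
// {}

// A circular structure throws outright.
const circular: { self?: unknown } = {};
circular.self = circular;
try {
  JSON.stringify(circular);
} catch {
  console.log("circular structures throw a TypeError");
}
```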
How to Debug It
1. Print the exact node output before returning it
- Log the raw object right before return.
- Check whether it matches your state shape exactly.
- Example: console.log("node output:", output);
2. Inspect the failing edge
- Determine which node feeds into the parser or structured consumer.
- If the failure happens after an LLM node, assume malformed output first.
- Remove downstream nodes temporarily until the error disappears.
3. Validate with Zod at every boundary
- Parse model output immediately after generation.
- Parse tool outputs before passing them forward.
- Example: const parsed = MySchema.safeParse(raw); if (!parsed.success) { throw new Error(parsed.error.message); }
4. Reduce to one node and one field
- Strip the graph down to a single input and single output.
- If it works there, add fields back one by one.
- This isolates whether the issue is prompt formatting, schema mismatch, or a bad tool response.
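Zod is the usual tool for these boundary checks, but the underlying idea is dependency-free. A minimal hand-rolled version of the same check (parseGenerateOutput is an illustrative name) mirrors what a safeParse call gives you:

```typescript
// Minimal boundary check, mirroring what a Zod safeParse does: verify the
// shape before the value crosses into the next node, and fail with a
// node-local error message instead of a vague downstream parsing error.
type GenerateOutput = { answer: string };

function parseGenerateOutput(raw: unknown): GenerateOutput {
  if (
    typeof raw === "object" &&
    raw !== null &&
    typeof (raw as Record<string, unknown>).answer === "string"
  ) {
    return { answer: (raw as { answer: string }).answer };
  }
  throw new Error(
    `generate node returned an unexpected shape: ${JSON.stringify(raw)}`
  );
}

console.log(parseGenerateOutput({ answer: "42" })); // { answer: "42" }
```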
Prevention
- Use structured outputs for anything beyond plain text.
- Define one source of truth for your schema with Zod and reuse it across prompts, parsers, and graph state.
- Return only serializable values from nodes and tools.
- Add runtime validation at every boundary where data changes shape.
If you are seeing output parsing error during development, treat it as a contract problem between nodes, not a LangGraph bug. Fix the contract first, then tighten prompts and schemas so the same failure cannot come back later.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.