How to Fix 'output parsing error when scaling' in LangGraph (TypeScript)

By Cyprian Aarons · Updated 2026-04-21
Tags: output-parsing-error-when-scaling, langgraph, typescript

If you’re seeing an “output parsing error when scaling” in LangGraph, it usually means your graph is trying to turn an LLM response into a typed value, and the model returned something that does not match the expected schema. In TypeScript, this shows up a lot when you scale from a single happy-path prompt to multiple branches, retries, or structured outputs.

The failure is usually not “LangGraph is broken.” It’s almost always a mismatch between what your node returns and what the next step expects.

The Most Common Cause

The #1 cause is returning free-form text from a node that downstream code expects to be JSON or a typed object.

This happens a lot with StateGraph, Annotation.Root, ToolNode, or any node that feeds into structured parsing. The runtime then throws errors like:

  • OutputParserException: Could not parse LLM output
  • Error: Invalid update for channel
  • Error: Expected object with keys ... but got string

Broken vs fixed pattern

| Broken pattern | Fixed pattern |
| --- | --- |
| Node returns plain text | Node returns a structured object matching the state schema |
| Downstream parser expects JSON | Downstream parser receives valid JSON/object |
| LLM prompt says “answer naturally” | LLM prompt says “return valid JSON only” |
// ❌ Broken
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const State = Annotation.Root({
  result: Annotation<string>(),
});

const graph = new StateGraph(State)
  .addNode("generate", async () => {
    const msg = await llm.invoke("Summarize this claim in one sentence.");
    return { result: msg.content as string }; // free-form text
  })
  .addEdge("__start__", "generate")
  .addEdge("generate", "__end__")
  .compile();

// ❌ Later code assumes `result` holds JSON
const state = await graph.invoke({ result: "" });
const parsed = JSON.parse(state.result); // throws: result is free-form prose
// ✅ Fixed
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const State = Annotation.Root({
  result: Annotation<{ summary: string }>(),
});

const graph = new StateGraph(State)
  .addNode("generate", async () => {
    const msg = await llm.invoke(
      "Return ONLY valid JSON like {\"summary\":\"...\"} for this claim."
    );

    const parsed = JSON.parse(msg.content as string);
    return { result: parsed };
  })
  .addEdge("__start__", "generate")
  .addEdge("generate", "__end__")
  .compile();

If you’re using structured outputs, do not let the model “kind of” follow instructions. Make the contract explicit and validate it before returning from the node.
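As a minimal sketch of that validate-before-return step (parseSummary is a hypothetical helper, not a LangGraph API; the { summary: string } shape mirrors the fixed example above):

```typescript
// Hypothetical helper: validate the model's JSON before the node returns it.
type Summary = { summary: string };

function parseSummary(raw: string): Summary {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error(`Model returned non-JSON output: ${raw.slice(0, 80)}`);
  }
  const candidate = parsed as Summary;
  if (typeof candidate?.summary !== "string") {
    throw new Error("Model output did not match { summary: string }");
  }
  return candidate;
}
```

Inside the node you would then return { result: parseSummary(msg.content as string) }, so a schema mismatch fails loudly at the node boundary instead of surfacing as a confusing error downstream.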

Other Possible Causes

1) Your state annotation is too narrow

If your state says a field is string, but you return an object, LangGraph will reject it.

// ❌
const State = Annotation.Root({
  answer: Annotation<string>(),
});

return { answer: { text: "done" } }; // mismatch

// ✅
const State = Annotation.Root({
  answer: Annotation<{ text: string }>(),
});

return { answer: { text: "done" } };

2) A tool node returns non-serializable data

ToolNode and custom tools should return plain JSON-safe values. Returning Date, class instances, or circular objects can trigger parsing/serialization failures during scaling.

// ❌
return {
  now: new Date(),
  client: someAxiosInstance,
};

// ✅
return {
  now: new Date().toISOString(),
};

3) Branches return inconsistent shapes

When you scale with conditional edges, every branch must satisfy the same downstream contract.

// ❌ branch A returns string, branch B returns object
if (route === "A") return { result: "ok" };
if (route === "B") return { result: { value: "ok" } };

// ✅ normalize output shape
type Result = { value: string };
if (route === "A") return { result: { value: "ok" } as Result };
if (route === "B") return { result: { value: "ok" } as Result };
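One way to make this hard to get wrong is to funnel every branch’s return value through a small normalizer (toResult here is a hypothetical helper, not a LangGraph API):

```typescript
// Hypothetical normalizer: coerce either branch shape into one Result type.
type Result = { value: string };

function toResult(raw: string | Result): Result {
  return typeof raw === "string" ? { value: raw } : raw;
}

// Both branches now produce the same shape:
// if (route === "A") return { result: toResult("ok") };
// if (route === "B") return { result: toResult({ value: "ok" }) };
```

The type system then enforces the contract at compile time, and new branches added later cannot silently introduce a third shape.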

4) Your parser is too strict for real model output

If you use StructuredOutputParser, Zod, or manual JSON.parse, small formatting issues in the model output will cause hard failures.

// ❌ strict parse without guardrails
const data = JSON.parse(modelOutput);

// ✅ validate against a schema before trusting the output
import { z } from "zod";

const Schema = z.object({
  summary: z.string(),
});

const data = Schema.parse(JSON.parse(modelOutput));
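For the “recover” half, one of the most common real-world failure modes is the model wrapping its JSON in markdown code fences. A minimal pre-parse cleanup might look like this (extractJson is a hypothetical helper, not part of LangChain):

```typescript
// Hypothetical helper: strip a surrounding markdown code fence, then parse.
function extractJson(raw: string): unknown {
  const cleaned = raw
    .replace(/^\s*`{3}(?:json)?\s*/i, "") // leading fence marker, with or without a "json" tag
    .replace(/`{3}\s*$/, "") // trailing fence marker
    .trim();
  return JSON.parse(cleaned);
}
```

Pair this with Zod’s safeParse instead of parse if you want a failed validation to route to a retry node rather than crash the graph.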

How to Debug It

  1. Log the exact node output

    • Print what each node returns before it reaches the next edge.
    • Look for strings where objects are expected.
  2. Check the state schema first

    • Compare your Annotation.Root(...) types against actual return values.
    • If one branch returns {} and another returns { foo: "bar" }, fix that first.
  3. Isolate the failing node

    • Run nodes outside the graph with mocked input.
    • If the same parser fails in isolation, the issue is in your prompt or schema, not LangGraph.
  4. Inspect raw LLM content

    • Don’t trust msg.content blindly.
    • Log the raw response before parsing:
      console.log("RAW:", msg.content);
      
    • You’ll often see markdown fences, extra commentary, or invalid JSON.
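To make step 1 systematic rather than ad hoc, you can wrap nodes in a small logging decorator (withLogging is a hypothetical helper; LangGraph does not require or provide it):

```typescript
// Hypothetical wrapper: log exactly what a node returns before the next edge.
function withLogging<S, R>(
  name: string,
  node: (state: S) => Promise<R>
): (state: S) => Promise<R> {
  return async (state) => {
    const out = await node(state);
    console.log(`[${name}] returned:`, JSON.stringify(out));
    return out;
  };
}

// Usage sketch: .addNode("generate", withLogging("generate", generateNode))
```

Because the wrapper serializes the output with JSON.stringify, a string-where-object-was-expected mismatch is visible immediately in the logs.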

Prevention

  • Use explicit schemas for every structured hop in your graph.
  • Return only JSON-safe values from nodes and tools.
  • Add Zod validation at boundaries instead of parsing directly inside later nodes.
  • Keep branch outputs normalized so every path produces the same shape.

If you’re building agents that will run at scale, treat every edge in LangGraph like an API boundary. The moment one node starts returning “helpful” prose instead of typed data, you’ll eventually hit an output parsing failure under load.


By Cyprian Aarons, AI Consultant at Topiax.
