How to Fix 'output parsing error when scaling' in LangGraph (Python)
What the error means
An "output parsing error when scaling" in LangGraph usually means one of your nodes returned data that does not match the schema LangGraph expected during graph execution. It shows up most often when you scale from a single happy-path run to parallel branches, conditional routing, or state updates from multiple nodes.
The failure is rarely “scaling” itself. It is usually a state shape problem, a reducer problem, or a node returning the wrong type under concurrent execution.
The Most Common Cause
The #1 cause is returning a full state object or the wrong field type from a node when LangGraph expects a partial update. This gets worse when multiple branches write to the same key and your state schema does not define how to merge them.
A common runtime symptom looks like this:
- `langgraph.errors.InvalidUpdateError: Expected dict, got ...`
- `langchain_core.exceptions.OutputParserException`
- `ValueError: Invalid input to chain`
- `TypeError: 'str' object is not iterable`
Broken vs fixed pattern
| Broken pattern | Fixed pattern |
|---|---|
| Node returns a full object or wrong type | Node returns only the partial state update |
| Shared list field has no reducer | State field has an explicit merge strategy |
| Parallel writes collide | Writes are merged deterministically |
```python
from typing import TypedDict, Annotated
import operator

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    messages: Annotated[list[str], operator.add]
    result: str

# ❌ Broken: returns a string instead of a dict update
def summarize_bad(state: State):
    return "summary text"

# ✅ Fixed: returns only the partial state update
def summarize_good(state: State):
    return {"result": "summary text"}

# ❌ Broken in fan-out/fan-in if messages has no reducer
def tool_node_bad(state: State):
    return {"messages": ["tool output"]}

# ✅ Fixed: reducer defined above with Annotated[list[str], operator.add]
def tool_node_good(state: State):
    return {"messages": ["tool output"]}
```
If you are using `MessagesState`, the same rule applies: each node should return a dict with only the fields it changes.
```python
from langgraph.graph import MessagesState

def agent_node(state: MessagesState):
    # bad:
    # return ai_message
    # good:
    return {"messages": [ai_message]}
```
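To build intuition for what LangGraph expects, here is a pure-Python sketch of the merge rule: each node's returned dict is applied as a partial update, and annotated keys are combined with their reducer. `apply_update` and `REDUCERS` are illustrative stand-ins for this article, not LangGraph internals:

```python
import operator

# Reducers per state key: annotated keys get operator.add, the rest overwrite.
REDUCERS = {"messages": operator.add}

def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update into state, mimicking LangGraph's rule."""
    if not isinstance(update, dict):
        raise TypeError(f"Expected dict update, got {type(update).__name__}")
    merged = dict(state)
    for key, value in update.items():
        reducer = REDUCERS.get(key)
        merged[key] = reducer(merged.get(key, []), value) if reducer else value
    return merged

state = {"messages": ["hi"], "result": ""}
state = apply_update(state, {"messages": ["tool output"]})  # appended via reducer
state = apply_update(state, {"result": "done"})             # overwritten
print(state)  # {'messages': ['hi', 'tool output'], 'result': 'done'}
```

Returning a bare string instead of a dict fails the `isinstance` check immediately, which is exactly the class of failure `InvalidUpdateError` reports.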
Other Possible Causes
1) Missing reducer on a shared state key
If two branches write to the same key and that key is typed as a plain `list` or `dict`, LangGraph cannot safely merge the concurrent writes.
```python
from typing import TypedDict

class State(TypedDict):
    logs: list[str]  # ❌ no reducer
```

The fix:

```python
from typing import Annotated, TypedDict
import operator

class State(TypedDict):
    logs: Annotated[list[str], operator.add]  # ✅ merge strategy
```
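A reducer is just a binary function applied to the existing value and the incoming write, so you can check the merge behavior with plain Python (an illustration of the concept, not LangGraph internals):

```python
import operator

existing = ["branch A wrote this"]
incoming = ["branch B wrote this"]

# Without a reducer, the last write wins and branch A's data is lost:
overwritten = incoming

# With operator.add as the reducer, both writes survive, in order:
merged = operator.add(existing, incoming)

print(overwritten)  # ['branch B wrote this']
print(merged)       # ['branch A wrote this', 'branch B wrote this']
```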
2) A node returns None
This happens when you forget an explicit return path inside conditional logic.
```python
def classify(state):
    if state["score"] > 0.8:
        return {"route": "approve"}
    # ❌ implicit None here causes a runtime failure

def classify_fixed(state):
    if state["score"] > 0.8:
        return {"route": "approve"}
    return {"route": "review"}
```
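One way to catch implicit `None` returns early is a small guard decorator that rejects non-dict node outputs before they ever reach the graph. `require_dict` is a hypothetical helper sketched for this article, not part of LangGraph:

```python
from functools import wraps

def require_dict(node):
    """Fail fast if a node returns anything other than a dict update."""
    @wraps(node)
    def wrapper(state):
        out = node(state)
        if not isinstance(out, dict):
            raise TypeError(
                f"{node.__name__} returned {type(out).__name__}, expected dict"
            )
        return out
    return wrapper

@require_dict
def classify(state):
    if state["score"] > 0.8:
        return {"route": "approve"}
    # the implicit None on this path is now caught immediately

print(classify({"score": 0.9}))  # {'route': 'approve'}
```

The advantage is that the traceback names the offending node directly, instead of surfacing later as a confusing merge error.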
3) Output parser mismatch from an LLM chain inside a node
If your node wraps an LLM with structured output parsing, the parser can fail before LangGraph can merge state.
Typical errors:
- `langchain_core.exceptions.OutputParserException`
- `Could not parse LLM output`
- `Expected JSON object`
```python
# ❌ model returns free text but the parser expects JSON
def profile_node_bad(state):
    result = llm.invoke("Give me JSON with keys name and score")
    return {"profile": result}  # may be unparsed free text

# ✅ enforce structure at the model boundary
def profile_node_good(state):
    structured = llm.with_structured_output(MySchema)
    result = structured.invoke(prompt)
    return {"profile": result}
```
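If you cannot use `with_structured_output`, a defensive fallback is to parse and validate the raw model text yourself before writing it into state, so a malformed response fails inside the node with a clear message. This stdlib-only sketch assumes the node expects JSON with `name` and `score` keys; `parse_profile` is an illustrative helper, not a library function:

```python
import json

REQUIRED_KEYS = {"name", "score"}

def parse_profile(raw_text: str) -> dict:
    """Parse and validate LLM output before it touches graph state."""
    try:
        data = json.loads(raw_text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM did not return JSON: {raw_text[:80]!r}") from exc
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"LLM JSON missing keys: {sorted(missing)}")
    return data

print(parse_profile('{"name": "Ada", "score": 0.92}'))
```

Keeping this validation inside the node means a parser failure points at the LLM boundary, not at LangGraph's state merging.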
4) Returning mutable objects that get reused across branches
This is subtle. If you reuse the same list or dict instance across nodes, parallel execution can produce weird merge behavior.
```python
shared = []

def node_a(state):
    shared.append("a")
    return {"logs": shared}  # ❌ shared mutable object

def node_b(state):
    shared.append("b")
    return {"logs": shared}

# fix: create fresh objects per call
def node_a_fixed(state):
    return {"logs": ["a"]}

def node_b_fixed(state):
    return {"logs": ["b"]}
```
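Running the broken pattern in plain Python makes the problem visible: both node outputs alias the same list, so the update that `node_a` already returned keeps mutating afterwards:

```python
shared = []

def node_a(state):
    shared.append("a")
    return {"logs": shared}  # returns a reference, not a snapshot

def node_b(state):
    shared.append("b")
    return {"logs": shared}

out_a = node_a({})
out_b = node_b({})

# Both updates point at the same list object:
print(out_a["logs"] is out_b["logs"])  # True
print(out_a["logs"])                   # ['a', 'b'] - not the ['a'] you expected
```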
How to Debug It
1) Check the exact exception class
- If you see `langgraph.errors.InvalidUpdateError`, focus on what your node returned.
- If you see `OutputParserException`, inspect the LLM/parser boundary first.
- If you see merge-related failures during fan-out, inspect the reducers on shared keys.
2) Print every node's returned value
- Add logging right before each `return`.
- Confirm each node returns a plain `dict`, not a string, list, Pydantic model, or AI message object, unless your graph explicitly expects that type.

```python
def debug_node(state):
    out = {"result": "ok"}
    print("debug_node output:", out)
    return out
```

3) Inspect your state schema
- Look for keys written by more than one node.
- Any shared list or dict needs an explicit merge strategy such as `operator.add` or a custom reducer.
4) Run one branch at a time
- Temporarily remove parallel edges and conditional routing.
- If the error disappears, you have a merge conflict or inconsistent branch output.
- Re-enable branches one by one until it breaks again.
Prevention
- Define reducers for every shared collection field in your state schema.
- Make every node return only partial updates as plain dictionaries.
- Keep LLM parsing separate from graph orchestration so parser failures are easier to isolate.
- Add small integration tests that execute each branch path and assert on the returned state shape.
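As a sketch of that last point, a pytest-style check (run directly here, no test runner needed) can pin down the exact shape each node returns:

```python
def summarize(state):
    return {"result": "summary text"}

def test_summarize_returns_partial_dict():
    out = summarize({"messages": ["hi"], "result": ""})
    assert isinstance(out, dict)
    # Only the fields the node changes, nothing else:
    assert set(out) == {"result"}

test_summarize_returns_partial_dict()
print("ok")
```

A handful of these, one per branch path, will catch a wrong return type long before it surfaces as a merge error under concurrency.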
If you are hitting this in production code, start with the node outputs and state schema. In LangGraph, “scaling” errors are usually just bad assumptions about how state gets merged under concurrency.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.