How to Fix 'output parsing error during development' in LangGraph (Python)

By Cyprian Aarons · Updated 2026-04-21

What this error means

An 'output parsing error during development' in LangGraph usually means one node returned data that did not match the schema or type the graph expected. In practice, it shows up when a model returns free-form text but your node, tool, or structured output handler expects JSON, a Pydantic model, or a specific state shape.

You’ll see this most often during local development when wiring StateGraph, ToolNode, JsonOutputParser, and with_structured_output() together.

The Most Common Cause

The #1 cause is returning the wrong shape from a node. LangGraph state updates must match the state schema exactly, and LLM outputs must be parsed into that schema before you return them.

A common broken pattern is treating raw model text like structured state.

  • Broken: returns a plain string into typed state. Fixed: parse into a dict/model before returning.
  • Broken: assumes model output is valid JSON. Fixed: force structured output or validate first.
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI

class State(TypedDict):
    query: str
    result: str

llm = ChatOpenAI(model="gpt-4o-mini")

# ❌ Broken: returns AIMessage content directly as if it were valid state
def generate(state: State):
    response = llm.invoke(f"Answer this: {state['query']}")
    return {"result": response.content}  # may be unstructured text

# ✅ Fixed: force structured output or normalize the result before returning
class Output(TypedDict):
    answer: str

structured_llm = llm.with_structured_output(Output)

def generate_fixed(state: State):
    response = structured_llm.invoke(f"Answer this: {state['query']}")
    return {"result": response["answer"]}

graph = StateGraph(State)
graph.add_node("generate", generate_fixed)
graph.set_entry_point("generate")
graph.add_edge("generate", END)
app = graph.compile()

If you’re using a parser, the same rule applies:

from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()

# ❌ Broken: raw text may not be valid JSON
raw = llm.invoke("Return a JSON object with key answer")
data = parser.parse(raw.content)  # raises OutputParserException on free-form text

# ✅ Fixed: constrain the prompt and validate the shape before updating state
def generate_parsed(state: State):
    raw = llm.invoke('Return ONLY valid JSON like {"answer": "..."}')
    data = parser.parse(raw.content)
    if "answer" not in data:
        raise ValueError(f"Expected key 'answer' in model output, got: {data}")
    return {"result": data["answer"]}

The actual runtime failure often looks like one of these:

  • langchain_core.exceptions.OutputParserException
  • langgraph.errors.InvalidUpdateError
  • pydantic_core._pydantic_core.ValidationError
  • ValueError: Expected dict, got str

That combination tells you the graph received something it could not merge into state.
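Before reaching for graph-level fixes, you can fail fast at the parse boundary yourself. Here is a minimal stdlib sketch of that idea, using json directly rather than a LangChain parser (parse_json_or_raise is an illustrative name, not a library API):

```python
import json

def parse_json_or_raise(text: str, required_key: str) -> dict:
    """Parse model output as JSON and confirm the key the graph needs exists."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model returned non-JSON output: {text!r}") from exc
    if not isinstance(data, dict) or required_key not in data:
        raise ValueError(f"Missing required key {required_key!r}: {data!r}")
    return data

# Valid output passes through untouched
print(parse_json_or_raise('{"answer": "42"}', "answer"))

# Free-form text fails loudly at the boundary, not deep inside the graph
try:
    parse_json_or_raise("Sure! The answer is 42.", "answer")
except ValueError as err:
    print("caught:", err)
```

The point is that a ValueError raised inside your own node is far easier to trace than a merge failure three nodes later.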

Other Possible Causes

1. Node returns a list instead of a dict

LangGraph nodes generally need to return partial state updates as dictionaries.

# ❌ Broken
def classify(state):
    return ["fraud", "high_risk"]

# ✅ Fixed
def classify(state):
    return {"labels": ["fraud", "high_risk"]}

If your state is typed with TypedDict or Pydantic, returning a list triggers merge failures immediately.

2. Tool output is not wrapped correctly in agent/tool flows

When using ToolNode, tool results must align with the tool calling protocol. If you manually call tools and return plain text, downstream nodes can fail parsing.

# ❌ Broken
def call_tool(state):
    result = my_tool.invoke({"account_id": "123"})
    return {"messages": [result]}  # wrong type if downstream expects AIMessage/ToolMessage

# ✅ Fixed
from langchain_core.messages import ToolMessage

def call_tool_fixed(state):
    result = my_tool.invoke({"account_id": "123"})
    return {
        "messages": [
            ToolMessage(content=str(result), tool_call_id=state["messages"][-1].tool_calls[0]["id"])
        ]
    }

If you’re using prebuilt agents, keep tool messages in the expected message format.

3. Pydantic model mismatch

If your graph state uses Pydantic and your node returns fields with wrong types, validation fails during update.

from pydantic import BaseModel

class State(BaseModel):
    score: int

# ❌ Broken
def score_node(state):
    return {"score": "high"}

# ✅ Fixed
def score_node_fixed(state):
    return {"score": 90}

This usually surfaces as a ValidationError, not just a parsing exception.
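The check Pydantic performs at merge time can be sketched with the standard library alone, which makes the failure mode easy to see in isolation (ScoreState and validate_update are illustrative names, not LangGraph or Pydantic APIs):

```python
from typing import get_type_hints

class ScoreState:  # stand-in for the Pydantic State model above
    score: int

def validate_update(update: dict) -> dict:
    """Reject a partial state update whose value types do not match the schema."""
    hints = get_type_hints(ScoreState)
    for key, value in update.items():
        expected = hints.get(key)
        if expected is not None and not isinstance(value, expected):
            raise TypeError(
                f"{key}: expected {expected.__name__}, got {type(value).__name__}"
            )
    return update

print(validate_update({"score": 90}))    # passes

try:
    validate_update({"score": "high"})   # the broken node's output
except TypeError as err:
    print("caught:", err)
```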

4. Prompt allows extra text around JSON

Models love adding markdown fences, explanations, or trailing comments. That breaks parsers expecting strict JSON.

prompt = """
Return JSON only:
{"decision": "approve"}
"""

# ❌ Broken model output:
# Sure — here is the JSON:
# {"decision": "approve"}

# ✅ Fix by tightening the instruction and parsing defensively
prompt = """
Return ONLY valid JSON.
No markdown.
No explanation.
Schema:
{"decision": "approve"}
"""

If you need reliability, use with_structured_output() instead of hoping prompt discipline holds under load.
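Even with a tight prompt, models sometimes wrap the JSON in markdown fences anyway. A small stdlib helper can strip them defensively before parsing (extract_json is an illustrative name):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Strip an optional markdown code fence, then parse the remainder as JSON."""
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    return json.loads(cleaned)

print(extract_json('{"decision": "approve"}'))                # bare JSON
print(extract_json('```json\n{"decision": "approve"}\n```'))  # fenced JSON
```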

How to Debug It

  1. Print every node output before returning it

    • Log the exact Python object.
    • Check whether it is a dict, list, str, or BaseMessage.
  2. Compare output keys against your state schema

    • If your state says {"result": str}, don’t return {"results": ...}.
    • Misspelled keys are a common source of silent confusion until merge time.
  3. Disable downstream nodes and isolate one step

    • Run only the failing node.
    • If it parses fine alone but fails in graph execution, the issue is usually state shape or message formatting.
  4. Inspect parser/model boundaries

    • If you use JsonOutputParser, verify raw LLM content first.
    • If you use with_structured_output(), confirm the target schema matches what you actually want stored in graph state.

A simple debug wrapper helps:

def debug_node(fn):
    def wrapper(state):
        out = fn(state)
        print("NODE OUTPUT:", type(out), out)
        return out
    return wrapper

Wrap suspicious nodes and check what they emit before LangGraph merges them.
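Applied to the broken classify node from earlier, the wrapper makes the bad shape obvious before LangGraph ever tries to merge it (the wrapper is repeated here so the snippet runs standalone):

```python
def debug_node(fn):
    def wrapper(state):
        out = fn(state)
        print("NODE OUTPUT:", type(out), out)
        return out
    return wrapper

def classify(state):
    return ["fraud", "high_risk"]   # wrong shape: list, not dict

result = debug_node(classify)({})
# prints: NODE OUTPUT: <class 'list'> ['fraud', 'high_risk']
assert not isinstance(result, dict)  # exactly the shape LangGraph rejects
```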

Prevention

  • Use typed graph state from day one:
    • Prefer TypedDict or Pydantic models over loose dictionaries.
  • Force structure at the LLM boundary:
    • Use with_structured_output() for anything that must become machine-readable.
  • Keep node contracts explicit:
    • Each node should document exactly which keys it updates and what types they hold.
  • Add unit tests for node outputs:
    • Test that each node returns a dict matching your schema before wiring the full graph.
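Such a test needs no graph or model at all; a plain function plus a stub is enough. A sketch of that last bullet, where fake_llm and the specific assertions are illustrative:

```python
from typing import TypedDict, get_type_hints

class State(TypedDict):
    query: str
    result: str

def fake_llm(prompt: str) -> dict:
    """Stub standing in for a structured-output model call."""
    return {"answer": "stubbed"}

def generate(state: State) -> dict:
    response = fake_llm(state["query"])
    return {"result": response["answer"]}

def test_generate_returns_partial_state():
    out = generate({"query": "hello", "result": ""})
    assert isinstance(out, dict)                   # nodes must return dicts
    assert set(out) <= set(get_type_hints(State))  # only known state keys
    assert isinstance(out["result"], str)          # value types match the schema

test_generate_returns_partial_state()
print("node output test passed")
```

Swap the stub for your real model call behind an interface and the same assertions guard the node contract in CI.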

If you are seeing OutputParserException or InvalidUpdateError, stop looking at LangGraph internals first. In most cases, the bug is simply that one node returned text when the graph needed structured state.



By Cyprian Aarons, AI Consultant at Topiax.
