How to Fix 'output parsing error in production' in LangGraph (Python)
An "output parsing error in production" usually means LangGraph expected structured output from a node or model, but got something it couldn't coerce into the schema you defined. In practice, this shows up when you're using PydanticOutputParser, JsonOutputParser, tool calling, or a typed state, and the model returns extra text, malformed JSON, or the wrong shape.
It tends to happen after a deployment change: new prompt, new model, different temperature, or a state schema that drifted from the actual node output.
The Most Common Cause
The #1 cause is returning free-form LLM text where LangGraph expects structured data.
If your node is supposed to return a dict matching your state schema, but you return a raw string or partially parsed JSON, LangGraph will fail during state validation or downstream parsing. You’ll usually see errors like:
- `langgraph.errors.OutputParserException`
- `langchain_core.exceptions.OutputParserException`
- `ValidationError` from Pydantic
- `InvalidUpdateError` when the node returns the wrong state shape
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Returns raw model text | Returns validated dict/object |
| Parses JSON by string slicing | Uses structured output / explicit parser |
| Lets the LLM decide format loosely | Forces schema at the edge |
```python
# BROKEN
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph

class State(TypedDict):
    question: str
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

def answer_node(state: State):
    prompt = f"Answer this question: {state['question']}"
    result = llm.invoke(prompt)
    # result is an AIMessage, not a dict matching State
    return {"answer": result.content}  # often fine until downstream expects structured output

graph = StateGraph(State)
graph.add_node("answer_node", answer_node)
```
```python
# FIXED
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph
from pydantic import BaseModel, Field

class AnswerSchema(BaseModel):
    answer: str = Field(..., description="Final answer only")

class State(TypedDict):
    question: str
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
structured_llm = llm.with_structured_output(AnswerSchema)

def answer_node(state: State):
    result = structured_llm.invoke(
        f"Answer this question clearly: {state['question']}"
    )
    return {"answer": result.answer}

graph = StateGraph(State)
graph.add_node("answer_node", answer_node)
```
If you’re using parsers directly, keep them at the boundary and validate before returning into LangGraph state.
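That boundary pattern can be sketched without any LangChain machinery. Here `fake_llm` is a hypothetical stand-in for `llm.invoke(...).content`; the point is that JSON parsing and key validation both happen before anything is returned into graph state:

```python
import json

# Hypothetical stand-in for llm.invoke(...).content -- in production this
# would be the raw text coming back from the model.
def fake_llm(prompt: str) -> str:
    return '{"answer": "42"}'

EXPECTED_KEYS = {"answer"}

def answer_node(state: dict) -> dict:
    raw = fake_llm(f"Answer this question: {state['question']}")
    parsed = json.loads(raw)  # fail fast on malformed JSON
    missing = EXPECTED_KEYS - parsed.keys()
    if missing:
        raise ValueError(f"model output missing keys: {missing}")
    # Only validated keys cross the boundary into graph state
    return {"answer": parsed["answer"]}
```

If the model misbehaves, this fails loudly inside the node, where the stack trace tells you exactly which contract broke.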
Other Possible Causes
1) The model returned invalid JSON
This is common with JsonOutputParser when the assistant adds commentary around the JSON.
```python
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()

# BAD prompt behavior:
# "Sure — here's the JSON:\n{...}"
# parser chokes on extra text

# Better:
prompt = """
Return ONLY valid JSON.
No markdown.
No explanation.
"""
```
2) Your node returns the wrong keys for the graph state
LangGraph merges node outputs into state. If your state expects {"decision": ...} and you return {"decison": ...}, you can trigger downstream failures that look like parsing issues.
```python
# BAD
return {"decison": "approve"}  # typo

# GOOD
return {"decision": "approve"}
```
For typed states, this gets caught faster:
```python
from typing import TypedDict, Literal

class State(TypedDict):
    decision: Literal["approve", "reject"]
```
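You can also catch this class of bug at runtime with a small guard that compares a node's return value against the TypedDict's declared keys. `check_update` is a hypothetical helper of mine, not a LangGraph API:

```python
from typing import Literal, TypedDict, get_type_hints

class State(TypedDict):
    decision: Literal["approve", "reject"]

def check_update(update: dict, schema: type) -> dict:
    """Raise if a node's output contains keys the state schema doesn't declare."""
    allowed = set(get_type_hints(schema))
    unknown = set(update) - allowed
    if unknown:
        raise KeyError(f"unknown state keys: {unknown}")
    return update
```

Wrapping each node's return value in `check_update(...)` turns a silent typo into an immediate, named failure at the node that caused it.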
3) Tool calling is enabled, but your prompt doesn't constrain tool use correctly
When using agents with tools, the model may emit plain text instead of tool calls or final structured output. That often surfaces as an output parsing failure in agent executors.
```python
# Example config issue
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# If you're expecting tool calls:
agent = create_react_agent(llm, tools)

# But if your prompt says "Just answer naturally",
# parsing can fail when the agent expects an action/finish format.
```
4) Temperature is too high for structured extraction
At higher temperatures, models drift from strict schemas more often. For production parsing flows, keep it deterministic.
```python
llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
)
```
If you need creativity elsewhere, split that into a separate node. Don’t mix creative generation with schema-sensitive extraction in one step.
How to Debug It
1. Print the raw model response before parsing.
   - Check whether you're getting valid JSON, markdown fences, or extra prose.
   - Log `message.content` or the full `AIMessage`.
2. Validate each node output against its expected schema.
   - If you're using Pydantic, call `model_validate`.
   - If validation fails there, LangGraph isn't the problem; your node contract is.
3. Turn off complexity.
   - Remove tools.
   - Set `temperature=0`.
   - Replace multi-step prompts with one simple extraction prompt.
   - Re-run until you isolate which node breaks.
4. Check graph state updates.
   - Make sure every node returns only keys defined in state.
   - Watch for typos and nested shapes that don't match your `TypedDict` or Pydantic model.
A good debugging loop looks like this:
```python
def answer_node(state: State):
    raw = llm.invoke(prompt)
    print("RAW:", raw.content)

    parsed = parser.parse(raw.content)
    print("PARSED:", parsed)

    return {"answer": parsed["answer"]}
```
If it fails at parser.parse, fix prompting/schema. If it fails after returning into LangGraph, fix the state shape.
Prevention
- Use `with_structured_output()` or explicit parsers for any node that feeds graph state.
- Keep schema-sensitive nodes deterministic: `temperature=0`, no free-form prose.
- Define your LangGraph state with `TypedDict` or Pydantic and treat it like an API contract.
- Separate generation nodes from extraction nodes so one bad completion doesn't poison the whole graph.
If you're seeing this in production, assume it's a contract mismatch first. In LangGraph, most “parsing” bugs are really “the model didn't return what the graph expected” bugs.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.