How to Fix 'output parsing error' in LangGraph (Python)
An 'output parsing error' in LangGraph usually means one of two things: the model returned text that did not match the structured output your graph expected, or a downstream node tried to parse a value that was None, malformed, or wrapped in the wrong shape.
You typically see this when using ToolNode, PydanticOutputParser, structured responses, or a conditional edge that expects a specific state key and gets something else. The stack trace often includes langchain_core.exceptions.OutputParserException or an error from LangGraph state validation.
The Most Common Cause
The #1 cause is a mismatch between what your LLM returns and what your graph expects. In practice, this happens when you ask the model for JSON, but the model returns plain text, markdown fences, or an object missing required fields.
Here’s the broken pattern versus the fixed pattern.
| Broken | Fixed |
|---|---|
| Model returns free-form text, but the parser expects `{"answer": "..."}` | Model returns strict structured output |
| Graph node passes raw string into parser | Graph node validates and normalizes output |
```python
# BROKEN: parser expects structured JSON, but the model can return anything
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
parser = JsonOutputParser()
prompt = ChatPromptTemplate.from_messages([
    ("system", "Return only JSON with keys: answer, confidence"),
    ("user", "{question}"),
])
chain = prompt | llm | parser
result = chain.invoke({"question": "What is the claim status?"})

# If the model returns:
#   Sure — the claim is approved.
# you get:
#   langchain_core.exceptions.OutputParserException
```
```python
# FIXED: enforce structured output with a schema
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

class AnswerSchema(BaseModel):
    answer: str = Field(..., description="Direct answer")
    confidence: float = Field(..., ge=0.0, le=1.0)

llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(AnswerSchema)
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the user accurately."),
    ("user", "{question}"),
])
chain = prompt | structured_llm
result = chain.invoke({"question": "What is the claim status?"})
# result is now an AnswerSchema instance
```
If you’re using LangGraph state nodes, the same idea applies. Don’t let one node emit loose text if another node expects typed state.
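One lightweight way to enforce that contract at a node boundary is to validate the raw output against the same schema before it is written into state. A minimal sketch (the `normalize_output` helper and its fallback values are my own, not a LangGraph API):

```python
from pydantic import BaseModel, Field, ValidationError

class AnswerSchema(BaseModel):
    answer: str
    confidence: float = Field(ge=0.0, le=1.0)

def normalize_output(raw: dict) -> dict:
    """Validate a node's raw output before it enters graph state."""
    try:
        return AnswerSchema.model_validate(raw).model_dump()
    except ValidationError:
        # Safe fallback so downstream routing never sees a malformed shape
        return {"answer": "", "confidence": 0.0}
```

The fallback values are a design choice: returning a well-shaped default keeps the graph running so you can log the failure, instead of crashing mid-run.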
Other Possible Causes
1) Your state schema does not match what your node returns
LangGraph is strict about state shape. If your node returns a key that isn’t in the schema, or misses a required field used later in routing, you’ll hit confusing parsing or validation failures.
```python
# BROKEN
from typing_extensions import TypedDict

class State(TypedDict):
    question: str
    answer: str

def generate(state: State):
    return {"result": "approved"}  # wrong key: "result" is not in State

# FIXED
def generate(state: State):
    return {"answer": "approved"}
```
2) You are passing tool output as raw text instead of tool messages
When using ToolNode, LangGraph expects tool calls and tool results in message format. If you manually stuff tool output into a string field, later nodes may fail while trying to parse it.
```python
# BROKEN: raw string instead of proper tool message flow
state["tool_result"] = "approved"

# FIXED: let ToolNode append ToolMessage objects to messages
from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools=[my_tool])
# Route tool calls through this node so results land in state["messages"]
# as ToolMessage objects, not ad hoc strings.
```
3) Your conditional edge routes on missing or malformed data
A conditional edge function should return predictable labels. If it reads state["route"] and that key is missing, your graph can fail before it reaches the next node.
```python
# BROKEN
def route(state):
    return state["route"].lower()  # KeyError if missing, AttributeError if None

# FIXED
def route(state):
    value = state.get("route", "fallback")
    return value.lower()
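Going one step further, you can whitelist the labels a conditional edge is allowed to return, so a surprise value can never reach your edge mapping. A small sketch (the `VALID_ROUTES` set and label names are illustrative):

```python
VALID_ROUTES = {"approve", "deny", "fallback"}

def route(state):
    # Coerce a missing or None route to "fallback", then reject
    # anything outside the known label set
    value = str(state.get("route") or "fallback").lower()
    return value if value in VALID_ROUTES else "fallback"
```

This guards against the case `.get()` with a default misses: the key exists but holds None or garbage.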
4) Your prompt encourages markdown fences around JSON
Models love wrapping JSON in triple backticks. That breaks parsers that expect raw JSON.
````python
# BROKEN prompt instruction
("system", "Return JSON like ```json {\"answer\": \"...\"} ```")

# FIXED prompt instruction
("system", "Return raw JSON only. No markdown fences. No extra text.")
````
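Even with a strict prompt, models occasionally add fences anyway, so it is worth defending in code before parsing. A minimal sketch (the `strip_markdown_fences` helper is my own, not part of LangChain):

```python
import re

def strip_markdown_fences(text: str) -> str:
    """Unwrap a fenced code block (optionally tagged json) if present."""
    match = re.search(r"`{3}(?:json)?\s*(.*?)\s*`{3}", text, re.DOTALL)
    return match.group(1) if match else text.strip()
```

Run this on the raw model text before handing it to `JsonOutputParser` or `json.loads`, and the fenced and unfenced cases both parse.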
How to Debug It
1. Inspect the exact exception class
   - Look for `langchain_core.exceptions.OutputParserException`.
   - If it's a LangGraph state issue, you may also see `KeyError`, `ValidationError`, or routing errors tied to your node function.
2. Print each node's input and output
   - Add logging inside every node.
   - Confirm whether the failing node receives a string, dict, list of messages, or Pydantic object.

```python
def debug_node(state):
    print("INPUT:", state)
    result = {"answer": "ok"}
    print("OUTPUT:", result)
    return result
```

3. Test the LLM response outside LangGraph
   - Call the model directly with the same prompt.
   - Verify whether it returns valid JSON or structured data before involving graph execution.
4. Temporarily remove parsers and conditional edges
   - Replace structured parsing with plain passthrough.
   - If the graph works without parsing, your issue is schema mismatch or model formatting.
   - If it still fails, your problem is likely state shape or routing logic.
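Rather than hand-editing every node to print its input and output, you can wrap them all with one decorator. A sketch (the `logged` helper is my own, not a LangGraph utility):

```python
import functools

def logged(node):
    """Wrap a node function to print its input and output for debugging."""
    @functools.wraps(node)
    def wrapper(state):
        print(f"{node.__name__} INPUT:", state)
        result = node(state)
        print(f"{node.__name__} OUTPUT:", result)
        return result
    return wrapper

@logged
def generate(state):
    return {"answer": "ok"}
```

Because `functools.wraps` preserves the node's name, the log lines tell you exactly which node produced the bad shape.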
Prevention
- Use typed state models early:
  - Prefer `TypedDict` or Pydantic models for graph state.
  - Make every node return exactly what downstream nodes expect.
- Use structured outputs instead of post-hoc parsing:
  - Prefer `with_structured_output(...)` over regexes and ad hoc JSON parsing.
  - Keep prompts explicit: "Return raw JSON only."
- Add guardrails around routing:
  - Use `.get()` with defaults in conditional edge functions.
  - Never assume optional keys exist unless you set them in every path.
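These contracts are also cheap to pin down in unit tests before a graph ever runs. A sketch of a schema check for node outputs (the `check_node_contract` helper is my own, not a LangGraph API):

```python
from typing import TypedDict

class State(TypedDict):
    question: str
    answer: str

def check_node_contract(node, sample_state,
                        allowed_keys=frozenset(State.__annotations__)):
    """Fail fast in tests if a node emits keys outside the state schema."""
    out = node(sample_state)
    extra = set(out) - set(allowed_keys)
    assert not extra, f"node returned unknown keys: {extra}"
    return out
```

Running each node through a check like this catches the "wrong key" bug from cause 1 in a test instead of mid-run.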
If you're seeing an 'output parsing error' in LangGraph Python code, check the output shape first. In most cases, the bug is not LangGraph itself — it's a contract mismatch between prompt, parser, and graph state.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.