# How to Fix 'JSON parsing error during development' in LangGraph (Python)
A JSON parsing error during development in LangGraph usually means one of your nodes, tools, or model outputs returned text that LangGraph tried to treat as structured JSON, but the payload was invalid. You typically hit this when using `StructuredOutputParser`, `JsonOutputParser`, tool calling, or state updates that expect a dict but receive a string.
In practice, the failure shows up during graph execution, often after an LLM response like this:
```
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
Or inside LangGraph/LangChain plumbing:
```
langchain_core.output_parsers.json.JSONDecodeError
```
## The Most Common Cause
The #1 cause is returning a plain string where LangGraph expects a JSON-serializable Python object.
This happens a lot in node functions. A node returns `"done"` or raw model text, but the graph state schema expects a dict update like `{"status": "done"}`.
### Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Returns a string | Returns a dict |
| LangGraph cannot merge it into state | State update is explicit and serializable |
```python
# BROKEN
from langgraph.graph import StateGraph, START, END
from typing import TypedDict

class State(TypedDict):
    status: str

def set_status(state: State):
    return "done"  # ❌ LangGraph expects a dict-like state update

graph = StateGraph(State)
graph.add_node("set_status", set_status)
graph.add_edge(START, "set_status")
graph.add_edge("set_status", END)
app = graph.compile()
```
```python
# FIXED
from langgraph.graph import StateGraph, START, END
from typing import TypedDict

class State(TypedDict):
    status: str

def set_status(state: State):
    return {"status": "done"}  # ✅ valid state update

graph = StateGraph(State)
graph.add_node("set_status", set_status)
graph.add_edge(START, "set_status")
graph.add_edge("set_status", END)
app = graph.compile()
```
If you are using an LLM node with structured output, the same rule applies. If your parser expects JSON, make sure the model is actually constrained to emit JSON.
```python
# BROKEN: free-form text sent into JSON parser
import json

response = llm.invoke("Summarize this ticket.")
data = json.loads(response.content)  # fails if content is not strict JSON

# FIXED: force structured output
# TicketSummary is a Pydantic model you define for the expected fields
structured_llm = llm.with_structured_output(TicketSummary)
result = structured_llm.invoke("Summarize this ticket.")
# result is already parsed into a Python object/dict-like structure
```
## Other Possible Causes
### 1. Malformed JSON from the model
If you manually parse LLM output, even one trailing comma breaks it.
```python
bad = '{"name": "Alice",}'   # ❌ invalid JSON (trailing comma)
good = '{"name": "Alice"}'   # ✅ valid JSON
```
This often surfaces as:
```
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes
```
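Another common flavor of malformed output is valid JSON wrapped in markdown code fences by the model. A minimal, tolerant parse step can strip the fences before handing the text to `json.loads` (`parse_model_json` is an illustrative helper, not a LangGraph API):

```python
import json
import re

def parse_model_json(text: str) -> dict:
    """Parse JSON from model output, tolerating ```json ... ``` fences.

    Illustrative helper: strips a leading/trailing markdown fence if
    present, then parses strictly.
    """
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    return json.loads(cleaned)

print(parse_model_json('```json\n{"name": "Alice"}\n```'))  # {'name': 'Alice'}
```

This only papers over formatting noise; it will still raise on genuinely invalid JSON such as trailing commas, which is the behavior you want during development.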
### 2. Tool output not matching the declared schema
If your tool returns text but your graph expects structured data, parsing fails downstream.
```python
# BROKEN
def lookup_customer(customer_id: str):
    return f"Customer {customer_id} found"

# FIXED
def lookup_customer(customer_id: str):
    return {"customer_id": customer_id, "found": True}
```
If you're using `@tool`, make sure the return type matches what your downstream node expects.
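One way to catch this mismatch early is a small runtime check on the tool's return value before it enters the graph. This is a sketch under the assumption that the downstream node needs specific keys; `validate_tool_output` and `lookup_customer` here are illustrative, not LangGraph APIs:

```python
import json

def lookup_customer(customer_id: str) -> dict:
    # Hypothetical tool: returns structured data, not free-form text.
    return {"customer_id": customer_id, "found": True}

def validate_tool_output(payload, required_keys):
    """Fail fast if a tool's return value is not a JSON-serializable
    dict carrying the keys the downstream node expects."""
    if not isinstance(payload, dict):
        raise TypeError(f"expected dict, got {type(payload).__name__}")
    missing = required_keys - payload.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    json.dumps(payload)  # raises TypeError if any value is not serializable
    return payload

result = validate_tool_output(lookup_customer("C-42"), {"customer_id", "found"})
```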
### 3. Mixing message objects and raw strings incorrectly
LangGraph nodes that work with messages should return message objects or proper state updates, not arbitrary strings stuffed into message arrays.
```python
# BROKEN
return {"messages": ["Approved"]}

# FIXED
from langchain_core.messages import AIMessage

return {"messages": [AIMessage(content="Approved")]}
```
If you're using `MessagesState`, keep message types consistent across nodes.
### 4. Invalid serializer input in config or checkpointing
Sometimes the error is not from the model at all. It comes from checkpoint data or config values that were serialized incorrectly.
```python
# BROKEN: non-serializable object in config/state
return {"metadata": set([1, 2, 3])}  # sets are not JSON serializable

# FIXED
return {"metadata": [1, 2, 3]}
```
This can also happen if you store custom classes directly in state instead of plain dicts/lists/strings/numbers.
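A small conversion helper can turn the common offenders into primitives at the boundary. This is a minimal sketch using `json.dumps`'s `default` hook; `to_json_safe` is an illustrative name, not a LangGraph utility:

```python
import json
from datetime import datetime, timezone

def to_json_safe(value):
    """Convert common non-serializable values to JSON-safe primitives
    before they go into graph state (illustrative helper)."""
    if isinstance(value, set):
        return sorted(value)          # sets -> sorted lists
    if isinstance(value, datetime):
        return value.isoformat()      # datetimes -> ISO 8601 strings
    raise TypeError(f"{type(value).__name__} is not JSON serializable")

state_update = {
    "metadata": {1, 3, 2},
    "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc),
}
print(json.dumps(state_update, default=to_json_safe))
# {"metadata": [1, 2, 3], "created_at": "2024-01-01T00:00:00+00:00"}
```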
## How to Debug It
- **Print the exact node output before returning it.**
  - Add logging inside each node.
  - Confirm whether you're returning a string, dict, or message object.

  ```python
  def my_node(state):
      result = llm.invoke(...)
      print(type(result), result)
      return result
  ```

- **Check whether the failing step expects structured output.**
  - Look for `JsonOutputParser`, `PydanticOutputParser`, `with_structured_output()`, or typed state updates.
  - If yes, validate that the payload is valid JSON or a proper Python dict.

- **Isolate the failing node.**
  - Run nodes one by one outside the graph.
  - If `json.loads()` fails standalone, the issue is upstream in prompt formatting or model output.
  - If standalone works but graph execution fails, your return type is wrong for LangGraph state merging.

- **Inspect serialization boundaries.**
  - Check anything stored in state: custom classes, sets, datetime objects without conversion, raw response objects from SDKs.
  - Convert them to primitives before returning them to the graph.
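The isolation step can be sketched as a tiny harness that runs a node function outside the graph and checks its return value is something LangGraph could merge into state. `check_node_output` and the sample `set_status` node are illustrative, not LangGraph APIs:

```python
import json

def check_node_output(node_fn, sample_state):
    """Run a node function standalone and verify its return value is a
    JSON-serializable dict state update (illustrative debug helper)."""
    result = node_fn(sample_state)
    print(type(result).__name__, result)
    assert isinstance(result, dict), "node must return a dict state update"
    json.dumps(result)  # raises TypeError on non-serializable values
    return result

def set_status(state):
    return {"status": "done"}

check_node_output(set_status, {"status": ""})  # passes the checks
```

If a node fails here, you have found the problem before the graph's state merging or checkpointing ever runs.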
## Prevention
- **Return only JSON-serializable values from nodes:** `dict`, `list`, `str`, `int`, `float`, `bool`, `None`.
- **Use typed schemas for structured outputs:**
  - Pydantic models with `with_structured_output()`
  - explicit response models instead of parsing free-form text manually
- **Keep message handling consistent:**
  - use `AIMessage`, `HumanMessage`, and `ToolMessage`
  - do not mix raw strings into message lists
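One way to enforce the first rule mechanically is a decorator that fails fast whenever a node returns something that is not a serializable dict. This is a sketch, not a LangGraph feature; `serializable_update` is a hypothetical name:

```python
import functools
import json

def serializable_update(node_fn):
    """Decorator that raises immediately if a node returns something
    LangGraph could not merge into state (illustrative sketch)."""
    @functools.wraps(node_fn)
    def wrapper(state):
        update = node_fn(state)
        if not isinstance(update, dict):
            raise TypeError(
                f"{node_fn.__name__} returned "
                f"{type(update).__name__}, expected dict"
            )
        json.dumps(update)  # TypeError if values aren't JSON-serializable
        return update
    return wrapper

@serializable_update
def set_status(state):
    return {"status": "done"}
```

Wrapping every node this way during development turns a confusing downstream `JSONDecodeError` into an immediate, named failure at the node that caused it.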
If you want to avoid this class of bug entirely, treat every LangGraph boundary as a serialization boundary. Once you do that, most “JSON parsing error during development” issues become obvious within minutes instead of hours.
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.