How to Fix 'JSON parsing error in production' in LangGraph (Python)
A JSON parsing error in production in LangGraph usually means one node returned data that another node tried to treat as structured JSON, but the payload was malformed, double-encoded, or not JSON at all. In practice, this shows up when you pass model output, tool output, or state updates through a graph edge without validating the shape first.
The failure often appears only in production because real inputs are messier than your local tests: extra text from the LLM, truncated responses, async race conditions, or a schema mismatch between nodes.
The Most Common Cause
The #1 cause is returning a string that looks like JSON instead of returning a Python dict.
In LangGraph, state updates should be Python objects that match your state schema. If you do this wrong, downstream nodes may throw errors like:
- `json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)`
- `TypeError: string indices must be integers`
- `langgraph.errors.InvalidUpdateError: Expected dict, got str`
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Returns raw string from an LLM/tool | Parses into a dict before returning |
| Assumes JSON text is already a Python object | Validates and normalizes the payload |
```python
# BROKEN
from typing import TypedDict

from langgraph.graph import StateGraph, END

class State(TypedDict):
    customer_name: str
    risk_score: int

def classify_risk(state: State):
    # LLM returns a string like '{"risk_score": 7}'
    llm_output = '{"risk_score": 7}'
    # Wrong: returning string directly into graph state
    return llm_output

graph = StateGraph(State)
graph.add_node("classify_risk", classify_risk)
```
```python
# FIXED
import json
from typing import TypedDict

from langgraph.graph import StateGraph, END

class State(TypedDict):
    customer_name: str
    risk_score: int

def classify_risk(state: State):
    llm_output = '{"risk_score": 7}'
    parsed = json.loads(llm_output)
    return {"risk_score": parsed["risk_score"]}

graph = StateGraph(State)
graph.add_node("classify_risk", classify_risk)
```
If the output comes from an LLM, do not trust it as-is. Parse it, validate it, and only then merge it into state.
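A minimal guard along these lines keeps unvalidated model text out of state. The helper name `safe_parse_update` is illustrative, not a LangGraph API:

```python
import json

def safe_parse_update(raw: str, required_keys: set) -> dict:
    """Parse LLM text into a dict and verify the expected keys exist."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model output is not valid JSON: {raw!r}") from exc
    if not isinstance(parsed, dict):
        raise ValueError(f"Expected a JSON object, got {type(parsed).__name__}")
    missing = required_keys - parsed.keys()
    if missing:
        raise ValueError(f"Missing keys in model output: {missing}")
    return parsed
```

A node can then return `safe_parse_update(llm_output, {"risk_score"})` and fail loudly at the node that produced the bad payload, instead of somewhere downstream.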
Other Possible Causes
1) The model returned extra prose around the JSON
This is common with chat models when you ask for JSON but do not enforce structured output.
```python
# Bad prompt result
response = """
Here is the result:
{"risk_score": 7}
"""
# json.loads(response) fails because of the leading text
```
Fix by forcing structured output or extracting the JSON block first.
```python
import json

payload = response[response.find("{"):response.rfind("}") + 1]
data = json.loads(payload)
```
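The slice-based extraction breaks if the surrounding prose also contains braces. A slightly more robust sketch uses the standard library's `json.JSONDecoder.raw_decode`, which decodes from a start index and stops cleanly at the end of the first object (the helper name is hypothetical):

```python
import json

def extract_first_json(text: str) -> dict:
    """Decode the first JSON object in text, ignoring trailing prose."""
    start = text.find("{")
    if start == -1:
        raise ValueError("No JSON object found in model output")
    obj, _end = json.JSONDecoder().raw_decode(text, start)
    return obj
```

This still assumes the first `{` starts the JSON payload; structured output from the provider remains the more reliable fix.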
2) Double-encoded JSON
You serialize once too many times and end up with a JSON string inside a JSON string.
```python
import json

data = {"risk_score": 7}
bad = json.dumps(json.dumps(data))  # double encoded
good = json.dumps(data)             # correct
```
In LangGraph pipelines, this often happens when one node calls json.dumps(...) on a string that is already JSON, so a downstream node that calls json.loads(...) once still gets a string instead of a dict.
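If you cannot control the node that produces the payload, a defensive decoder can unwrap a bounded number of encoding layers. This `loads_deep` helper is a sketch, not part of LangGraph:

```python
import json

def loads_deep(value, max_depth: int = 3):
    """Keep decoding while the result is still a JSON-encoded string."""
    for _ in range(max_depth):
        if not isinstance(value, str):
            return value
        try:
            value = json.loads(value)
        except json.JSONDecodeError:
            return value
    return value
```

The depth cap avoids looping forever on pathological inputs; the real fix is still to serialize exactly once at the boundary.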
3) Invalid state update shape in a node return
LangGraph nodes should usually return partial state updates as dicts. Returning lists, strings, or custom objects can trigger graph validation errors.
```python
# Wrong
def enrich_state(state):
    return ["risk_score", 7]

# Right
def enrich_state(state):
    return {"risk_score": 7}
```
If you see `langgraph.errors.InvalidUpdateError`, check the node’s return type first.
4) Tool output is not normalized before entering the graph
Tools often return plain text or provider-specific objects. If you forward that directly into graph state, downstream nodes may fail when they expect dict-like data.
```python
# Example tool result: provider returned a plain-text payload
tool_result = "approved=true; score=91"

# Normalize inside a node before the value reaches LangGraph state
def handle_tool(state):
    return {
        "approved": True,
        "score": 91,
    }
```
For external APIs, always map raw responses into your own internal schema before passing them to graph nodes.
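As a sketch, a normalizer for the `key=value; key=value` payload above might look like this (the function name and the type-coercion rules are assumptions for illustration):

```python
def normalize_tool_result(raw: str) -> dict:
    """Map a 'key=value; key=value' text payload into a dict schema."""
    out = {}
    for pair in raw.split(";"):
        key, _, value = pair.strip().partition("=")
        if value.lower() in ("true", "false"):
            out[key] = value.lower() == "true"   # coerce booleans
        elif value.isdigit():
            out[key] = int(value)                # coerce integers
        else:
            out[key] = value                     # keep as string
    return out
```

Keeping this mapping in one place means a provider format change breaks one function, not every downstream node.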
How to Debug It
- Print every node’s input and output.
  - Log the exact value returned by each node.
  - Look for strings that contain JSON-looking text instead of Python dicts.
  - Check whether the failure starts at the first malformed node or later in the chain.
- Inspect the exception type.
  - `json.decoder.JSONDecodeError` usually means invalid JSON text.
  - `langgraph.errors.InvalidUpdateError` usually means the node returned the wrong shape.
  - `TypeError` often means a downstream node treated a string like a dict.
- Validate against your state schema.
  - Compare actual outputs with your `TypedDict`, Pydantic model, or reducer expectations.
  - If your graph expects `{"customer_name": str}`, do not return nested strings or raw model blobs.
- Isolate one edge at a time.
  - Run each node manually outside LangGraph.
  - Feed fake inputs and verify outputs are valid Python objects.
  - Then reconnect nodes one by one until the bad payload appears.
A practical pattern is to add temporary guards:
```python
def debug_node(state):
    result = some_function(state)
    print("DEBUG OUTPUT:", type(result), result)
    if not isinstance(result, dict):
        raise ValueError(f"Expected dict, got {type(result)}")
    return result
```
Prevention
- Use structured output from the model whenever possible. Prefer Pydantic schemas or provider-native JSON mode over prompt-only formatting.
- Normalize every external response at system boundaries. Convert tool/API/LLM output into your internal dict schema before it touches graph state.
- Add strict validation in each node. Fail fast on bad types instead of letting malformed data propagate through the graph.
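One way to enforce strict per-node validation with only the standard library is to wrap each node in a validating decorator. `validated` here is a hypothetical helper, not a LangGraph feature:

```python
def validated(node_fn, allowed_keys: dict):
    """Wrap a node so bad update shapes fail fast with a clear error.

    allowed_keys maps state keys to expected Python types,
    e.g. {"risk_score": int, "customer_name": str}.
    """
    def wrapper(state):
        update = node_fn(state)
        if not isinstance(update, dict):
            raise TypeError(f"Node must return a dict, got {type(update).__name__}")
        for key, value in update.items():
            expected = allowed_keys.get(key)
            if expected is None:
                raise KeyError(f"Unexpected state key: {key}")
            if not isinstance(value, expected):
                raise TypeError(f"{key} should be {expected.__name__}, "
                                f"got {type(value).__name__}")
        return update
    return wrapper
```

You would then register `graph.add_node("classify_risk", validated(classify_risk, {"risk_score": int}))` so a malformed update is rejected at the node boundary rather than deep in the graph.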
If you build LangGraph workflows this way, “JSON parsing error in production” stops being a mystery and becomes a simple contract violation between nodes.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.