How to Fix 'state not updating' in LangChain (Python)
If you’re seeing “state not updating” in LangChain, you’re usually dealing with one of two things: your chain/agent is reading from a stale variable, or your graph/state object is being mutated in a way LangChain doesn’t track. This shows up a lot when moving from simple RunnableSequence code to LangGraph-style stateful workflows.
The symptom is usually the same: the model runs, but the next step still sees the old value. In practice, that means your state update never made it into the object that the next node or chain invocation actually uses.
The Most Common Cause
The #1 cause is mutating a Python dict or list in place and expecting LangChain to notice.
LangChain and LangGraph work best when you treat state as immutable input/output. If you update a nested dict or append to a list without returning a new state object, the next step may still receive the old snapshot.
Broken vs fixed pattern
| Broken pattern | Right pattern |
|---|---|
| Mutates shared state in place | Returns a new state value |
| Easy to write, hard to debug | Explicit and predictable |
| Often fails in async / graph execution | Works reliably with LangGraph reducers |
```python
# BROKEN
from langgraph.graph import StateGraph, END
from typing import TypedDict

class ChatState(TypedDict):
    messages: list[str]
    counter: int

def add_message(state: ChatState):
    # In-place mutation
    state["messages"].append("new message")
    state["counter"] += 1
    return state  # looks fine, but this can still behave badly in graph flows

def read_state(state: ChatState):
    print(state["messages"], state["counter"])
    return state
```
```python
# FIXED
from langgraph.graph import StateGraph, END
from typing import TypedDict

class ChatState(TypedDict):
    messages: list[str]
    counter: int

def add_message(state: ChatState):
    # Return new values instead of mutating shared objects
    return {
        "messages": [*state["messages"], "new message"],
        "counter": state["counter"] + 1,
    }

def read_state(state: ChatState):
    print(state["messages"], state["counter"])
    return state
```
If you’re using LangGraph, also define reducers for fields that accumulate over time:
```python
from typing_extensions import Annotated
from operator import add
from typing import TypedDict

class ChatState(TypedDict):
    messages: Annotated[list[str], add]
    counter: int
```
Without this, you can get behavior that looks like state not updating even though each node returned something.
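To see why reducers matter, here is a minimal pure-Python sketch of the kind of merge a reducer performs. The `merge_state` helper is hypothetical and only illustrates the idea; it is not LangGraph's internal implementation. With a reducer, the old and new values are combined; without one, the latest write simply replaces the previous value.

```python
from operator import add

def merge_state(state, update, reducers):
    """Merge a node's partial update into existing state.

    Keys with a reducer are combined (old, new); keys without one
    are overwritten. Hypothetical sketch, not LangGraph internals.
    """
    merged = dict(state)
    for key, new_value in update.items():
        reducer = reducers.get(key)
        if reducer is not None:
            merged[key] = reducer(merged[key], new_value)  # combine old + new
        else:
            merged[key] = new_value  # last write wins
    return merged

state = {"messages": ["hi"], "counter": 1}
update = {"messages": ["hello back"], "counter": 2}

with_reducer = merge_state(state, update, {"messages": add})
without_reducer = merge_state(state, update, {})

print(with_reducer["messages"])     # ['hi', 'hello back']
print(without_reducer["messages"])  # ['hello back'] -- looks like "state not updating"
```

`operator.add` on two lists is plain concatenation, which is why it works as an accumulator for message lists.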
Other Possible Causes
1) You forgot to pass updated variables into the next call
This is common with plain Runnable chains and sequential Python code.
```python
# Broken
result = chain.invoke({"question": question})
answer = result["answer"]

# Next call still uses the old question
followup = followup_chain.invoke({"question": question})
```
Fix it by passing the updated value explicitly:
```python
result = chain.invoke({"question": question})
followup = followup_chain.invoke({"question": result["answer"]})
```
2) You’re mixing mutable globals with per-request state
If you store conversation data in a module-level variable, concurrent requests will overwrite each other.
```python
# Broken
conversation_state = {"messages": []}

def handler(user_msg):
    conversation_state["messages"].append(user_msg)
    return llm.invoke(conversation_state["messages"])
```
Use request-scoped state:
```python
def handler(user_msg):
    conversation_state = {"messages": [user_msg]}
    return llm.invoke(conversation_state["messages"])
```
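Here is a small self-contained demonstration of the cross-request leak. The handlers below are hypothetical stand-ins (no LLM call) so the effect is easy to see: with a module-level dict, the second "user" sees the first user's message.

```python
# Module-level state: shared by every request that hits this process
shared_state = {"messages": []}

def broken_handler(user_msg):
    # Appends into the shared dict, so requests contaminate each other
    shared_state["messages"].append(user_msg)
    return list(shared_state["messages"])

def fixed_handler(user_msg):
    # Request-scoped state: built fresh on every call
    request_state = {"messages": [user_msg]}
    return list(request_state["messages"])

first = broken_handler("alice: hi")
second = broken_handler("bob: hello")
fixed = fixed_handler("bob: hello")

print(second)  # ['alice: hi', 'bob: hello'] -- Bob sees Alice's message
print(fixed)   # ['bob: hello']
```

The same failure is worse under real concurrency, where interleaved requests can also corrupt each other mid-call, not just accumulate.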
3) Your LangGraph node returns the wrong shape
LangGraph expects node outputs to match the declared state keys. If you return a nested object or wrong key name, it may look like nothing changed.
```python
# Broken
def node(state):
    return {"messagez": ["oops"]}  # typo: messagez

# Fixed
def node(state):
    return {"messages": ["ok"]}
```
This often surfaces as silent failure rather than a loud exception unless validation catches it.
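The sketch below simulates why a misspelled key can look like "nothing changed": if the merge only applies keys declared in the schema, the typo'd key is dropped and the declared field keeps its old value. This is illustrative only; recent LangGraph versions may raise an error for unknown keys instead of silently ignoring them.

```python
DECLARED_KEYS = {"messages", "counter"}

def apply_update(state, update):
    """Apply only declared keys from a node's return value (sketch)."""
    merged = dict(state)
    for key, value in update.items():
        if key in DECLARED_KEYS:
            merged[key] = value
        # undeclared keys like "messagez" fall through silently
    return merged

state = {"messages": ["old"], "counter": 0}

stale = apply_update(state, {"messagez": ["oops"]})  # typo: ignored
fresh = apply_update(state, {"messages": ["ok"]})

print(stale["messages"])  # ['old'] -- downstream still sees stale data
print(fresh["messages"])  # ['ok']
```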
4) You’re using an old memory API incorrectly
A lot of “state not updating” reports come from mixing older memory classes with newer chain patterns.
For example, if you rely on ConversationBufferMemory but don’t wire it into the chain correctly, your history won’t persist:
```python
# Broken pattern
memory = ConversationBufferMemory()
chain = LLMChain(llm=llm, prompt=prompt)  # memory is never attached to the chain
chain.invoke({"input": "hi"})
```
You need to attach memory through the supported interface for that chain/version, or move to explicit message passing with RunnableWithMessageHistory.
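If you prefer explicit message passing, the idea is simple enough to sketch without any memory class: keep history in a per-session store and pass it into every call yourself. `fake_llm` below is a hypothetical stand-in for a real model call; `RunnableWithMessageHistory` wraps the same per-session pattern around a real runnable.

```python
# Per-session history store: each session id gets its own message list
histories: dict[str, list[str]] = {}

def fake_llm(messages):
    # Hypothetical stand-in for an actual LLM call
    return f"(reply to {len(messages)} messages)"

def chat(session_id, user_msg):
    history = histories.setdefault(session_id, [])
    history.append(f"user: {user_msg}")
    reply = fake_llm(history)       # history is passed in explicitly
    history.append(f"ai: {reply}")
    return reply

first_reply = chat("s1", "hi")
second_reply = chat("s1", "how are you?")
print(len(histories["s1"]))  # 4: two user turns + two replies
```

Because the history is passed explicitly on every call, there is no hidden wiring to get wrong, which is exactly the failure mode of the broken pattern above.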
How to Debug It
- Print the exact object before and after each node
  - Log `id(state)` and its contents.
  - If `id(state)` stays the same but values don't change where expected, you're mutating in place.
  - If values change locally but not downstream, your output shape is wrong.
- Check whether you are using LangGraph reducers
  - For accumulating lists like messages, use `messages: Annotated[list[str], add]`.
  - Without reducers, updates can get overwritten by later merges.
- Verify every node returns only declared keys
  - Compare returned keys against your `TypedDict` or Pydantic schema.
  - A typo like `message` vs `messages` is enough to make downstream nodes see stale data.
- Run one step at a time
  - Invoke each runnable/node separately.
  - Confirm the output of step N is exactly what step N+1 receives.
  - This isolates whether the bug is in your chain composition or your state model.
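The first and third checks above can be automated with a small wrapper. `debug_node` is a hypothetical helper, not a LangGraph feature: it flags a node that returns the same object it received (a sign of in-place mutation) and any keys outside the declared schema.

```python
def debug_node(fn, declared_keys):
    """Wrap a node function to flag in-place mutation and undeclared keys."""
    def wrapped(state):
        before_id = id(state)
        result = fn(state)
        unknown = set(result) - set(declared_keys)
        if unknown:
            print(f"{fn.__name__}: returned undeclared keys {unknown}")
        if id(result) == before_id:
            print(f"{fn.__name__}: returned the SAME object (possible in-place mutation)")
        return result
    return wrapped

def bad_node(state):
    state["counter"] = state.get("counter", 0) + 1  # mutates in place
    return state

checked = debug_node(bad_node, {"messages", "counter"})
result = checked({"messages": [], "counter": 0})  # prints the in-place warning
```

Wrapping each node this way while stepping through the graph makes both failure modes visible immediately instead of surfacing as stale state two nodes later.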
Prevention
- Treat LangChain/LangGraph state as immutable.
  - Build new dicts/lists instead of mutating existing ones.
- Use typed state definitions and reducers for accumulators.
  - `TypedDict`, Pydantic models, and `Annotated[..., add]` catch a lot early.
- Keep request data out of globals.
  - If multiple users hit your service, shared mutable objects will break predictability fast.
If you want one rule to remember: if downstream steps don’t see updates, stop mutating in place and start returning fresh state objects. That fixes most “state not updating” issues in LangChain Python projects.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.