How to Fix 'intermittent 500 errors during development' in LangGraph (TypeScript)
What this error usually means
An intermittent 500 in LangGraph TypeScript usually means your graph is throwing inside a node, reducer, tool, or middleware path, but the failure only shows up for certain inputs or execution orders. In practice, this often appears during local development when state shape, async behavior, or tool responses are inconsistent.
The annoying part is that the same request may succeed once and fail the next time. That points to nondeterminism: race conditions, missing state fields, invalid message shapes, or exceptions swallowed until LangGraph wraps them as a generic server error.
The Most Common Cause
The #1 cause I see is returning an invalid state update from a node or mutating shared state in place. In LangGraph, nodes must return a partial state object that matches the graph schema. If you return undefined, mutate arrays directly, or emit the wrong field type, you can get errors like:
- `InvalidUpdateError: Expected object with valid state updates`
- `TypeError: Cannot read properties of undefined`
- HTTP `500 Internal Server Error` from your dev server wrapper
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Mutates state and returns nothing | Returns a new partial update |
| Assumes fields always exist | Initializes defaults |
| Uses shared mutable arrays | Uses immutable updates |
```typescript
// BROKEN
import { StateGraph } from "@langchain/langgraph";

type State = {
  messages?: Array<{ role: string; content: string }>;
};

const graph = new StateGraph<State>({ channels: {} });

graph.addNode("appendMessage", async (state) => {
  state.messages?.push({ role: "assistant", content: "done" }); // in-place mutation
  // no return -> LangGraph sees no valid update
});
```
```typescript
// FIXED
import { StateGraph } from "@langchain/langgraph";

type State = {
  messages: Array<{ role: string; content: string }>;
};

const graph = new StateGraph<State>({
  channels: {
    // declare the key so LangGraph knows how to store `messages`
    messages: {
      value: (_current: State["messages"], update: State["messages"]) => update,
      default: () => [],
    },
  },
});

graph.addNode("appendMessage", async (state) => {
  return {
    messages: [...(state.messages ?? []), { role: "assistant", content: "done" }],
  };
});
```
If your graph uses reducers, the same rule applies. Don’t push into arrays in place and expect LangGraph to detect it reliably.
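Conceptually, a reducer merges a node's partial update into the existing channel value. Here is a minimal, dependency-free sketch in plain TypeScript (not the LangGraph API itself) of the immutable append a messages reducer should perform:

```typescript
type Msg = { role: string; content: string };

// A reducer combines the current channel value with a node's partial update.
// The node returns only its new messages; the reducer appends them without
// mutating the existing array.
function messagesReducer(current: Msg[], update: Msg[]): Msg[] {
  return [...current, ...update]; // never push into `current` in place
}

const before: Msg[] = [{ role: "user", content: "hi" }];
const after = messagesReducer(before, [{ role: "assistant", content: "done" }]);
// `before` is untouched; `after` contains both messages
```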
Other Possible Causes
1) Tool throws intermittently
A tool that depends on network timing, auth headers, or flaky test data will bubble up as a node failure.
```typescript
const tools = [
  async function lookupPolicy(input: { id: string }) {
    const res = await fetch(`https://api.example.com/policies/${input.id}`);
    if (!res.ok) throw new Error(`lookupPolicy failed with ${res.status}`);
    return res.json();
  },
];
```
Wrap it and return structured errors when possible:
```typescript
async function lookupPolicy(input: { id: string }) {
  try {
    const res = await fetch(`https://api.example.com/policies/${input.id}`);
    if (!res.ok) return { ok: false, error: `HTTP_${res.status}` };
    return { ok: true, data: await res.json() };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e.message : "unknown_error" };
  }
}
```
2) Message shape mismatch
LangGraph/LangChain message pipelines are strict. If one node returns plain strings where another expects message objects, you’ll get runtime failures.
```typescript
// BAD
return { messages: ["hello"] };

// GOOD
import { AIMessage } from "@langchain/core/messages";

return {
  messages: [new AIMessage("hello")],
};
```
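If different nodes can emit either plain strings or message objects, it can help to normalize at the boundary. A hedged sketch, with a local `ChatMessage` type standing in for the real LangChain message classes (a real app would construct `AIMessage` instances instead):

```typescript
type ChatMessage = { role: "ai" | "human" | "system"; content: string };

// Normalize mixed node outputs into one consistent message shape before
// they enter shared state, so downstream nodes never see a bare string.
function toMessage(value: string | ChatMessage): ChatMessage {
  return typeof value === "string" ? { role: "ai", content: value } : value;
}
```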
3) Missing initial state on optional branches
A branch may only run on certain inputs, so it looks intermittent.
```typescript
// BAD
if (state.user.profile.name.length > 0) {
  // ...
}

// GOOD
if ((state.user?.profile?.name ?? "").length > 0) {
  // ...
}
```
If you’re using typed state, make required fields truly required. Optional fields plus branching logic is where these bugs hide.
4) Concurrent writes to the same key
Two nodes writing to the same field without a clear reducer can produce nondeterministic results.
```typescript
// BAD idea if both nodes write to `result`
graph.addEdge("nodeA", "join");
graph.addEdge("nodeB", "join");
```
Use separate keys or a reducer designed for merging:
```typescript
type State = {
  results: string[];
};
```
Then merge explicitly instead of letting last-write-wins decide your runtime behavior.
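An explicit merge can be as simple as a concatenating reducer. This is a plain-TypeScript sketch (the `mergeResults` name and shape are illustrative, not a LangGraph API):

```typescript
// Both branches return their own contribution; the reducer concatenates
// deterministically instead of letting last-write-wins pick a survivor.
function mergeResults(current: string[], update: string | string[]): string[] {
  const additions = Array.isArray(update) ? update : [update];
  return [...current, ...additions];
}

// nodeA and nodeB each contribute one result; the join node sees both.
const afterA = mergeResults([], "from-nodeA");
const afterB = mergeResults(afterA, "from-nodeB");
```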
How to Debug It
1. Run the failing node in isolation
   - Log input and output right before returning.
   - Verify the returned object matches your declared state type.
   - If you see `undefined` or a scalar where an object is expected, that's your bug.
2. Turn on verbose tracing
   - Use LangSmith if available.
   - Add local logs around each node: `console.log("node=input", JSON.stringify(state));`
   - Look for the last successful node before the `500`.
3. Check for thrown exceptions inside tools
   - Search for uncaught `throw new Error(...)`.
   - Pay attention to external calls: `fetch` failures, bad JSON parsing, and missing env vars like `OPENAI_API_KEY`.
   - A wrapped stack trace often ends up looking like `Error in node "toolExecutor"`, then `InvalidUpdateError`, then a generic `500` from your route handler.
4. Validate every branch output
   - If you use conditional edges, make sure every branch returns compatible state.
   - Compare actual runtime output against your TypeScript types.
   - Add guard clauses for optional fields before dereferencing them.
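Running a failing node in isolation can be as simple as extracting it as a plain async function and calling it with a fixture state, with no graph, no server, and no 500 wrapper in the way:

```typescript
type State = { messages: Array<{ role: string; content: string }> };

// The node under test, written as a plain async function so it can be
// invoked directly with a hand-built fixture state.
async function appendMessage(state: State): Promise<Partial<State>> {
  return {
    messages: [...(state.messages ?? []), { role: "assistant", content: "done" }],
  };
}

async function main() {
  const fixture: State = { messages: [{ role: "user", content: "hi" }] };
  const update = await appendMessage(fixture);
  console.log(JSON.stringify(update)); // inspect the exact update shape
}
main();
```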
Prevention
- Keep node outputs pure and immutable.
- Make state schema strict; avoid "maybe" fields unless they are truly optional.
- Wrap tool calls and external APIs with explicit error handling and typed fallback values.
- Add one integration test per branch path so intermittent failures show up before you ship.
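That last point can start as a tiny dependency-free check per branch. Here `routeByLength` is a hypothetical conditional-edge router, named only for illustration:

```typescript
type State = { input: string };

// Stand-in for a conditional-edge router: picks a branch from state.
function routeByLength(state: State): "short" | "long" {
  return state.input.length > 10 ? "long" : "short";
}

// One assertion per branch path, so a branch that only fires for rare
// inputs still gets exercised on every test run.
function check(name: string, actual: string, expected: string) {
  if (actual !== expected) throw new Error(`${name}: got ${actual}`);
}

check("short branch", routeByLength({ input: "hi" }), "short");
check("long branch", routeByLength({ input: "a much longer input" }), "long");
```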
If you’re seeing a generic 500, don’t chase LangGraph first. Start with the last node that touched state, because that’s usually where the contract was broken.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.