How to Fix 'timeout error during development' in LangGraph (TypeScript)
A timeout error during development in LangGraph usually means your graph is waiting on a node, tool call, model request, or server response longer than the runtime allows. In TypeScript projects, this shows up most often while running local dev servers, hot reload, or when you call a graph from an API route that has a short execution window.
The important part: this is usually not a LangGraph bug. It’s almost always a slow node, an unbounded loop, or a runtime timeout from Node, Next.js, Vercel, or your model provider.
The Most Common Cause
The #1 cause is an agent loop that never reaches a stopping condition.
In LangGraph, this often happens when your shouldContinue logic keeps routing back to the same node because the state never changes in the way you expect. The result is repeated tool/model calls until you hit something like:
- Error: Graph execution timed out
- TimeoutError: Request timed out
- LangGraphError: Exceeded maximum number of steps
Broken vs fixed pattern
| Broken pattern | Fixed pattern |
|---|---|
| Keeps routing back to agent forever | Stops when there are no more tool calls |
| No max step guard | Explicit termination condition |
| State does not update correctly | State is updated and checked before looping |
// BROKEN
import { StateGraph, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

type AgentState = {
  messages: any[];
};

// State channels tell LangGraph how to merge updates from each node
const channels = {
  messages: {
    value: (_prev: any[], next: any[]) => next,
    default: () => [] as any[],
  },
};

const shouldContinue = (state: AgentState) => {
  // Bug: this always routes back to "agent", even after the model
  // has answered, so the graph loops until something times out
  return "agent";
};

const graph = new StateGraph<AgentState>({ channels })
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [...state.messages, response] };
  })
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue, {
    agent: "agent",
    end: END,
  })
  .compile();
// FIXED
import { StateGraph, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

type AgentState = {
  messages: any[];
};

const channels = {
  messages: {
    value: (_prev: any[], next: any[]) => next,
    default: () => [] as any[],
  },
};

const shouldContinue = (state: AgentState) => {
  const lastMessage = state.messages[state.messages.length - 1];
  // Stop as soon as the model responds without requesting a tool
  const hasToolCall = (lastMessage?.tool_calls?.length ?? 0) > 0;
  return hasToolCall ? "agent" : END;
};

const graph = new StateGraph<AgentState>({ channels })
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [...state.messages, response] };
  })
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue, {
    agent: "agent",
    [END]: END,
  })
  .compile();
If you are using tool calling, make sure your tool results are actually appended to state. A missing tool result can make the LLM keep asking for the same tool forever.
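For reference, here is a minimal sketch of a tools node that answers every requested tool call before control returns to the agent. The searchTool call and its query argument are placeholders for whatever tools you actually expose:

import { ToolMessage } from "@langchain/core/messages";

// Sketch: a "tools" node that answers every tool call the model made.
// If any tool_call_id goes unanswered, the model will often re-request
// the same tool on the next loop. `searchTool` is a placeholder here.
const toolsNode = async (state: AgentState) => {
  const lastMessage = state.messages[state.messages.length - 1];
  const results = await Promise.all(
    (lastMessage.tool_calls ?? []).map(async (call: any) => {
      const output = await searchTool(call.args.query);
      return new ToolMessage({
        content: JSON.stringify(output),
        tool_call_id: call.id, // ties the result back to the request
      });
    })
  );
  return { messages: [...state.messages, ...results] };
};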
Other Possible Causes
1. Your API route times out before LangGraph finishes
This is common in Next.js route handlers and serverless environments.
export const maxDuration = 5; // Vercel / Next.js may cut off long requests
export async function POST(req: Request) {
const result = await graph.invoke(await req.json());
return Response.json(result);
}
Fix by increasing duration where supported:
export const maxDuration = 30;
Or move long-running graph execution to a background worker.
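Another option, if your platform supports streaming responses, is to stream progress so the request is never idle. Here is a minimal sketch in a Next.js route handler, assuming the compiled graph from earlier; the NDJSON encoding is just one illustrative choice:

// Sketch: stream each node's update to the client as it completes,
// instead of holding the request open on one long graph.invoke().
export async function POST(req: Request) {
  const input = await req.json();
  const encoder = new TextEncoder();

  const body = new ReadableStream({
    async start(controller) {
      // graph.stream() yields a chunk after each node finishes
      for await (const chunk of await graph.stream(input)) {
        controller.enqueue(encoder.encode(JSON.stringify(chunk) + "\n"));
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: { "Content-Type": "application/x-ndjson" },
  });
}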
2. A tool call is hanging
If one tool waits on an external service with no timeout, the whole graph stalls.
const searchTool = async (query: string) => {
const res = await fetch(`https://api.example.com/search?q=${query}`);
return res.json();
};
Add an explicit timeout:
const searchTool = async (query: string) => {
  const controller = new AbortController();
  // Abort the request if it takes longer than 8 seconds
  const timer = setTimeout(() => controller.abort(), 8000);
  try {
    const res = await fetch(
      `https://api.example.com/search?q=${encodeURIComponent(query)}`,
      { signal: controller.signal }
    );
    return await res.json();
  } finally {
    clearTimeout(timer);
  }
};
3. The LLM request itself is too slow
Large prompts, huge message histories, or slow providers can trigger timeouts.
const model = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});
Trim state before each call:
const recentMessages = state.messages.slice(-10);
const response = await model.invoke(recentMessages);
Also set provider-level timeouts if supported by your SDK.
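For example, @langchain/openai accepts a timeout and maxRetries on the client; treat the exact fields as version-dependent and check your installed release:

import { ChatOpenAI } from "@langchain/openai";

// Sketch: fail fast on slow provider calls instead of hanging a node.
// `timeout` (ms) and `maxRetries` are constructor options in recent
// @langchain/openai versions; verify against your installed release.
const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
  timeout: 15_000, // abort the underlying HTTP request after 15s
  maxRetries: 1,   // avoid long retry cascades inside a single node
});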
4. You have recursive graph behavior without a step limit
LangGraph keeps executing until the run reaches END, so bad routing logic becomes a loop that only stops when a step limit or a runtime timeout kills it.
const workflow = new StateGraph<MyState>({ channels })
  .addNode("a", nodeA)
  .addNode("b", nodeB);

Add a hard stop in your state:

type MyState = {
  messages: any[];
  steps: number; // incremented by each node that runs
};

// In your conditional edge function:
if (state.steps >= 5) return END;
If you see errors like Exceeded maximum number of steps, this is the one to inspect first.
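LangGraph also has its own built-in step cap: the recursionLimit config option (default 25 in current releases). Tightening it per run makes a runaway loop fail fast with a recursion error instead of eating your whole request window:

// Sketch: cap total graph steps at invocation time as a safety net,
// on top of any step counter you keep in state.
const result = await graph.invoke(input, { recursionLimit: 10 });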
How to Debug It
1. Log every node boundary

- Add logs at entry and exit of each node.
- You want to know exactly which node starts hanging.

.addNode("agent", async (state) => {
  console.log("[agent] start");
  const result = await model.invoke(state.messages);
  console.log("[agent] end");
  return { messages: [...state.messages, result] };
})

2. Check whether the loop terminates

- Inspect your conditional edge function.
- Verify it can return END under real input conditions.

3. Isolate external calls

- Temporarily replace tools and model calls with mocked responses.
- If the timeout disappears, the bottleneck is outside LangGraph.

4. Measure execution time per step

- Wrap tools and nodes with timing logs.
- Find the slowest hop instead of guessing.

const start = Date.now();
const result = await someTool(input);
console.log(`tool took ${Date.now() - start}ms`);
Prevention
- Add explicit termination conditions for every looped edge path.
- Put timeouts on every external dependency (see the helper sketch after this list):
  - LLM calls
  - HTTP fetches
  - database queries
- Keep graph state small:
  - trim message history
  - avoid passing large documents through every node
- Use step counters in production graphs so runaway recursion fails fast instead of burning request time.
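A small wrapper makes the "timeout on every dependency" rule easy to apply uniformly. This is a generic sketch, not LangGraph-specific:

// Sketch: generic timeout guard for any external call. Note that
// Promise.race rejects but does not cancel the underlying work;
// pair it with AbortController where the API supports one.
const withTimeout = <T>(promise: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
    ),
  ]);

// Usage (hypothetical database call):
// const rows = await withTimeout(db.query(sql), 5000);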
If you are seeing a timeout error during development in TypeScript LangGraph code specifically, start with your routing logic. In practice, that’s where most of these failures come from.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit