# How to Fix 'timeout error' in LangChain (TypeScript)
## What the timeout error means
A timeout error in LangChain usually means one of two things: your model call took too long to complete, or your app waited too long for a network request, tool call, or streaming response. In TypeScript projects, this often shows up when a `Runnable`, a `ChatOpenAI` call, or an agent chain is waiting on an upstream API that never returns fast enough.
You’ll typically hit it during long prompts, slow tools, bad network conditions, or when your timeout settings are too aggressive for the workload.
## The Most Common Cause
The #1 cause is a mismatch between your request timeout and the actual latency of the model or tool. In LangChain TypeScript, people often set a low timeout on `ChatOpenAI` or on the underlying fetch client, then run a prompt that takes longer than expected.
Here’s the broken pattern and the fix:

Broken:

```ts
import { ChatOpenAI } from "@langchain/openai";

// 5 seconds is too tight for long completions.
const llm = new ChatOpenAI({ model: "gpt-4o-mini", timeout: 5000 });

const res = await llm.invoke("Summarize this 20-page contract...");
console.log(res.content);
```

Fixed:

```ts
import { ChatOpenAI } from "@langchain/openai";

// 30 seconds gives long prompts and slower providers room to finish.
const llm = new ChatOpenAI({ model: "gpt-4o-mini", timeout: 30000 });

const res = await llm.invoke("Summarize this 20-page contract...");
console.log(res.content);
```
That `timeout: 5000` is fine for short prompts, but it’s too small for longer completions, tool calls, or slower providers. If you’re using agents, add even more headroom because each step can trigger multiple LLM calls.
If you want to make it explicit at the request level:
```ts
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  timeout: 30000, // client-level timeout
});

const result = await llm.invoke(
  [{ role: "user", content: "Analyze this policy document." }],
  {
    // Request-level deadline; aborts this specific call after 30s.
    signal: AbortSignal.timeout(30000),
  }
);
```
That keeps your app-level behavior aligned with your model client settings.
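If callers can also cancel a request (say, a user closes the tab), you can merge their abort signal with your deadline. A minimal sketch, assuming Node 20.3+ where `AbortSignal.any` is available:

```ts
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", timeout: 30000 });

async function invokeWithDeadline(prompt: string, callerSignal: AbortSignal) {
  // Whichever fires first, caller cancellation or the 30s deadline,
  // aborts the request.
  const signal = AbortSignal.any([callerSignal, AbortSignal.timeout(30000)]);
  return llm.invoke(prompt, { signal });
}
```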
## Other Possible Causes
### 1. Tool calls are hanging
If you’re using an agent with tools, the timeout may be caused by a tool that never resolves. LangChain will surface this as a generic request failure or a chain timeout while the real issue is inside your custom tool.
```ts
import { DynamicTool } from "@langchain/core/tools";

const tools = [
  new DynamicTool({
    name: "slow_db_lookup",
    description: "Fetch customer data",
    func: async () => {
      await new Promise(() => {}); // never resolves, so the agent hangs here
      return "ok";
    },
  }),
];
```
Fix it by enforcing timeouts inside the tool:
```ts
// Reject if the wrapped promise takes longer than `ms` milliseconds.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("Tool timed out")), ms)
    ),
  ]);
}
```
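Then apply it when you define the tool, so a slow dependency fails fast instead of stalling the whole agent. A sketch, where `fetchCustomerData` is a hypothetical stand-in for your real lookup:

```ts
import { DynamicTool } from "@langchain/core/tools";

// Hypothetical slow dependency -- replace with your real lookup.
declare function fetchCustomerData(id: string): Promise<string>;

const safeDbLookup = new DynamicTool({
  name: "slow_db_lookup",
  description: "Fetch customer data",
  // Fails with "Tool timed out" after 10s instead of hanging forever.
  func: async (input: string) => withTimeout(fetchCustomerData(input), 10000),
});
```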
### 2. The provider is slow or rate-limited
Sometimes the error is not local at all. OpenAI-compatible providers, proxies, and gateways can respond slowly under load, especially if you’re hitting rate limits.
Typical symptoms include:

- Requests timing out only in production
- Intermittent failures on large prompts
- Errors that pile up after retries
Check whether you’re also seeing:

- `429 Too Many Requests`
- `ETIMEDOUT`
- `fetch failed`
- provider-specific gateway errors
A safer config:
```ts
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  maxRetries: 2, // retry transient failures before surfacing an error
  timeout: 30000,
});
```
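If you need to tell rate limiting apart from a genuine timeout before deciding how to retry, inspect the error in a catch block. A sketch only; error shapes vary by provider, so the `status` and `name` fields here are assumptions, not guaranteed API:

```ts
try {
  const res = await llm.invoke("Summarize the quarterly report.");
  console.log(res.content);
} catch (err: any) {
  if (err?.status === 429) {
    // Rate limited: back off instead of immediately retrying.
    console.error("429 Too Many Requests -- slow down or raise limits.");
  } else if (err?.name === "AbortError" || err?.name === "TimeoutError") {
    // The client-side deadline fired before the provider answered.
    console.error("Request timed out -- consider more headroom.");
  } else {
    throw err;
  }
}
```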
### 3. Streaming response handling is blocked
If you’re streaming tokens but not consuming them correctly, your app can look like it timed out even though the model started responding. This happens when the event loop is busy or the stream handler throws.
Broken:

```ts
const stream = await llm.stream("Write a short summary.");

for await (const chunk of stream) {
  // forgot to process chunks properly
}
```
Fixed:

```ts
const stream = await llm.stream("Write a short summary.");

for await (const chunk of stream) {
  // Write each token as it arrives so the stream keeps draining.
  process.stdout.write(String(chunk.content ?? ""));
}
```
If you’re in a server route, make sure nothing else blocks the event loop while streaming.
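For example, with Node's built-in `http` server you can flush each chunk to the client as it arrives instead of buffering the whole completion. A minimal sketch, assuming the same `ChatOpenAI` client as above:

```ts
import { createServer } from "node:http";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", timeout: 30000 });

createServer(async (_req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain; charset=utf-8" });

  const stream = await llm.stream("Write a short summary.");
  for await (const chunk of stream) {
    // Forward each token immediately so neither side waits on a full buffer.
    res.write(String(chunk.content ?? ""));
  }
  res.end();
}).listen(3000);
```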
### 4. Your prompt got too large
Large context windows still take time to process. If you pass full documents into a chain without trimming or chunking them first, latency climbs fast.
Bad pattern:

```ts
await chain.invoke({
  input: hugeContractText,
});
```
Better pattern:

```ts
await chain.invoke({
  // Shrink the input first; `summarizeChunks` is sketched below.
  input: await summarizeChunks(hugeContractText),
});
```
Use chunking, retrieval, or pre-summarization before calling the model.
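One possible implementation of the `summarizeChunks` helper above, using `RecursiveCharacterTextSplitter` from `@langchain/textsplitters`; the chunk sizes are illustrative, not tuned values:

```ts
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", timeout: 30000 });

async function summarizeChunks(text: string): Promise<string> {
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 4000,   // characters per chunk -- illustrative
    chunkOverlap: 200, // keep some context across boundaries
  });
  const chunks = await splitter.splitText(text);

  // Several short, bounded calls instead of one huge one.
  const summaries: string[] = [];
  for (const chunk of chunks) {
    const res = await llm.invoke(`Summarize briefly:\n\n${chunk}`);
    summaries.push(String(res.content));
  }
  return summaries.join("\n");
}
```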
## How to Debug It
- Check where the timeout fires
  - Is it in `llm.invoke()`, an agent step, a tool call, or HTTP middleware?
  - Add logs before and after each step so you know which line stalls.
- Inspect the exact error text
  - Look for `AbortError`, `ETIMEDOUT`, `Request timed out`, or provider-specific messages.
  - A LangChain wrapper error often hides the real root cause underneath.
- Disable tools and test plain LLM calls
  - Run a direct `ChatOpenAI.invoke()` with a tiny prompt (see the sketch after this list).
  - If that works, your problem is probably in tools, retries, or prompt size.
- Increase the timeout gradually
  - Try `5000 -> 15000 -> 30000`.
  - If higher values fix it consistently, you’ve confirmed latency rather than a code failure.
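To isolate the problem, a tiny standalone script like this is enough; if it completes quickly, the latency lives in your tools or prompts, not the client:

```ts
import { ChatOpenAI } from "@langchain/openai";

// Plain model call with no tools, agents, or chains in the way.
const llm = new ChatOpenAI({ model: "gpt-4o-mini", timeout: 15000 });

async function main() {
  const start = Date.now();
  const res = await llm.invoke("Say hello.");
  console.log(`Completed in ${Date.now() - start}ms:`, res.content);
}

main().catch(console.error);
```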
## Prevention
- Set timeouts based on workload:
  - Short Q&A chains can use smaller limits.
  - Agents and document workflows need more headroom.
- Put timeouts around every external dependency:
  - LLM calls
  - database lookups
  - HTTP tools
  - file reads from remote storage
- Log request duration per chain step (a sketch follows this list):
  - You want to know whether latency comes from prompt size, tool execution, or provider response time before users do.
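For the logging point, a small timing wrapper goes a long way. A minimal sketch; the `retriever` and `llm` names in the usage comments are placeholders for your own chain steps:

```ts
// Wrap any async step and log how long it took, on success or failure.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`[${label}] took ${Date.now() - start}ms`);
  }
}

// Usage: wrap each external dependency in the chain, e.g.
//   const docs = await timed("retrieval", () => retriever.invoke(query));
//   const answer = await timed("llm", () => llm.invoke(prompt));
```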
If you keep seeing the timeout error in LangChain TypeScript after fixing client timeouts, assume one of your tools or upstream services is blocking. In production systems, that’s usually where the real bug lives.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.