How to Fix 'connection timeout' in LangGraph (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

What connection timeout means in LangGraph

In LangGraph TypeScript, connection timeout usually means your graph tried to call an external service and never got a response before the socket deadline expired. In practice, this shows up when your model provider, vector DB, tool server, or internal API is slow, unreachable, or configured with an unrealistically low timeout.

You’ll usually hit it during graph.invoke(), graph.stream(), or inside a node that calls fetch(), OpenAI, Anthropic, Postgres, Redis, or another network dependency.

The Most Common Cause

The #1 cause is a node that makes a network call without proper timeout handling or with the wrong client configuration. In LangGraph, the graph runtime is fine; the request inside your node is what hangs until the underlying HTTP client throws something like:

  • Error: connection timeout
  • ETIMEDOUT
  • UND_ERR_CONNECT_TIMEOUT
  • Request timed out after 30000ms

Broken vs. Fixed Pattern

  Broken                                 Fixed
  Creates a new client per invocation    Reuses a configured client
  No explicit timeout                    Explicit timeout + retry policy
  Hangs inside node                      Fails fast with useful error
// BROKEN: node can hang on external call
import { StateGraph } from "@langchain/langgraph";

type State = { input: string; result?: string };

const graph = new StateGraph<State>()
  .addNode("callApi", async (state) => {
    const res = await fetch("https://api.example.com/process", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ input: state.input }),
    });

    const data = await res.json();
    return { result: data.output };
  })
  .addEdge("__start__", "callApi")
  .addEdge("callApi", "__end__")
  .compile();

// FIXED: explicit timeout + abort + reusable client pattern
import { StateGraph } from "@langchain/langgraph";

type State = { input: string; result?: string };

const API_TIMEOUT_MS = 10_000;

async function postWithTimeout(url: string, body: unknown) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), API_TIMEOUT_MS);

  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
      signal: controller.signal,
    });

    if (!res.ok) {
      throw new Error(`HTTP ${res.status} from ${url}`);
    }

    return await res.json();
  } finally {
    clearTimeout(timer);
  }
}

const graph = new StateGraph<State>()
  .addNode("callApi", async (state) => {
    const data = await postWithTimeout("https://api.example.com/process", {
      input: state.input,
    });

    return { result: data.output };
  })
  .addEdge("__start__", "callApi")
  .addEdge("callApi", "__end__")
  .compile();

If you’re using SDKs like OpenAI or Anthropic inside LangGraph nodes, apply the same rule: configure timeout once at client construction instead of relying on defaults.
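The reuse pattern can be sketched as a module-level singleton. `ApiClient` below is a stand-in for a real SDK client (the OpenAI and Anthropic Node SDKs accept similar timeout and retry options at construction); the names are illustrative, not a real API:

```typescript
// Sketch: configure timeout/retries once at module load and hand every
// node the same instance, instead of constructing a client per invocation.
interface ClientConfig {
  timeout: number;     // per-request deadline in ms
  maxRetries: number;  // retries on transient failures
}

class ApiClient {
  constructor(readonly config: ClientConfig) {}
}

// Module-level singleton: constructed once, reused by every graph run.
let client: ApiClient | undefined;

function getClient(): ApiClient {
  client ??= new ApiClient({ timeout: 15_000, maxRetries: 2 });
  return client;
}
```

Every node that needs the dependency calls `getClient()` rather than `new ApiClient(...)`, so timeout configuration lives in exactly one place.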

Other Possible Causes

1. DNS or network egress blocked

If you’re running in Docker, Kubernetes, Vercel, Lambda, or a locked-down corporate VPC, the app may not be able to reach the host at all.

// Symptom: ETIMEDOUT / ECONNREFUSED / connection timeout
await fetch("https://api.openai.com/v1/models");

Check:

  • outbound firewall rules
  • proxy settings
  • private subnet NAT
  • DNS resolution in the runtime

2. Model provider latency is too high

A slow LLM request can trigger timeouts even when connectivity is fine.

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  timeout: 15_000,
});

If your prompts are large or you’re doing multi-step tool calls, increase timeout and reduce token load.
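One way to catch oversized prompts before they hit the provider is a rough token-budget guard. The 4-characters-per-token ratio below is a heuristic, not the provider's real tokenizer, so treat it only as an early warning:

```typescript
// Rough token-budget guard before sending a prompt. The ratio is an
// approximation; use a real tokenizer if you need exact counts.
const APPROX_CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / APPROX_CHARS_PER_TOKEN);
}

function truncateToTokenBudget(text: string, maxTokens: number): string {
  const maxChars = maxTokens * APPROX_CHARS_PER_TOKEN;
  return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```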

3. Too many concurrent graph executions

If you fan out aggressively with Send/parallel nodes, you can saturate sockets and hit timeouts under load.

// Example symptom in logs:
// Error [ConnectTimeoutError]: Connect Timeout Error
// while executing multiple nodes concurrently

const maxConcurrency = 4;

Throttle execution at the app layer or reduce parallelism in your workflow.
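App-layer throttling can be as small as a promise semaphore. This is a sketch (the `createLimiter`/`withLimit` names are illustrative, not a LangGraph API) that caps how many outbound calls run at once:

```typescript
// Minimal promise semaphore: at most `maxConcurrency` tasks run at once;
// the rest wait in a FIFO queue instead of opening more sockets.
function createLimiter(maxConcurrency: number) {
  let active = 0;
  const queue: Array<() => void> = [];

  const acquire = (): Promise<void> =>
    new Promise((resolve) => {
      if (active < maxConcurrency) {
        active++;
        resolve();
      } else {
        queue.push(() => {
          active++;
          resolve();
        });
      }
    });

  const release = () => {
    active--;
    const next = queue.shift();
    if (next) next();
  };

  // Wrap any async task; excess callers queue until a slot frees up.
  return async function withLimit<T>(task: () => Promise<T>): Promise<T> {
    await acquire();
    try {
      return await task();
    } finally {
      release();
    }
  };
}
```

Wrap each node's outbound call with the returned `withLimit` so fan-out stays within your socket budget.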

4. Tool server endpoint is slow or misconfigured

A common LangGraph pattern is calling an internal tool API from a node. If that service has no health checks or slow cold starts, your graph pays for it.

const TOOL_URL = process.env.TOOL_URL;
// If TOOL_URL points to localhost in production, you'll get timeouts.

Verify:

  • correct environment variable values
  • service health endpoint responds quickly
  • container/service is actually listening on the expected port
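These checks can run once at startup so a bad value fails fast instead of timing out inside a node later. A sketch (`assertToolUrl` is an illustrative helper; adapt the checks to your deployment):

```typescript
// Validates TOOL_URL at startup: rejects missing/malformed values and
// localhost targets in production, which would otherwise surface later
// as connection timeouts inside graph nodes.
function assertToolUrl(url: string | undefined, nodeEnv: string): URL {
  if (!url) {
    throw new Error("TOOL_URL is not set");
  }
  const parsed = new URL(url); // throws on malformed URLs
  const isLocal =
    parsed.hostname === "localhost" || parsed.hostname === "127.0.0.1";
  if (nodeEnv === "production" && isLocal) {
    throw new Error(`TOOL_URL points at ${parsed.hostname} in production`);
  }
  return parsed;
}
```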

How to Debug It

  1. Find the exact failing node

    • Wrap each node with logging.
    • Print start/end timestamps and the dependency being called.
    • The failing step is usually not LangGraph itself.
  2. Inspect the real error class

    • Look for ETIMEDOUT, UND_ERR_CONNECT_TIMEOUT, AbortError, or Request timed out.
    • A plain connection timeout often hides a lower-level networking failure.
  3. Reproduce outside LangGraph

    • Call the same URL from a standalone script.
    • If it fails there too, this is network/client config, not graph logic.
try {
  const res = await fetch("https://api.example.com/process");
  console.log("Direct request status:", res.status);
} catch (err) {
  console.error("Direct request failed:", err);
}
  4. Reduce concurrency and add timeouts
    • Run one node at a time.
    • Set explicit timeouts on every outbound client.
    • If errors disappear, you were saturating connections or waiting too long.
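Step 1 above (wrapping each node with logging) can be sketched as a small higher-order function. `withTiming` is an illustrative name, not a LangGraph API:

```typescript
// Wraps a node function with start/end logs and a duration, so the node
// that hangs on an external dependency identifies itself in the logs.
type NodeFn<S> = (state: S) => Promise<Partial<S>>;

function withTiming<S>(name: string, fn: NodeFn<S>): NodeFn<S> {
  return async (state) => {
    const start = Date.now();
    console.log(`[${name}] start ${new Date(start).toISOString()}`);
    try {
      return await fn(state);
    } finally {
      console.log(`[${name}] end after ${Date.now() - start}ms`);
    }
  };
}
```

Usage is a one-line change at registration, e.g. `.addNode("callApi", withTiming("callApi", callApiNode))`.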

Prevention

  • Set explicit timeouts on every external call used inside LangGraph nodes.
  • Reuse configured SDK clients instead of constructing them per invocation.
  • Add structured logs around each node so you can see which dependency timed out first.
  • Test your graph in the same runtime as production if you deploy to serverless or containers.

If you’re seeing connection timeout in LangGraph TypeScript, assume one thing first: some node is waiting on an external dependency longer than your runtime allows. Fix the network call path before you touch graph logic.


By Cyprian Aarons, AI Consultant at Topiax.