How to Fix 'connection timeout' in CrewAI (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

When you see connection timeout in CrewAI TypeScript, it usually means the agent tried to call a remote model or service and never got a response before the client gave up. In practice, this shows up during crew.kickoff(), tool calls, or when the underlying provider endpoint is slow, blocked, or misconfigured.

Most of the time this is not a CrewAI bug. It’s usually an API client timeout, a bad base URL, network egress issues, or a model/provider mismatch.

The Most Common Cause

The #1 cause is using the wrong provider configuration for your LLM or passing an endpoint that looks valid but never responds in time.

In CrewAI TypeScript, this often happens when you instantiate LLM with a custom baseUrl, but the URL points to the wrong path, wrong port, or a proxy that is not forwarding requests correctly.

Broken vs fixed

Broken pattern: sends requests to a dead or incorrect endpoint and hangs until connection timeout.
Fixed pattern: uses the correct provider URL and sane timeout settings, and returns responses normally.
// Broken
import { Agent, Task, Crew, LLM } from "crewai";

const llm = new LLM({
  model: "gpt-4o-mini",
  baseUrl: "http://localhost:11434/v1/chat/completions", // wrong: points to a route, not the API base
  apiKey: "dummy",
  timeout: 5000,
});

const agent = new Agent({
  name: "SupportAgent",
  role: "Support engineer",
  goal: "Answer customer questions",
  llm,
});

const task = new Task({
  description: "Summarize the issue",
  agent,
});

const crew = new Crew({
  agents: [agent],
  tasks: [task],
});

await crew.kickoff();

// Fixed
import { Agent, Task, Crew, LLM } from "crewai";

const llm = new LLM({
  model: "gpt-4o-mini",
  baseUrl: "http://localhost:11434/v1", // correct base URL for OpenAI-compatible servers
  apiKey: process.env.OPENAI_API_KEY ?? "dummy",
  timeout: 30000,
});

const agent = new Agent({
  name: "SupportAgent",
  role: "Support engineer",
  goal: "Answer customer questions",
  llm,
});

const task = new Task({
  description: "Summarize the issue",
  agent,
});

const crew = new Crew({
  agents: [agent],
  tasks: [task],
});

await crew.kickoff();

If you are using OpenAI-compatible local servers like Ollama, vLLM, LM Studio, or LiteLLM, make sure baseUrl points to the API root, not /chat/completions.
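One way to guard against this mistake is to normalize the URL before handing it to LLM. Here is a minimal sketch (a hypothetical helper, not part of CrewAI) that strips a pasted chat-completions route back down to the API root:

```typescript
// Sketch: normalize a pasted endpoint down to the OpenAI-compatible API root.
// Strips a trailing /chat/completions segment and any trailing slashes.
function normalizeBaseUrl(url: string): string {
  return url
    .replace(/\/chat\/completions\/?$/, "")
    .replace(/\/+$/, "");
}
```

For example, `normalizeBaseUrl("http://localhost:11434/v1/chat/completions")` yields `"http://localhost:11434/v1"`, which is the form the client expects.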

Other Possible Causes

1. Network egress blocked by firewall or VPC rules

If your app runs inside Docker, Kubernetes, AWS Lambda, Azure Functions, or a locked-down corporate network, outbound traffic may be blocked.

// Example symptom
const llm = new LLM({
  model: "gpt-4o-mini",
  baseUrl: "https://api.openai.com/v1",
  apiKey: process.env.OPENAI_API_KEY!,
});

If DNS works but packets never leave the environment, you’ll get timeouts instead of clean auth errors.
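A quick way to tell egress problems apart from auth problems is to probe the endpoint directly, outside CrewAI. The sketch below (a hypothetical helper; it assumes an OpenAI-compatible `/models` route and Node 18+ with global `fetch`) classifies what happened:

```typescript
// Sketch: probe an endpoint so a hung connection is distinguishable from a
// clean HTTP error. A 401/403 means packets ARE leaving your environment,
// so the problem is auth, not egress.
async function probeEndpoint(baseUrl: string, timeoutMs = 5000): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(`${baseUrl}/models`, { signal: controller.signal });
    return res.ok ? "ok" : `reachable (HTTP ${res.status})`;
  } catch (err) {
    // An abort means the request never completed: suspect firewall/VPC rules.
    return (err as Error).name === "AbortError" ? "timeout" : "network-error";
  } finally {
    clearTimeout(timer);
  }
}
```

If this returns "timeout" from inside your container or function but works from your laptop, the environment's outbound rules are the culprit.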

2. Wrong API key or missing headers behind a proxy

Some proxies return no useful response when auth is missing or malformed.

const llm = new LLM({
  model: "gpt-4o-mini",
  baseUrl: "https://your-proxy.company.internal/v1",
  apiKey: "", // broken
});

Fix it by confirming the proxy expects Authorization: Bearer ... and that your runtime actually loads env vars.
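Failing fast at startup beats a silent timeout later. A small sketch (hypothetical helper, not a CrewAI API) that verifies the key your runtime actually loaded:

```typescript
// Sketch: validate the API key before wiring it into LLM, so a missing .env
// surfaces as a clear startup error instead of a proxy timeout.
function requireApiKey(env: Record<string, string | undefined> = process.env): string {
  const key = env.OPENAI_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error("OPENAI_API_KEY is missing or empty; check .env loading and shell environment");
  }
  return key;
}
```

Call `requireApiKey()` once at boot and pass the result to the LLM constructor; if your proxy expects a different header scheme, confirm that with its docs rather than assuming `Authorization: Bearer`.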

3. Model name not supported by the endpoint

A provider can accept your connection but stall when asked for an unsupported model.

const llm = new LLM({
  model: "claude-3-5-sonnet", // broken if the backend does not actually serve this model
  baseUrl: "http://localhost:11434/v1",
});

Use a model name that your backend actually exposes. Check /v1/models if your server supports it.
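A short sketch of that check (hypothetical helper; assumes the server implements the OpenAI-style `/models` response shape and Node 18+ `fetch`):

```typescript
// Sketch: list the model IDs the endpoint actually exposes, so you can pass
// a name the backend recognizes instead of one that stalls.
async function listModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/models`);
  if (!res.ok) throw new Error(`GET /models failed: HTTP ${res.status}`);
  const body = (await res.json()) as { data?: Array<{ id: string }> };
  return (body.data ?? []).map((m) => m.id);
}
```

If the model name you configured is not in this list, fix the name (or pull/deploy the model) before blaming the network.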

4. Tool execution hanging before the model call completes

Sometimes the timeout is not from the LLM itself. A tool may block forever and make CrewAI look stuck.

const slowTool = async () => {
  return await fetch("http://internal-service.local/report"); // no timeout set
};

Wrap external calls with explicit timeouts:

const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 8000);

const res = await fetch("http://internal-service.local/report", {
  signal: controller.signal,
});
clearTimeout(timer); // avoid a dangling timer once the request completes

How to Debug It

  1. Test the endpoint outside CrewAI

    • Use curl or Postman against the same baseUrl.
    • If this fails or hangs, CrewAI is not the root problem.
  2. Log the exact LLM config

    • Print model, baseUrl, and whether apiKey is present.
    • Most bad setups are obvious once you inspect runtime values.
  3. Reduce to one agent and one task

    • Remove tools.
    • Remove multi-agent orchestration.
    • If crew.kickoff() works in minimal form, one of your tools or prompts is causing delay.
  4. Increase visibility around timeouts

    • Set longer timeouts temporarily.
    • Add request logging at your proxy or gateway.
    • Look for errors like:
      • Error: Connection timeout
      • Request timed out after XXXXXms
      • FetchError
      • ECONNRESET
      • ETIMEDOUT
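For step 2, a tiny sketch of safe config logging (hypothetical helper; it masks the key so you can paste the output into a bug report):

```typescript
// Sketch: describe the LLM config you are ACTUALLY running with, without
// leaking the API key. Most bad setups are obvious from this one line.
function describeLlmConfig(cfg: { model: string; baseUrl?: string; apiKey?: string }): string {
  const keyState = cfg.apiKey && cfg.apiKey.length > 0 ? `set (${cfg.apiKey.length} chars)` : "MISSING";
  return `model=${cfg.model} baseUrl=${cfg.baseUrl ?? "(default)"} apiKey=${keyState}`;
}
```

Log this right before constructing the LLM; a "MISSING" key or an unexpected baseUrl usually ends the investigation on the spot.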

Prevention

  • Use a known-good OpenAI-compatible endpoint first before adding proxies or local inference servers.
  • Set explicit timeouts on both your CrewAI LLM and any custom tools that call external services.
  • Keep a small smoke test that runs crew.kickoff() against one trivial task during CI so broken endpoints fail early.

If you’re seeing connection timeout repeatedly in CrewAI TypeScript, start with baseUrl, then check network access, then inspect tools. In most cases the fix is in configuration, not in agent logic.



By Cyprian Aarons, AI Consultant at Topiax.
