How to Fix 'chain execution stuck' in LangChain (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

If you’re seeing “chain execution stuck” in a LangChain TypeScript app, it usually means the chain started but never completed: one step never resolved, timed out, or is waiting on a callback that never returns. In practice, this shows up most often when you mix async code with Runnable, AgentExecutor, or custom tools and an async function never resolves, or resolves without returning a value.

The error is rarely about LangChain itself being “broken.” It’s usually a deadlock in your own code path: an unresolved promise, a tool that never finishes, or a callback handler that blocks the chain from continuing.

The Most Common Cause

The #1 cause is an async function that does not return what LangChain expects. In TypeScript, this usually happens inside a custom tool, retriever, or runnable step where you call async work but forget to return the result.

Here’s the broken pattern:

Broken:

```ts
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const badTool = new DynamicStructuredTool({
  name: "lookup_policy",
  description: "Fetch policy details",
  schema: z.object({ policyId: z.string() }),
  func: async ({ policyId }) => {
    const res = await fetch(`https://api.example.com/policies/${policyId}`);
    const data = await res.json();
    console.log(data); // nothing returned
  },
});
```

Fixed — return the result explicitly:

```ts
const goodTool = new DynamicStructuredTool({
  name: "lookup_policy",
  description: "Fetch policy details",
  schema: z.object({ policyId: z.string() }),
  func: async ({ policyId }) => {
    const res = await fetch(`https://api.example.com/policies/${policyId}`);
    const data = await res.json();
    return JSON.stringify(data);
  },
});
```


When the tool returns `undefined`, downstream steps can hang or fail with messages like:

- `Error: chain execution stuck`
- `Error: Tool invocation did not return a value`
- `UnhandledPromiseRejectionWarning`
- `TypeError: Cannot read properties of undefined`

The same issue shows up in custom chains and runnables:

```ts
import { RunnableLambda } from "@langchain/core/runnables";

const badStep = RunnableLambda.from(async (input: string) => {
  await someAsyncWork(input);
  // missing return
});

const goodStep = RunnableLambda.from(async (input: string) => {
  await someAsyncWork(input);
  return `processed:${input}`;
});
```

If your runnable doesn’t resolve to a usable value, the chain can appear stuck because the next node never gets valid input.
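If you want that failure to be loud instead of silent, you can assert on each step’s output before handing it to the next node. A minimal sketch (`requireValue` is a helper name chosen here for illustration, not a LangChain API):

```ts
// Guard a step's output so an accidental `undefined` fails loudly
// instead of silently stalling the next node in the chain.
function requireValue<T>(value: T | undefined | null, step: string): T {
  if (value === undefined || value === null) {
    throw new Error(`Step "${step}" produced no output`);
  }
  return value;
}
```

Wrapping each step’s return in `requireValue(result, "step_name")` turns a silent stall into an immediate, named error.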

Other Possible Causes

1. A tool or API call never resolves

A `fetch` without a timeout is a classic dead end.

```ts
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10_000);

const res = await fetch(url, { signal: controller.signal });
clearTimeout(timeout);
```

Without an abort path, your AgentExecutor may sit forever waiting for the tool result.
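The same idea generalizes beyond `fetch`: race any promise against a timer so a hung call rejects instead of stalling the executor. A minimal sketch, with `withTimeout` as an illustrative helper name rather than a LangChain API:

```ts
// Race real work against a timer so a hung call rejects with a clear
// error instead of hanging the chain forever.
function withTimeout<T>(
  work: Promise<T>,
  ms: number,
  label = "operation"
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}
```

Inside a tool body this becomes e.g. `return withTimeout(fetchPolicy(policyId), 10_000, "lookup_policy")`, so a dead upstream service surfaces as a tool error the agent can handle.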

2. Callback handlers are blocking execution

Custom callbacks can stall the chain if they throw or wait on slow I/O.

```ts
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";

const callbacks = [new ConsoleCallbackHandler()];
```

Bad pattern:

```ts
handleLLMEnd() {
  while (true) {} // blocks the event loop
}
```

Keep handlers non-blocking and never do sync loops or heavy work inside them.
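One way to keep a handler non-blocking is to hand slow work off to a fire-and-forget promise and swallow its errors, so the chain never waits on logging. A sketch, where `shipToLogService` is a hypothetical slow I/O call:

```ts
// Hypothetical slow I/O call (e.g. shipping traces to a log backend).
async function shipToLogService(payload: unknown): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 50)); // simulate slow I/O
}

const handler = {
  handleLLMEnd(output: unknown): void {
    // Start the slow work without awaiting it, and swallow errors so a
    // failed log write can never stall or crash the chain.
    void shipToLogService(output).catch(() => {});
  },
};
```

The handler returns immediately; the log write finishes (or fails) on its own, off the chain’s critical path.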

3. Infinite agent loops

An agent can keep calling tools if your prompt or tool output encourages repeated actions.

```ts
const executor = new AgentExecutor({
  agent,
  tools,
  maxIterations: 5,
});
```

If maxIterations is unset or too high and the prompt encourages repeated actions, a looping agent can look exactly like a stuck chain.
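To confirm looping rather than hanging, you can count repeated (tool, input) pairs yourself, independent of LangChain’s own limits. A rough sketch with illustrative names:

```ts
// Detect a looping agent by counting identical (tool, input) calls and
// bailing out once a threshold is exceeded.
function makeLoopGuard(maxRepeats = 3) {
  const seen = new Map<string, number>();
  return (tool: string, input: string): void => {
    const key = `${tool}:${input}`;
    const count = (seen.get(key) ?? 0) + 1;
    seen.set(key, count);
    if (count > maxRepeats) {
      throw new Error(`Loop detected: ${key} called ${count} times`);
    }
  };
}
```

Calling the guard at the top of each tool’s `func` turns an invisible loop into an explicit error with the offending tool and input in the message.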

4. Returning non-serializable objects

Some chains expect strings or structured JSON-compatible values. Returning raw class instances can break downstream serialization.

```ts
// Bad
return response;

// Good
return JSON.stringify({
  status: response.status,
  body: await response.text(),
});
```

This matters especially with ConversationalRetrievalQAChain, RetrievalQAChain, and custom LCEL pipelines where outputs are passed between nodes.
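A small normalizer at each boundary avoids this class of bug; `toChainOutput` below is a helper name chosen for illustration:

```ts
// Normalize any step output to a JSON-safe string before it is passed
// to the next node, falling back to String() for circular structures.
function toChainOutput(value: unknown): string {
  if (typeof value === "string") return value;
  try {
    // JSON.stringify returns undefined for undefined/functions; cover that.
    return JSON.stringify(value) ?? String(value);
  } catch {
    // Circular references etc. throw; degrade to a readable string.
    return String(value);
  }
}
```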

How to Debug It

  1. Isolate the exact step

    • Replace the full chain with one runnable or one tool.
    • If RunnableSequence hangs, test each step independently.
  2. Add logging before and after every async boundary

    • Log entry and exit for tools, retrievers, and callbacks.
    • If you see “entered tool” but not “exited tool,” that’s your stall point.
  3. Set timeouts on external calls

    • Wrap every HTTP request, DB query, and queue call with an abort/timeout.
    • If the error disappears after adding timeouts, you found the culprit.
  4. Check agent iteration behavior

    • Inspect whether the model keeps choosing the same tool.
    • Temporarily set maxIterations low and compare output:
      const executor = new AgentExecutor({ agent, tools, maxIterations: 2 });
      
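Step 2’s entry/exit logging can be wrapped into a reusable decorator for any async step, so a missing “exited” line pinpoints the stall. A sketch with illustrative names:

```ts
// Wrap an async step with entry/exit logging; if you see "entered" with
// no matching "exited", that step is where the chain is stalling.
function traced<I, O>(name: string, fn: (input: I) => Promise<O>) {
  return async (input: I): Promise<O> => {
    console.log(`entered ${name}`);
    try {
      return await fn(input);
    } finally {
      // Runs on both success and failure, so the exit line is reliable.
      console.log(`exited ${name}`);
    }
  };
}
```

Any function passed to `RunnableLambda.from` can be wrapped this way before being handed to the chain.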

Prevention

  • Always return explicit values from every async function used by LangChain.
  • Put timeouts on network calls and database queries.
  • Keep callback handlers side-effect free and fast.
  • Set sane limits on agents:
    • maxIterations
    • request timeouts
    • token limits

If you’re building production TypeScript agents, treat every chain step like a service boundary. Validate inputs, return deterministic outputs, and fail fast when something doesn’t complete. That’s how you avoid “chain execution stuck” before it ever hits logs.



By Cyprian Aarons, AI Consultant at Topiax.
