How to Fix 'callback not firing during development' in LangGraph (TypeScript)

By Cyprian Aarons. Updated 2026-04-21

When you see callback not firing during development in a LangGraph TypeScript app, it usually means your graph is running, but the callback path you expected is never being reached. In practice, this shows up during local dev when you’re using stream(), invoke(), or a custom checkpointer and the callback handler is attached in the wrong place.

Most of the time, the graph is fine. The bug is in how the callback is wired, how the runtime is started, or how your dev server reloads modules.

The Most Common Cause

The #1 cause is attaching callbacks to the wrong execution path. In LangGraph JS/TS, people often pass a callback handler to a node function or expect invoke() to trigger streaming callbacks automatically. It won’t.

Here’s the broken pattern:

import { StateGraph, START, END } from "@langchain/langgraph";
import { CallbackManager } from "@langchain/core/callbacks/manager";

type State = { input: string; output?: string };

const graph = new StateGraph<State>({
  // The constructor requires a channel spec; null gives last-value semantics
  channels: { input: null, output: null },
})
  .addNode("callModel", async (state) => {
    // Wrong: this does not wire graph-level callbacks
    return { output: state.input.toUpperCase() };
  })
  .addEdge(START, "callModel")
  .addEdge("callModel", END)
  .compile();

const callbackManager = CallbackManager.fromHandlers({
  handleLLMNewToken(token) {
    console.log("token:", token);
  },
});

await graph.invoke(
  { input: "hello" },
  {
    callbacks: [callbackManager], // Often ineffective for what people expect
  }
);

And here’s the fixed pattern:

import { StateGraph, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { CallbackManager } from "@langchain/core/callbacks/manager";

type State = { input: string; output?: string };

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  streaming: true, // required for handleLLMNewToken to fire per token
});

const graph = new StateGraph<State>({
  // The constructor requires a channel spec; null gives last-value semantics
  channels: { input: null, output: null },
})
  .addNode("callModel", async (state, config) => {
    const res = await model.invoke(
      state.input,
      config // pass run config through to the runnable
    );
    return { output: res.content.toString() };
  })
  .addEdge(START, "callModel")
  .addEdge("callModel", END)
  .compile();

const callbackManager = CallbackManager.fromHandlers({
  handleLLMNewToken(token) {
    console.log("token:", token);
  },
});

await graph.invoke(
  { input: "hello" },
  {
    callbacks: [callbackManager],
  }
);

The key difference is this:

  • The graph-level config only helps if your node forwards it into the underlying runnable.
  • If your node just does plain async work, there are no LangChain events to emit.
  • If you want token callbacks, you need a model call that supports streaming or callback events.
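To see why forwarding matters, here is a minimal plain-TypeScript model of the rule above. There are no LangChain imports; `RunConfig` and `fakeModelInvoke` are illustrative stand-ins for `RunnableConfig` and a runnable's `invoke`, not real APIs.

```typescript
// Illustrative stand-ins only, NOT LangGraph/LangChain APIs. They model how
// callbacks travel inside the config object.
type RunConfig = { callbacks: Array<(event: string) => void> };

// Stands in for a runnable: it emits events to whatever callbacks it receives.
function fakeModelInvoke(input: string, config?: RunConfig): string {
  config?.callbacks.forEach((cb) => cb("llm:start"));
  const result = input.toUpperCase();
  config?.callbacks.forEach((cb) => cb("llm:end"));
  return result;
}

const seen: string[] = [];
const runConfig: RunConfig = { callbacks: [(e) => seen.push(e)] };

// Broken node: plain work, config never forwarded -- no events are recorded.
const brokenNode = (state: { input: string }) => ({ output: state.input });
brokenNode({ input: "hi" }); // seen is still empty here

// Fixed node: forwards config into the inner call, so events are recorded.
const fixedNode = (state: { input: string }, config: RunConfig) => ({
  output: fakeModelInvoke(state.input, config),
});
fixedNode({ input: "hi" }, runConfig);

console.log(seen); // [ 'llm:start', 'llm:end' ]
```

The graph behaves the same way: callbacks you pass at the top level only reach a model call if the node hands `config` down to it.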

Other Possible Causes

1. You’re using invoke() instead of stream()

If your code expects incremental events, invoke() will only return once the run finishes.

// Broken
await graph.invoke(input, {
  callbacks: [callbackManager],
});

// Fixed
for await (const chunk of graph.stream(input, {
  callbacks: [callbackManager],
})) {
  console.log(chunk);
}

If you’re waiting for handleLLMNewToken, make sure the underlying model call is actually streaming too.
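The contrast can be sketched without any LangChain code. The generator below plays the role of a streaming run; `onToken` is an illustrative stand-in for `handleLLMNewToken`, not a real API.

```typescript
// Plain-TypeScript sketch of a buffered call vs. a streaming call.
function invokeOnce(tokens: string[]): string {
  // Like invoke(): work happens internally, the caller sees only the result.
  return tokens.join("");
}

function* streamTokens(tokens: string[], onToken: (t: string) => void) {
  for (const t of tokens) {
    onToken(t); // fires per chunk while the run is still in progress
    yield t;
  }
}

const fired: string[] = [];

invokeOnce(["he", "llo"]); // no per-token callback ever fires
for (const chunk of streamTokens(["he", "llo"], (t) => fired.push(t))) {
  void chunk; // consuming the stream is what drives the callbacks
}

console.log(fired); // [ 'he', 'llo' ]
```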

2. Your dev server is hot-reloading and dropping handler state

In Next.js, Vite, or tsx watch mode, module reloads can recreate the graph while keeping stale references elsewhere. That produces symptoms like:

  • callback registered once
  • graph recompiled on reload
  • old handler never receives events

Fix by creating handlers per request or per run:

export async function runGraph(input: unknown) {
  const callbackManager = CallbackManager.fromHandlers({
    handleChainStart() {
      console.log("run started");
    },
  });

  return graph.invoke(input, {
    callbacks: [callbackManager],
  });
}

Avoid singleton handlers tied to module scope during development.
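Here is a plain-TypeScript sketch of the stale-reference problem. No dev-server code is involved and none of the names are LangGraph APIs; the "reload" below simply rebuilds the graph's handler list, which is what a module re-evaluation effectively does.

```typescript
// Illustrative sketch: a "graph" only notifies handlers registered at build time.
type Handler = { events: string[] };
const makeHandler = (): Handler => ({ events: [] });

const buildGraph = (handlers: Handler[]) => ({
  invoke() {
    handlers.forEach((h) => h.events.push("run started"));
  },
});

// Anti-pattern: register a module-scope handler once, at load time.
const singleton = makeHandler();
let graph = buildGraph([singleton]);

// Simulated hot reload: the module re-evaluates and rebuilds the graph,
// but the new build never re-registers the old handler.
graph = buildGraph([]);
graph.invoke();
console.log(singleton.events.length); // 0 -- the stale handler saw nothing

// Fix: create the handler per run and attach it to whatever graph is current.
const perRun = makeHandler();
graph = buildGraph([perRun]);
graph.invoke();
console.log(perRun.events.length); // 1
```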

3. Your node swallows errors before LangGraph can emit them

If your node catches everything and returns fallback data, you may never see the event chain you expect.

// Broken
.addNode("step", async () => {
  try {
    throw new Error("boom");
  } catch {
    return { ok: false };
  }
})

// Fixed
.addNode("step", async () => {
  throw new Error("boom");
})

LangGraph will surface failures more predictably when errors are allowed to propagate.
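The effect can be shown with a small plain-TypeScript model. `runNode` stands in for a framework that reports errors escaping a node, and `onError` for a `handleChainError`-style hook; neither is a real LangGraph API.

```typescript
// Illustrative sketch: only errors that escape the node reach the hook.
function runNode<T>(node: () => T, onError: (e: Error) => void): T | undefined {
  try {
    return node();
  } catch (e) {
    onError(e as Error); // the "framework" only sees errors that propagate
    return undefined;
  }
}

const reported: string[] = [];

// Broken: the node swallows its own failure, so the hook never fires.
runNode(() => {
  try {
    throw new Error("boom");
  } catch {
    return { ok: false };
  }
}, (e) => reported.push(e.message));

// Fixed: the error propagates out of the node and the hook can report it.
runNode(() => {
  throw new Error("boom");
}, (e) => reported.push(e.message));

console.log(reported); // [ 'boom' ]
```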

4. You’re mixing RunnableConfig and custom params incorrectly

A common TypeScript mistake is passing config in a shape LangGraph doesn’t forward.

// Broken
await graph.invoke(input, {
  callbackManager,
});

// Fixed
await graph.invoke(input, {
  callbacks: [callbackManager],
});

Also check that your node signature accepts config when you need to forward runtime settings:

.addNode("step", async (state, config) => {
  return model.invoke(state.input, config);
})

How to Debug It

  1. Confirm whether the problem is in LangGraph or in your model call

    • Add logs at each node entry.
    • If node logs appear but callbacks don’t, your issue is wiring.
    • If node logs don’t appear, your edges or start node are wrong.
  2. Check whether you’re using streaming-capable execution

    • If you expect token-level events like handleLLMNewToken, use a streaming model and stream() on the graph.
    • invoke() will not behave like an event stream.
  3. Inspect what reaches your node

    • Log the second argument:
      .addNode("step", async (state, config) => {
        console.log(config);
        return {};
      })
      
    • If callbacks are missing there, they were not forwarded correctly.
  4. Reduce to a single-node graph

    • Remove checkpointers, middleware, tool nodes, and extra branches.
    • Reproduce with one model call and one callback handler.
    • Once it works there, add pieces back one by one.

Prevention

  • Pass config through every node that calls another runnable.
  • Use stream() when you need incremental events; use invoke() for final results only.
  • Keep callback handlers scoped per request in dev servers so hot reload doesn’t leave you with stale instances.
  • Don’t catch-and-hide errors inside nodes unless you also log them explicitly.

If you still see no callback activity after checking these points, the bug is usually not LangGraph itself. It’s almost always one of these three things:

  • no actual LLM event being emitted
  • config not forwarded into the runnable
  • runtime reload breaking handler lifetime

By Cyprian Aarons, AI Consultant at Topiax.
