# How to Fix 'callback not firing when scaling' in LangGraph (TypeScript)
If your LangGraph callback works locally but stops firing when you scale out workers, the problem is usually not LangGraph itself. It’s almost always a state, runtime, or wiring issue that only shows up once requests stop landing on the same process.
In TypeScript, this often appears as missing `handleChainStart`, `handleChainEnd`, or custom callback events when you move from a single Node process to multiple pods, serverless instances, or a queue-backed worker pool.
## The Most Common Cause
The #1 cause is passing callbacks in one place, then invoking the graph in another execution context where those callbacks are not attached.
This happens a lot when people build a graph once at startup, then call `.invoke()` from different workers without forwarding `RunnableConfig.callbacks` correctly.
### Broken vs. fixed
| Broken pattern | Fixed pattern |
|---|---|
| Callback attached to the wrong layer | Callback passed per invocation |
| Global singleton graph with hidden runtime state | Explicit config per request |
| Works in one process, fails when scaled | Works across pods/workers |
```typescript
import { StateGraph, START, END } from "@langchain/langgraph";
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";

type State = { input: string; output?: string };

// `null` gives each channel last-value semantics
const graph = new StateGraph<State>({
  channels: { input: null, output: null },
})
  .addNode("step", async (state) => {
    return { output: state.input.toUpperCase() };
  })
  .addEdge(START, "step")
  .addEdge("step", END)
  .compile();

// ❌ Broken: callbacks are not reliably attached for every execution path
const handler = new ConsoleCallbackHandler();

await graph.invoke(
  { input: "hello" },
  {
    // people often forget this entirely in worker code
    // callbacks: [handler]
  }
);
```
```typescript
import { StateGraph, START, END } from "@langchain/langgraph";
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";
import type { RunnableConfig } from "@langchain/core/runnables";

type State = { input: string; output?: string };

const graph = new StateGraph<State>({
  channels: { input: null, output: null },
})
  .addNode("step", async (state) => {
    return { output: state.input.toUpperCase() };
  })
  .addEdge(START, "step")
  .addEdge("step", END)
  .compile();

// ✅ Fixed: callbacks passed explicitly at the invocation site
const handler = new ConsoleCallbackHandler();
const config: RunnableConfig = {
  callbacks: [handler],
};

await graph.invoke({ input: "hello" }, config);
```
If you are using custom callback handlers, make sure they are passed at the exact call site that executes the graph. Don’t rely on module-level setup surviving worker boundaries.
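One way to make the "pass at the call site" rule hard to forget is a tiny merge helper that every worker uses when building its per-request config. This is a sketch with hypothetical names (`withHandlers`, `ConfigLike`); it is not a LangChain or LangGraph API:

```typescript
// Hypothetical helper: merge required handlers into whatever per-request
// config the caller already has, without clobbering existing callbacks.
type HandlerLike = Record<string, unknown>;
type ConfigLike = { callbacks?: HandlerLike[]; tags?: string[] };

function withHandlers(
  config: ConfigLike,
  ...handlers: HandlerLike[]
): ConfigLike {
  return {
    ...config,
    callbacks: [...(config.callbacks ?? []), ...handlers],
  };
}

// Every call site then builds its config through the helper:
const probe = { name: "probe-handler" };
const config = withHandlers({ tags: ["worker-1"] }, probe);
console.log(config.callbacks?.length); // 1
```

Because the helper appends rather than replaces, per-request handlers (for example, a tracer tied to one tenant) coexist with the worker's baseline handlers.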
## Other Possible Causes
### 1) You are using `stream()` but never consuming the stream
In LangGraph and LangChain runtimes, callback events often fire while the stream is being consumed. If your code creates the stream and exits early, you may think the callback never fired.
```typescript
// ❌ Broken
const stream = await graph.stream({ input: "hi" }, { callbacks: [handler] });
// nothing consumes the stream
```

```typescript
// ✅ Fixed
for await (const chunk of await graph.stream({ input: "hi" }, { callbacks: [handler] })) {
  console.log(chunk);
}
```
### 2) Your callback handler is not serializable across workers
If you pass a class instance through a queue like BullMQ or a serverless payload, it gets stripped down to plain JSON. That means your BaseCallbackHandler instance never reaches the actual runtime.
```typescript
// ❌ Broken: trying to enqueue a class instance
await queue.add("run-graph", {
  input,
  callbacks: [new MyCustomHandler()], // serialized to plain JSON, methods lost
});
```

```typescript
// ✅ Fixed: reconstruct handlers inside the worker (BullMQ-style processor)
const worker = new Worker("run-graph", async (job) => {
  const handler = new MyCustomHandler();
  return graph.invoke(job.data.input, {
    callbacks: [handler],
  });
});
```
### 3) You are mixing `invoke()` and nested runnable calls without propagating config
If a node calls another runnable internally and you don’t pass config, nested events can disappear. This is common with tool wrappers and subgraphs.
```typescript
// ❌ Broken
.addNode("agent", async (state) => {
  const result = await someChain.invoke({ q: state.input });
  return { output: result.text };
})
```

```typescript
// ✅ Fixed
.addNode("agent", async (state, config) => {
  const result = await someChain.invoke(
    { q: state.input },
    config // propagate callbacks/tags/runName/etc.
  );
  return { output: result.text };
})
```
### 4) You are on an old LangGraph/LangChain version with callback propagation bugs
Some older versions had rough edges around callback forwarding in composed graphs and nested runnables. If it works locally but breaks after deployment drift, check package versions first.
```json
{
  "dependencies": {
    "@langchain/langgraph": "^0.2.0",
    "@langchain/core": "^0.2.0"
  }
}
```
Make sure all related packages are compatible and pinned together. Mismatched minor versions can produce weird behavior that looks like “callback not firing.”
## How to Debug It
- **Confirm whether the graph is actually running in the same process.**
  - Log `process.pid`, pod name, or worker id before `.invoke()`.
  - If local tests work and production doesn’t, assume process boundary issues first.
- **Attach a known-good handler.**
  - Use `ConsoleCallbackHandler` or `StdOutCallbackHandler`.
  - If built-in handlers fire but your custom handler doesn’t, the bug is in your handler implementation.
- **Check whether you’re propagating config into nested calls.**
  - Look for internal `.invoke()`, `.stream()`, or subgraph calls inside nodes.
  - Pass `(state, config)` through and forward it explicitly.
- **Verify stream consumption.**
  - If you use `.stream()` or `.streamEvents()`, fully consume the iterator.
  - A half-open stream can look like missing callbacks even though execution started.
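The first debugging step above can be a one-liner you print before every `.invoke()`. This sketch uses only Node built-ins; `WORKER_ID` is an assumed environment variable, so substitute whatever your platform provides:

```typescript
import * as os from "node:os";

// Sketch: log enough runtime identity before each invoke to prove which
// process is actually executing the graph.
function runtimeIdentity(): string {
  return [
    `pid=${process.pid}`,
    `host=${os.hostname()}`,
    `worker=${process.env.WORKER_ID ?? "n/a"}`,
  ].join(" ");
}

console.log(runtimeIdentity());
```

If two consecutive requests print different pids or hostnames, you are crossing a process boundary, and any callback wiring done at startup in one process is invisible to the other.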
## Prevention
- Pass callbacks explicitly at invocation time: `graph.invoke(input, { callbacks })`.
- Forward `config` through every nested runnable and subgraph: `(state, config) => chain.invoke(input, config)`.
- Keep LangGraph and LangChain packages version-aligned across all services and workers.
- In production workers, reconstruct handlers inside the worker process instead of shipping them through queues.
If you want one rule to remember: callbacks belong to runtime execution context, not application startup. Once scaling enters the picture, anything implicit becomes unreliable fast.
## Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.