How to Fix 'callback not firing in production' in LangGraph (TypeScript)
If your LangGraph callback works locally but never fires in production, you’re usually dealing with one of three things: the graph is being invoked in a way that doesn’t propagate callbacks, the runtime is dropping async work before it flushes, or your production environment is swallowing the handler altogether. In TypeScript, this often shows up as “my handleLLMEnd / handleChainEnd never runs” even though the graph completes successfully.
The annoying part is that LangGraph itself may not throw a hard error. You’ll just see a completed run with missing traces, missing side effects, or an empty callback log.
The Most Common Cause
The #1 cause is failing to await the graph invocation all the way through, or creating a new graph/runtime per request without preserving the callback manager.
This happens a lot when people wrap graph.invoke() inside a serverless handler, fire-and-forget the promise, or return early from an HTTP route before callbacks finish flushing.
Broken vs fixed
| Broken pattern | Fixed pattern |
|---|---|
| returns before async work completes | awaits the full graph execution |
| callback manager not passed into runtime | callback manager passed explicitly |
| fire-and-forget invocation | deterministic request lifecycle |
```ts
// ❌ Broken: callback never reliably fires in production
import { CallbackManager } from "@langchain/core/callbacks/manager";

export async function POST(req: Request) {
  const cb = CallbackManager.fromHandlers({
    handleLLMEnd: async (output) => {
      console.log("LLM ended:", output);
    },
  });

  const graph = buildGraph(); // assumes this returns a compiled graph

  // Fire-and-forget: the route may return before callbacks flush
  graph.invoke(
    { question: "What is the claim status?" },
    { callbacks: cb }
  );

  return Response.json({ ok: true });
}
```
```ts
// ✅ Fixed: await the invocation and keep callbacks in scope
import { CallbackManager } from "@langchain/core/callbacks/manager";

export async function POST(req: Request) {
  const cb = CallbackManager.fromHandlers({
    handleLLMEnd: async (output) => {
      console.log("LLM ended:", output);
    },
    handleChainEnd: async (output) => {
      console.log("Chain ended:", output);
    },
  });

  const graph = buildGraph();

  const result = await graph.invoke(
    { question: "What is the claim status?" },
    { callbacks: cb }
  );

  return Response.json({ ok: true, result });
}
```
If you’re threading a `RunnableConfig` through your graph, pass the callbacks there too:

```ts
import type { RunnableConfig } from "@langchain/core/runnables";

const config: RunnableConfig = {
  callbacks: cb,
};

await graph.invoke(input, config);
```
In production, especially on Vercel/Cloud Run/Lambda-style handlers, returning early can terminate pending async operations before CallbackManager handlers run.
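If you genuinely need to respond before the run finishes, use your platform's background-work primitive instead of dropping the promise. Here is a minimal sketch for Vercel-style handlers, assuming `waitUntil` from `@vercel/functions` is available and reusing the assumed `buildGraph()` helper from the examples above:

```ts
import { waitUntil } from "@vercel/functions";
import { CallbackManager } from "@langchain/core/callbacks/manager";

export async function POST(req: Request) {
  const cb = CallbackManager.fromHandlers({
    handleChainEnd: async (output) => console.log("Chain ended:", output),
  });

  const graph = buildGraph(); // assumed helper, as in the examples above

  // Register the pending run with the platform so the runtime stays alive
  // until the promise (and its callbacks) settle, even after we respond.
  waitUntil(
    graph.invoke({ question: "What is the claim status?" }, { callbacks: cb })
  );

  return Response.json({ accepted: true });
}
```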
Other Possible Causes
1) You attached callbacks to the wrong layer
LangGraph nodes often wrap LangChain runnables. If you attach callbacks to the outer graph but your actual model call lives inside a node that creates its own runnable/config, your handler may never see it.
```ts
// ❌ Wrong: inner runnable never sees the outer callbacks
const node = async (state: { prompt: string }) => {
  return await model.invoke(state.prompt); // no config passed through
};
```

```ts
// ✅ Right: accept the node's config and pass it down to the inner runnable
import type { RunnableConfig } from "@langchain/core/runnables";

const node = async (state: { prompt: string }, config?: RunnableConfig) => {
  return await model.invoke(state.prompt, config);
};
```
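For context, here is a minimal sketch of where that `config` parameter comes from, assuming `model` is any chat model instance: LangGraph passes the current run's `RunnableConfig` as the second argument to every node, and forwarding it is what lets callbacks attached at `invoke()` reach the nested model call.

```ts
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";
import type { RunnableConfig } from "@langchain/core/runnables";

const State = Annotation.Root({
  prompt: Annotation<string>(),
  answer: Annotation<string>(),
});

const graph = new StateGraph(State)
  .addNode("answer", async (state: typeof State.State, config?: RunnableConfig) => {
    // Forward config so callbacks from the outer invoke() reach this model call
    const response = await model.invoke(state.prompt, config);
    return { answer: String(response.content) };
  })
  .addEdge(START, "answer")
  .addEdge("answer", END)
  .compile();
```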
2) Your production build strips logs or swallows errors
Sometimes the callback fires, but your logging backend doesn’t show it. This happens when console.log is buffered, log levels are too strict, or exceptions inside handlers are swallowed.
```ts
const cb = CallbackManager.fromHandlers({
  handleLLMEnd: async () => {
    throw new Error("testing callback path");
  },
});
```
If that error disappears silently in prod, your handler chain is being swallowed somewhere upstream. Wrap handlers defensively:
```ts
const cb = CallbackManager.fromHandlers({
  handleLLMEnd: async (output) => {
    try {
      await auditLogger.write(output); // auditLogger: your own logging client
    } catch (err) {
      console.error("callback failed", err);
    }
  },
});
```
3) You’re using streaming but only handling final completion
With LangGraph streaming APIs like stream() or streamEvents(), some events won’t map to handleChainEnd the way you expect. If you’re looking for token-level activity and only implemented end-of-run hooks, you’ll think nothing fired.
```ts
for await (const event of graph.streamEvents(input, { version: "v2" })) {
  if (event.event === "on_chain_end") {
    console.log("chain end", event);
  }
}
```
If you want token-level events, watch for these event names (sketch below):

- `on_chat_model_stream`
- `on_llm_stream`
- `on_chain_start`
- `on_chain_end`
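For example, a minimal sketch that prints tokens as they arrive, reusing `graph` and `input` from the snippet above and assuming a chat model runs somewhere inside the graph:

```ts
for await (const event of graph.streamEvents(input, { version: "v2" })) {
  if (event.event === "on_chat_model_stream") {
    const chunk = event.data.chunk; // an AIMessageChunk for chat models
    if (typeof chunk?.content === "string") {
      process.stdout.write(chunk.content);
    }
  }
}
```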
4) Node/runtime mismatch in production
A lot of “works locally” issues are just ESM/CJS or edge-runtime mismatches. LangGraph and LangChain packages expect a real Node runtime for some integrations.
Check your deployment target and pin a full Node.js runtime. The exact key depends on your platform; a Lambda-style runtime identifier, for example, looks like:

```json
{
  "runtime": "nodejs20.x"
}
```
Avoid edge runtimes if your callback stack depends on Node APIs like file I/O, process hooks, or certain tracing backends.
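If you deploy on Next.js App Router, one common fix is pinning the route segment to the Node.js runtime. This is standard Next.js segment config, not LangGraph-specific:

```ts
// In the route file (e.g. app/api/agent/route.ts):
export const runtime = "nodejs"; // opt this route out of the Edge runtime
```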
How to Debug It
- Prove the callback exists (see the sketches after this list).
  - Add a synchronous log at construction time and inside each handler.
  - If construction logs appear but handler logs do not, the issue is invocation flow.
- Check whether you’re awaiting execution.
  - Search for `graph.invoke(` without `await`.
  - Search for route handlers returning before promises settle.
- Pass a minimal callback manager.
  - Use only one handler: `CallbackManager.fromHandlers({ handleChainEnd: async () => console.log("CHAIN END") })`.
  - Remove tracing providers and custom wrappers until it works.
- Verify config propagation through nodes.
  - Ensure every custom node accepts `config?: RunnableConfig`.
  - Pass that config into nested `.invoke()` calls.
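A sketch of the "prove the callback exists" step; the log strings are arbitrary markers:

```ts
// Synchronous log: proves this code path runs at all
console.log("constructing callback manager");

const cb = CallbackManager.fromHandlers({
  handleChainEnd: async () => {
    console.log("CHAIN END"); // proves the handler itself fires
  },
});
```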
Prevention
- Always treat LangGraph execution as fully async.
  - No fire-and-forget invocations in HTTP handlers.
- Thread `RunnableConfig` through custom nodes.
  - If a node invokes another runnable, pass config down explicitly.
- Test production-like runtime behavior locally.
  - Same Node version, same deployment adapter, same streaming mode.
If you’re still seeing silent failures after fixing invocation and config propagation, instrument both `handleChainStart` and `handleChainEnd`. In practice, one of them will tell you exactly where the callback chain breaks.
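For example, a minimal sketch that pairs starts with their missing ends by run ID (log labels are arbitrary):

```ts
const cb = CallbackManager.fromHandlers({
  handleChainStart: async (chain, _inputs, runId) => {
    // The last segment of the serialized id names the runnable that started
    console.log("chain start", runId, chain.id?.at(-1));
  },
  handleChainEnd: async (_outputs, runId) => {
    console.log("chain end", runId);
  },
});
```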
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.