How to Fix 'callback not firing in production' in LangChain (TypeScript)
What this error usually means
If your LangChain callback works locally but “stops firing in production,” the problem is usually not LangChain itself. More often, the callback handler was never attached to the runnable/chain, was garbage-collected, or was registered in a way that only worked in your dev path.
In TypeScript, this shows up most often with RunnableSequence, ChatOpenAI, AgentExecutor, or stream()/invoke() calls where callbacks are passed in the wrong place.
The Most Common Cause
The #1 cause is attaching callbacks at construction time and then assuming they’ll fire for every invocation path. In LangChain JS/TS, callbacks should usually be passed per-call via config.callbacks, especially when you’re using invoke, stream, or nested runnables.
Here’s the broken pattern and the fix:

| Broken | Fixed |
|---|---|
| Callbacks attached only in the constructor, then ignored in the production path | Callbacks passed explicitly in `invoke(..., { callbacks })` |
```ts
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class LogHandler extends BaseCallbackHandler {
  name = "log-handler";
  async handleLLMEnd() {
    console.log("LLM finished");
  }
}

// Broken: looks fine, but production code may bypass this handler
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  callbacks: [new LogHandler()],
});

const chain = RunnableSequence.from([
  llm,
  new StringOutputParser(),
]);

// In some production paths, people call invoke without config callbacks
await chain.invoke("Hello");
```
And the fixed version, which attaches the handler per call:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class LogHandler extends BaseCallbackHandler {
  name = "log-handler";
  async handleLLMEnd() {
    console.log("LLM finished");
  }
}

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const chain = RunnableSequence.from([
  llm,
  new StringOutputParser(),
]);

// Right: attach callbacks at call time
await chain.invoke(
  "Hello",
  {
    callbacks: [new LogHandler()],
  }
);
```
If you’re using agents, the same rule applies. A very common runtime symptom is that the chain executes and you see output, but your handler methods like handleLLMStart, handleChainEnd, or handleToolEnd never run.
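For agents, a minimal sketch looks like the following; the claim-lookup tool, the prompt, and the `createToolCallingAgent` / `AgentExecutor` wiring are illustrative placeholders, and the exact constructors may differ slightly between LangChain versions:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { DynamicTool } from "@langchain/core/tools";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class AgentLogHandler extends BaseCallbackHandler {
  name = "agent-log-handler";
  async handleToolEnd() {
    console.log("tool finished");
  }
  async handleChainEnd() {
    console.log("chain finished");
  }
}

// Placeholder tool so the sketch is self-contained.
const tools = [
  new DynamicTool({
    name: "lookup_claim",
    description: "Looks up a claim by id",
    func: async (id: string) => `claim ${id}: open`,
  }),
];

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createToolCallingAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools,
  prompt,
});
const executor = new AgentExecutor({ agent, tools });

// Same rule as plain chains: pass the handler in the per-call config so it
// reaches the nested LLM, tool, and chain runs inside the executor.
await executor.invoke(
  { input: "Look up claim 42" },
  { callbacks: [new AgentLogHandler()] }
);
```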
Other Possible Causes
1) You used the wrong callback method names
LangChain won’t call arbitrary methods. If you implement onStart instead of handleLLMStart, nothing happens.
```ts
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

// Broken
class BadHandler extends BaseCallbackHandler {
  name = "bad-handler";
  async onStart() {
    console.log("never called");
  }
}

// Fixed
class GoodHandler extends BaseCallbackHandler {
  name = "good-handler";
  async handleLLMStart() {
    console.log("LLM started");
  }
  async handleLLMEnd() {
    console.log("LLM ended");
  }
}
```
2) Your handler is being dropped by serialization or process boundaries
This happens in serverless and worker setups. If you create the handler in one module and pass only plain JSON across a queue or RPC boundary, the class instance is gone.
```ts
// Broken: passing plain config across boundaries
await queue.send({
  prompt: "Check claim",
  callbacks: [{ name: "log-handler" }],
});

// Fixed: instantiate inside the runtime that executes LangChain
const handler = new LogHandler();
await chain.invoke("Check claim", { callbacks: [handler] });
```
3) You’re calling a method that doesn’t propagate callbacks
Some wrappers call lower-level methods without forwarding config. This is common when teams build helper functions around invoke.
```ts
// Broken
async function runPrompt(prompt: string) {
  return await chain.invoke(prompt); // no callbacks forwarded
}

// Fixed
async function runPrompt(prompt: string, callbacks?: any[]) {
  return await chain.invoke(prompt, { callbacks });
}
```
4) Streaming code exits before handlers flush
If your production code starts a stream and immediately returns or closes the request, you can miss final events like handleLLMEnd.
```ts
// Broken
const stream = await chain.stream("Hello");
return new Response(stream); // if lifecycle is wrong, events may not flush

// Fixed: keep the stream alive until completion so final events can flush
for await (const chunk of await chain.stream("Hello", { callbacks: [new LogHandler()] })) {
  process.stdout.write(chunk);
}
```
How to Debug It
- Confirm which event is missing
  - Add logs to every handler method: `handleLLMStart`, `handleLLMEnd`, `handleChainStart`, `handleChainEnd`, `handleToolStart`, `handleToolEnd`.
- Verify callbacks are passed at invocation
  - Check the exact call site: `await chain.invoke(input, { callbacks: [handler] });`
  - If you only set them on constructors, test with per-call config.
- Test with a minimal direct model call
  - Bypass agents and tools.
  - Call `ChatOpenAI.invoke()` directly with one handler (see the sketch after this list).
  - If that works, the bug is in your wrapper or agent flow.
- Inspect production runtime differences
  - Serverless cold starts.
  - Multiple Node processes.
  - ESM/CJS mismatch.
  - Tree-shaken imports causing a different class instance than local dev.
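A minimal sketch of that direct model call, with a single handler and nothing else in between:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class ProbeHandler extends BaseCallbackHandler {
  name = "probe-handler";
  async handleLLMStart() {
    console.log("LLM started");
  }
  async handleLLMEnd() {
    console.log("LLM ended");
  }
}

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

// If both logs appear here but not in your real code path, the bug is in
// your wrapper or agent wiring, not in LangChain's callback system.
await llm.invoke("ping", { callbacks: [new ProbeHandler()] });
```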
A useful sanity check is to log the callback array right before invocation:
console.log("callbacks:", callbacks?.map((c) => c.name));
await chain.invoke(input, { callbacks });
If that prints an empty array in prod, you found the issue.
Prevention
- Pass handlers at invocation time unless you have a strong reason not to.
- Keep callback wiring close to execution code; avoid hiding it inside wrappers that drop config.
- Add one integration test that asserts a real event fires: for example, verify that `handleLLMEnd` increments a counter during an actual `invoke()` call (see the sketch after this list).
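A minimal sketch of such a test, using `node:assert` and the `chain` from the earlier examples; adapt it to your own test runner:

```ts
import assert from "node:assert/strict";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class CountingHandler extends BaseCallbackHandler {
  name = "counting-handler";
  llmEndCount = 0;
  async handleLLMEnd() {
    this.llmEndCount += 1;
  }
}

const handler = new CountingHandler();
await chain.invoke("integration test prompt", { callbacks: [handler] });

// If this fails against a production-like build, callback wiring regressed.
assert.equal(handler.llmEndCount, 1);
```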
If you’re building agents for production systems like claims triage or KYC review, treat callback plumbing as part of execution correctness, not logging sugar. When it breaks, observability breaks first, then debugging gets expensive fast.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.