# How to Fix 'callback not firing' in LangChain (TypeScript)
If you’re seeing “callback not firing” in a LangChain TypeScript app, it usually means your callback handler is registered, but the chain/LLM/tool path you expect is never actually invoking it. In practice, this shows up when you use `BaseCallbackHandler`, `CallbackManager`, or the `callbacks` option on a chain, and nothing lands in `handleLLMStart`, `handleChainStart`, or `handleToolStart`.
Most of the time, the bug is not in LangChain itself. It’s usually a mismatch between where you attach the callback and which object actually emits the event.
## The Most Common Cause
The #1 cause is attaching callbacks to the wrong level of the LangChain object graph.
In LangChain JS/TS, callbacks do not always propagate the way people expect. If you attach a handler to a parent chain but the actual LLM call happens through a nested runnable, tool, or manually instantiated model, your handler may never fire.
### Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Callback attached to wrapper object that never emits events | Callback attached to the actual runnable/LLM/tool that emits events |
```typescript
// BROKEN
import { ChatOpenAI } from "@langchain/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class DebugHandler extends BaseCallbackHandler {
  name = "debug-handler";

  handleLLMStart() {
    console.log("LLM started");
  }
}

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const prompt = PromptTemplate.fromTemplate("Write a haiku about {topic}");

const chain = new LLMChain({
  llm: model,
  prompt,
  callbacks: [new DebugHandler()], // often not enough here
});

await chain.invoke({ topic: "callbacks" });
```
```typescript
// FIXED
import { ChatOpenAI } from "@langchain/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class DebugHandler extends BaseCallbackHandler {
  name = "debug-handler";

  handleLLMStart(_llm, _prompts) {
    console.log("LLM started");
  }
}

const handler = new DebugHandler();

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  callbacks: [handler], // attach where the LLM actually runs
});

const prompt = PromptTemplate.fromTemplate("Write a haiku about {topic}");

const chain = new LLMChain({
  llm: model,
  prompt,
});

await chain.invoke({ topic: "callbacks" });
```
If you’re using modern LangChain LCEL (`RunnableSequence`, `ChatPromptTemplate`, `.pipe()`), the same rule applies: attach callbacks as close as possible to the runnable that actually executes.
## Other Possible Causes
### 1) You used `.call()` / `.predict()` in older code paths with inconsistent callback wiring
Some older examples still use APIs that don’t behave like current invoke() flows. If your callback works in one place but not another, check whether you’re mixing legacy and modern APIs.
```typescript
// OLD / inconsistent
await chain.call({ topic: "logs" });

// PREFERRED
await chain.invoke({ topic: "logs" });
```
### 2) Your handler method signature is wrong
LangChain won’t throw a helpful compile-time error if your handler methods are shaped incorrectly, particularly in loosely typed code. A common issue is overriding `handleLLMStart` with the wrong parameters; note also that hooks can be declared `async`, and LangChain will await the returned promise.
```typescript
// BROKEN
class MyHandler extends BaseCallbackHandler {
  name = "my-handler";

  handleLLMStart(text: string) {
    console.log(text);
  }
}

// FIXED
class MyHandler extends BaseCallbackHandler {
  name = "my-handler";

  async handleLLMStart(llm, prompts) {
    console.log("Prompts:", prompts);
  }

  async handleLLMEnd(output) {
    console.log("Output:", output);
  }
}
```
### 3) The callback is filtered out by tags or run names
If you’re using tags, metadata, or custom filtering in your observability layer, the callback may be firing but not visible where you expect.
```typescript
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  callbacks: [handler],
  tags: ["billing-agent"],
});

// If your logger only watches "support-agent", you'll think nothing fired.
```
Also check any custom logic inside your handler:
```typescript
// Note: in LangChain JS, `tags` is the sixth parameter, after `extraParams`.
async handleLLMStart(llm, prompts, runId, parentRunId, extraParams, tags) {
  if (!tags?.includes("prod")) return; // silently suppresses logs
}
```
### 4) You’re swallowing errors before callback hooks complete
If an exception happens before execution reaches the model/tool call, your callback won’t fire. This is common when prompt formatting fails or input keys are missing.
```typescript
// BROKEN INPUT SHAPE
await chain.invoke({
  wrongKey: "topic",
});
```
That can produce errors like:
- `Error: Missing value for input variable 'topic'`
- `TypeError: Cannot read properties of undefined`
- `RunnableSequence failed to invoke`
Fix the input contract first.
## How to Debug It
1. **Verify which node should emit the event.** Is it an LLM, a ChatModel, a tool, or a wrapper chain? Put the callback on that exact object first.
2. **Add logging to every hook.** Implement at least `handleChainStart`, `handleLLMStart`, `handleToolStart`, and `handleLLMError`. If only some fire, you know where execution stops.
3. **Remove all filtering.** Temporarily strip out tags, metadata conditions, environment-based guards, and custom early returns in handlers.
4. **Test with a minimal runnable.** Reduce to one prompt + one model + one handler. If this works, your production issue is likely propagation through nested chains or tools.
Example minimal test:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class DebugHandler extends BaseCallbackHandler {
  name = "debug-handler";

  async handleLLMStart(_llm, prompts) {
    console.log("START", prompts);
  }

  async handleLLMEnd(output) {
    console.log("END", output);
  }

  // Note: the error hook is handleLLMError, not handleError.
  async handleLLMError(err) {
    console.error("ERROR", err);
  }
}

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  callbacks: [new DebugHandler()],
});

await llm.invoke("Say hello");
```
## Prevention
- Attach callbacks at the lowest practical level (the model, the tool, or the runnable node), not just at the top-level chain.
- Standardize on modern APIs: use `invoke()`, and avoid mixing in legacy `.call()` patterns unless you know exactly how callbacks flow there.
- Keep one reusable debug handler in every service:

```typescript
export class TraceCallbacks extends BaseCallbackHandler {
  name = "trace-callbacks";
}
```
When “callback not firing” happens in LangChain TypeScript, treat it like an execution-path bug first and a callback bug second. In most cases, once you move the handler onto the actual emitting object and fix method signatures, the problem disappears fast.
## Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.