How to Fix 'callback not firing during development' in LlamaIndex (TypeScript)

By Cyprian Aarons · Updated 2026-04-21

When LlamaIndex TypeScript says your callback is not firing during development, it usually means your handler is registered in the wrong place, or the code path never reaches the event emitter at all. In practice, this shows up when you’re wiring CallbackManager, LlamaIndex query engines, or custom tools and expecting traces in dev, but nothing appears.

The annoying part is that it often works in one environment and fails in another. The usual pattern is: production bundle behaves, local dev with hot reload or a different runtime does not.

The Most Common Cause

The #1 cause is creating the callback handler, but never passing it into the active Settings, ServiceContext, or constructor that actually gets used by the query/index instance.

A lot of people do this:

  • Broken pattern: create a CallbackManager, then instantiate the index without attaching it
  • Fixed pattern: pass the callback manager into the same config object used by the index/query engine
// Broken
import { VectorStoreIndex, CallbackManager } from "llamaindex";

const callbackManager = new CallbackManager({
  onEventStart: (event) => console.log("start", event),
  onEventEnd: (event) => console.log("end", event),
});

const index = await VectorStoreIndex.fromDocuments(docs);
// No callback manager attached here
const queryEngine = index.asQueryEngine();
await queryEngine.query("What is in these docs?");
// Fixed
import { VectorStoreIndex, CallbackManager, Settings } from "llamaindex";

const callbackManager = new CallbackManager({
  onEventStart: (event) => console.log("start", event),
  onEventEnd: (event) => console.log("end", event),
});

Settings.callbackManager = callbackManager;

const index = await VectorStoreIndex.fromDocuments(docs);
const queryEngine = index.asQueryEngine();
await queryEngine.query("What is in these docs?");

If you’re using an older API shape with ServiceContext, the same rule applies: attach callbacks to the context that is actually passed into the index.

// Broken
const serviceContext = ServiceContext.fromDefaults();
// callback created elsewhere, never attached

// Fixed
const serviceContext = ServiceContext.fromDefaults({
  callbackManager,
});

This matters because LlamaIndex emits events from internal components like RetrieverQueryEngine, VectorStoreIndex, and LLM. If your handler isn’t wired into that exact execution path, you’ll see no output even though your code “looks right”.

Other Possible Causes

1. You’re running a different runtime than you think

In dev, Next.js, Bun, tsx, Vitest, and Node ESM can behave differently around module initialization. If your callback registration happens in one module instance and the query runs in another, you get silent misses.

// config.ts
export const callbackManager = new CallbackManager({ ... });

// route.ts
import { callbackManager } from "./config";
// In dev HMR can create duplicate module instances

Fix by initializing once in a shared singleton and importing from a stable server-only module.
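One way to make that singleton concrete is a small accessor that caches the manager on globalThis, so even if HMR creates duplicate copies of the module, every copy resolves to the same instance. This is a sketch with hypothetical names: DevCallbackManager stands in for the real CallbackManager, and the handler shapes follow the examples earlier in this article.

```typescript
// Sketch of an HMR-safe singleton. DevCallbackManager is a stand-in for the
// real CallbackManager; the globalThis caching pattern is what matters.
type EventHandler = (event: { eventType: string }) => void;

class DevCallbackManager {
  constructor(
    public onEventStart: EventHandler,
    public onEventEnd: EventHandler,
  ) {}
}

// Cache on globalThis so hot-reloaded module copies reuse one instance
const store = globalThis as unknown as {
  __callbackManager?: DevCallbackManager;
};

function getCallbackManager(): DevCallbackManager {
  if (!store.__callbackManager) {
    store.__callbackManager = new DevCallbackManager(
      (e) => console.log("start", e.eventType),
      (e) => console.log("end", e.eventType),
    );
  }
  return store.__callbackManager;
}
```

Export getCallbackManager from a server-only bootstrap file and call it everywhere you need the manager; never construct a second instance inline.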

2. The event type you’re watching isn’t emitted by that path

Some handlers only fire for LLM calls, not retrieval. Others only fire for retrieval spans. If you expect a tool call event but only made a plain query, nothing will show.

const callbackManager = new CallbackManager({
  onEventStart: (event) => {
    if (event.eventType === "llm") console.log("LLM start");
    // but your code only triggers retrieval events
  },
});

Check whether you’re using:

  • VectorStoreIndex.asQueryEngine()
  • RetrieverQueryEngine
  • OpenAI / another LLM wrapper
  • custom tools or agents

Match your listener to the actual span type.
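A quick way to see the mismatch is to count every event type you actually receive before filtering. The event-type strings below ("llm", "retrieve") follow this article's earlier examples; the exact names depend on your installed LlamaIndex version, so check against what your handler really logs.

```typescript
// Count every incoming event type instead of filtering for one up front.
const counts: Record<string, number> = {};

const onEventStart = (event: { eventType: string }) => {
  counts[event.eventType] = (counts[event.eventType] ?? 0) + 1;
  console.log("start:", event.eventType);
};

// Simulated events from a retrieval-only query path
onEventStart({ eventType: "retrieve" });
onEventStart({ eventType: "retrieve" });

// A handler filtering only for "llm" would have logged nothing here
console.log("llm events:", counts["llm"] ?? 0);      // 0
console.log("retrieve events:", counts["retrieve"]); // 2
```

Once the counts show which span types your code path actually emits, narrow your real handler to those.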

3. Your dev build tree-shook or stripped side effects

If you register callbacks through a module that has no direct runtime references, bundlers can optimize it away in dev builds.

// bad if never imported for side effects
setupCallbacks();

Make sure the module is imported from code that definitely executes on every request path.
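A safer pattern than a side-effect-only import is to call registration explicitly from the request path and make it idempotent. The names setupCallbacks and handleRequest below are hypothetical, and the boolean flag stands in for the real Settings.callbackManager assignment.

```typescript
// Register from code that provably runs, instead of relying on a
// side-effect-only import that a bundler may drop.
let registered = false;

function setupCallbacks(): void {
  if (registered) return; // idempotent: cheap to call on every request
  registered = true;      // stand-in for: Settings.callbackManager = ...
  console.log("callbacks registered");
}

function handleRequest(query: string): string {
  setupCallbacks(); // direct call: cannot be tree-shaken away
  return `handled: ${query}`;
}
```

Because the setup function is referenced directly, no bundler can treat it as dead code, and the guard makes repeat calls free.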

4. You are mixing server and client code

LlamaIndex TypeScript should be running on the server. If your callback setup lives in a React client component, it won’t behave like Node server code.

"use client";

// Wrong place for LlamaIndex server execution
import { VectorStoreIndex } from "llamaindex";

Move indexing/querying and callback registration into:

  • API routes
  • server actions
  • backend services
  • Node workers

How to Debug It

  1. Confirm the handler is actually attached

    • Log immediately after setup.
    • Verify Settings.callbackManager or your context object is set before index creation.
  2. Check which component emits events

    • Add broad logging for all events first.
    • Then narrow down to specific classes like RetrieverQueryEngine, LLM, or VectorStoreIndex.
  3. Remove framework variables

    • Run a plain Node script with one file.
    • If callbacks work there but fail in Next.js/Vite/Bun, you’ve got a runtime/module issue.
  4. Verify the code path reaches an instrumented call

    • Put logs before and after:
      • document loading
      • index creation
      • retriever/query engine construction
      • .query() / .chat() invocation

A minimal diagnostic script should look like this:

import {
  Document,
  VectorStoreIndex,
  CallbackManager,
  Settings,
} from "llamaindex";

Settings.callbackManager = new CallbackManager({
  onEventStart: (event) => console.log("START", event.eventType),
  onEventEnd: (event) => console.log("END", event.eventType),
});

// Minimal in-memory document so the script runs standalone
const docs = [new Document({ text: "test document" })];

console.log("before index");
const index = await VectorStoreIndex.fromDocuments(docs);
console.log("after index");

const qe = index.asQueryEngine();
console.log("before query");
await qe.query("test");
console.log("after query");

If you see "before query" but no callback logs, the issue is wiring or event type. If you don’t even see "after index", your failure is earlier than callbacks.

Prevention

  • Set up callbacks once at app startup, before any LlamaIndex objects are created.
  • Keep LlamaIndex execution on the server side only.
  • Use one small smoke test script in CI that verifies onEventStart and onEventEnd fire for a known query.
  • Avoid scattering callback registration across multiple modules; use one bootstrap file.
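The CI smoke test from the list above can be sketched like this. The manager object is a plain stand-in for your wired-up CallbackManager, and simulatedQuery is a hypothetical placeholder for a real queryEngine.query() call against a known fixture.

```typescript
// CI smoke check sketch: record events fired during a query and fail the
// build if none arrive.
const fired: string[] = [];

const manager = {
  onEventStart: (e: { eventType: string }) => fired.push(`start:${e.eventType}`),
  onEventEnd: (e: { eventType: string }) => fired.push(`end:${e.eventType}`),
};

// Placeholder for the real instrumented call path
function simulatedQuery(): void {
  manager.onEventStart({ eventType: "retrieve" });
  manager.onEventEnd({ eventType: "retrieve" });
}

simulatedQuery();

if (fired.length === 0) {
  throw new Error("Smoke test failed: no callback events fired");
}
console.log("callbacks fired:", fired);
```

In a real pipeline, swap simulatedQuery for a query against a small fixture index; the assertion stays the same.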

If you want a stable pattern for production apps:

  • initialize Settings.callbackManager
  • create indexes after initialization
  • keep dev hot-reload boundaries away from core instrumentation

That removes most “callback not firing during development” bugs before they start.


By Cyprian Aarons, AI Consultant at Topiax.