LlamaIndex Tutorial (TypeScript): building conditional routing for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build a conditional router in LlamaIndex TypeScript that sends each user query to the right workflow branch based on simple rules. You need this when one agent or index is not enough and you want deterministic routing for things like billing, claims, policy lookup, or general support.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or equivalent
  • llamaindex installed
  • An OpenAI API key set in your environment
  • Basic familiarity with LlamaIndex Document, VectorStoreIndex, and QueryEngine
  • A few sample documents to route across

Install the package:

npm install llamaindex

Set your API key:

export OPENAI_API_KEY="your_key_here"

Step-by-Step

  1. Start by creating two small knowledge bases, one for billing and one for claims. In production, these would usually be separate indexes, tools, or services.
import {
  Document,
  OpenAI,
  Settings,
  VectorStoreIndex,
} from "llamaindex";

// Use a small, inexpensive model for routing and retrieval.
Settings.llm = new OpenAI({ model: "gpt-4o-mini" });

const billingDocs = [
  new Document({
    text: "Billing support covers invoices, refunds, payment failures, and subscription changes.",
    metadata: { domain: "billing" },
  }),
];

const claimsDocs = [
  new Document({
    text: "Claims support covers filing claims, claim status, required documents, and payout timelines.",
    metadata: { domain: "claims" },
  }),
];

const billingIndex = await VectorStoreIndex.fromDocuments(billingDocs);
const claimsIndex = await VectorStoreIndex.fromDocuments(claimsDocs);
  2. Build query engines from each index, then define a rule-based routeQuery() function. The engines are the targets your router will call after deciding where the question belongs.
const billingEngine = billingIndex.asQueryEngine();
const claimsEngine = claimsIndex.asQueryEngine();

type RouteName = "billing" | "claims" | "fallback";

function routeQuery(query: string): RouteName {
  const q = query.toLowerCase();

  if (
    q.includes("invoice") ||
    q.includes("refund") ||
    q.includes("payment") ||
    q.includes("subscription")
  ) {
    return "billing";
  }

  if (
    q.includes("claim") ||
    q.includes("payout") ||
    // Note: substring matching is loose; "file" also matches words like "profile".
    q.includes("file") ||
    q.includes("documents")
  ) {
    return "claims";
  }

  return "fallback";
}
  3. Add a fallback path for questions that do not match either branch. This prevents silent failures and gives you a place to handle generic support or escalation.
async function handleFallback(query: string) {
  return {
    source: "fallback",
    answer: `I couldn't route this query confidently: "${query}". Please contact support or refine the request.`,
  };
}

async function routeAndAnswer(query: string) {
  const route = routeQuery(query);

  if (route === "billing") {
    const response = await billingEngine.query({ query });
    return { source: route, answer: response.toString() };
  }

  if (route === "claims") {
    const response = await claimsEngine.query({ query });
    return { source: route, answer: response.toString() };
  }

  return handleFallback(query);
}
  4. Wire everything together with a few test queries. This gives you an end-to-end conditional router you can run locally before plugging it into an API or agent workflow.
async function main() {
  const queries = [
    "How do I get a refund on my subscription?",
    "What documents do I need to file a claim?",
    "Can you help me reset my password?",
  ];

  for (const query of queries) {
    const result = await routeAndAnswer(query);
    console.log("\nQuery:", query);
    console.log("Route:", result.source);
    console.log("Answer:", result.answer);
  }
}

main().catch(console.error);
  5. If you want better routing quality, replace keyword matching with an LLM-based classifier later. Keep the same interface so your downstream code does not change when routing logic gets smarter.
type RouterDecision = {
  route: RouteName;
};

async function decideRouteWithRules(query: string): Promise<RouterDecision> {
  const route = routeQuery(query);
  return { route };
}

async function routeAndAnswerStable(query: string) {
  const decision = await decideRouteWithRules(query);

  switch (decision.route) {
    case "billing": {
      const response = await billingEngine.query({ query });
      return { source: decision.route, answer: response.toString() };
    }
    case "claims": {
      const response = await claimsEngine.query({ query });
      return { source: decision.route, answer: response.toString() };
    }
    default:
      return handleFallback(query);
  }
}
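When you do swap in an LLM classifier, the decision shape can stay identical. The sketch below is illustrative, not an official LlamaIndex API: it assumes any client exposing a `complete({ prompt })` method that returns text, and `parseRoute` is a hypothetical helper introduced here so the parsing logic can be unit-tested without an API key.

```typescript
type RouteName = "billing" | "claims" | "fallback";
type RouterDecision = { route: RouteName };

// Minimal shape of an LLM client: anything with a complete() method that
// returns text, so the router is not tied to one SDK.
interface CompletionLLM {
  complete(params: { prompt: string }): Promise<{ text: string }>;
}

// Turn the model's raw reply into a valid route, defaulting to fallback.
// Kept as a pure function so it can be tested without network calls.
function parseRoute(raw: string): RouterDecision {
  const label = raw.trim().toLowerCase();
  if (label.includes("billing")) return { route: "billing" };
  if (label.includes("claim")) return { route: "claims" };
  return { route: "fallback" };
}

// Drop-in replacement for decideRouteWithRules: same output type,
// so routeAndAnswerStable does not need to change.
async function decideRouteWithLLM(
  llm: CompletionLLM,
  query: string
): Promise<RouterDecision> {
  const prompt =
    "Classify this support query as exactly one label: billing, claims, or fallback.\n" +
    `Reply with only the label.\n\nQuery: ${query}\nLabel:`;
  const response = await llm.complete({ prompt });
  return parseRoute(response.text);
}
```

Because `decideRouteWithLLM` returns the same `RouterDecision` as the rule-based version, you can A/B the two routers behind one interface.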

Testing It

Run the script and confirm each sample question lands on the expected branch. The refund question should hit billing, the claim-document question should hit claims, and the password question should fall back.

If routing looks wrong, add logging inside routeQuery() and inspect the normalized input before matching. In production, keep a metrics counter per route so you can see which branch is getting traffic and which queries are ending up in fallback.

A good test is to add edge cases like “payment declined,” “claim payout delay,” and “update my plan.” You want to verify that routing stays predictable as your keyword sets grow.
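The per-route metrics counter mentioned above can start as a plain in-memory record. The names here (`routeCounts`, `recordRoute`, `fallbackRate`) are illustrative; in production you would report to a real metrics client instead:

```typescript
type RouteName = "billing" | "claims" | "fallback";

// Simple in-memory counter per route.
const routeCounts: Record<RouteName, number> = {
  billing: 0,
  claims: 0,
  fallback: 0,
};

// Call this inside routeAndAnswer() after the route is decided.
function recordRoute(route: RouteName): void {
  routeCounts[route] += 1;
}

// The fallback share tells you how often routing fails to match a branch.
function fallbackRate(): number {
  const total = routeCounts.billing + routeCounts.claims + routeCounts.fallback;
  return total === 0 ? 0 : routeCounts.fallback / total;
}
```

Watching `fallbackRate()` over time is a cheap way to notice when new query patterns outgrow your keyword sets.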

Next Steps

  • Replace rule-based routing with an LLM classifier using structured output
  • Add confidence thresholds and human escalation for low-confidence routes
  • Split each branch into separate tools so the router can dispatch across APIs, not just indexes
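As one sketch of the confidence-threshold idea above (the `ScoredDecision` type and the 0.7 cutoff are assumptions introduced here, not part of LlamaIndex): any classifier that reports a score can be clamped to fallback below a tuned threshold, which is where human escalation hooks in.

```typescript
type RouteName = "billing" | "claims" | "fallback";

type ScoredDecision = {
  route: RouteName;
  confidence: number; // 0..1, however your classifier reports it
};

// Tune this on real traffic; too high floods fallback, too low misroutes.
const CONFIDENCE_THRESHOLD = 0.7;

// Downgrade low-confidence routes to fallback so a human can take over.
function applyThreshold(decision: ScoredDecision): RouteName {
  return decision.confidence >= CONFIDENCE_THRESHOLD
    ? decision.route
    : "fallback";
}
```

Keeping the threshold in one place makes it easy to adjust per environment without touching routing logic.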

By Cyprian Aarons, AI Consultant at Topiax.