LlamaIndex Tutorial (Python): building conditional routing for advanced developers
This tutorial shows you how to build a conditional router in LlamaIndex that sends each query to the right tool or index based on the query content. You need this when one retriever is not enough and you want deterministic routing for things like policy docs, claims, billing, or general Q&A.
What You'll Need

- Python 3.10+
- `llama-index`
- `llama-index-llms-openai`
- `llama-index-embeddings-openai`
- An OpenAI API key in `OPENAI_API_KEY`
- A few local text files or documents to route across
- Basic familiarity with `VectorStoreIndex`, retrievers, and query engines
Step-by-Step

- Start by installing the packages and setting your API key. This example uses OpenAI for both embeddings and chat, so keep the environment clean before wiring the router.

```shell
pip install llama-index llama-index-llms-openai llama-index-embeddings-openai
export OPENAI_API_KEY="your-key-here"
```
- Build two separate indexes for two different domains. In a real system, these would usually be different corpora, such as claims policy docs and customer support docs.

```python
from llama_index.core import Document, VectorStoreIndex

claims_docs = [
    Document(text="Claims are processed within 5 business days after all documents are received."),
    Document(text="Fraud checks may delay claim settlement by up to 10 business days."),
]
billing_docs = [
    Document(text="Invoices are generated on the first day of each month."),
    Document(text="Failed payments trigger a retry after 3 days and then account review."),
]

claims_index = VectorStoreIndex.from_documents(claims_docs)
billing_index = VectorStoreIndex.from_documents(billing_docs)
```
- Create query engines for each index and wrap them as tools with clear metadata. The router depends on good descriptions, so be specific about when each tool should be used.

```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata

claims_engine = claims_index.as_query_engine()
billing_engine = billing_index.as_query_engine()

claims_tool = QueryEngineTool(
    query_engine=claims_engine,
    metadata=ToolMetadata(
        name="claims_policy",
        description="Use for questions about claim processing, settlement timing, fraud checks, and claim document requirements.",
    ),
)
billing_tool = QueryEngineTool(
    query_engine=billing_engine,
    metadata=ToolMetadata(
        name="billing_policy",
        description="Use for questions about invoices, payment retries, failed payments, monthly billing, and account review.",
    ),
)
```
- Define the routing logic with an LLM selector. For advanced use cases, this is where you enforce conditional behavior based on intent instead of raw semantic similarity alone.

```python
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini", temperature=0)
selector = LLMSingleSelector.from_defaults(llm=llm)

router = RouterQueryEngine(
    selector=selector,
    query_engine_tools=[claims_tool, billing_tool],
)
```
- Add a small conditional layer before calling the router if you need hard rules. This is useful when compliance requires certain phrases or entities to always go to a fixed path.

```python
def route_query(query: str):
    q = query.lower()
    if "invoice" in q or "payment" in q or "billing" in q:
        return billing_tool.query_engine.query(query)
    if "claim" in q or "fraud" in q or "settlement" in q:
        return claims_tool.query_engine.query(query)
    # No hard rule matched: fall back to the LLM selector.
    return router.query(query)

print(route_query("When are invoices generated?"))
print(route_query("How long does a claim take to settle?"))
print(route_query("What happens after a failed payment?"))
```
Testing It
Run the script and verify that billing questions return billing-specific answers while claims questions return claims-specific answers. Then try ambiguous prompts like “What happens after I submit documents?” and check whether the router chooses the most relevant tool or falls back to your hard-coded rule set.
If you want confidence beyond manual testing, create a small test table of queries and expected routes. In production, log both the selected tool name and final answer so you can audit misroutes later.
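That test table can be sketched in plain Python by checking the routing decision separately from the answers. The `select_route` helper and `TEST_TABLE` below are illustrative names, not LlamaIndex APIs; the helper mirrors the keyword rules from the hard-rule step above, and "router" marks queries that would fall through to the LLM selector.

```python
def select_route(query: str) -> str:
    """Mirror of the hard-rule layer: return which path a query would take."""
    q = query.lower()
    if "invoice" in q or "payment" in q or "billing" in q:
        return "billing_policy"
    if "claim" in q or "fraud" in q or "settlement" in q:
        return "claims_policy"
    return "router"  # ambiguous: falls through to the LLM selector

# Each row is (query, expected route).
TEST_TABLE = [
    ("When are invoices generated?", "billing_policy"),
    ("What happens after a failed payment?", "billing_policy"),
    ("How long does a claim take to settle?", "claims_policy"),
    ("Are fraud checks mandatory?", "claims_policy"),
    ("What happens after I submit documents?", "router"),
]

misroutes = [
    (q, expected, select_route(q))
    for q, expected in TEST_TABLE
    if select_route(q) != expected
]
print(f"{len(TEST_TABLE) - len(misroutes)}/{len(TEST_TABLE)} routes correct")
for q, expected, got in misroutes:
    print(f"MISROUTE: {q!r} expected={expected} got={got}")
```

Because the helper never touches the LLM or the indexes, this check runs instantly in CI; extend the same table with the router's actual selections once you start logging them.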
Next Steps

- Add a third tool for general FAQ or unstructured policy search.
- Replace `LLMSingleSelector` with `PydanticSingleSelector` when you want structured routing outputs.
- Add evals around route accuracy before shipping this into a regulated workflow.
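The selector swap is a small wiring change; a sketch, assuming the `claims_tool` and `billing_tool` defined in the steps above and a model that supports function calling:

```python
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import PydanticSingleSelector
from llama_index.llms.openai import OpenAI

# PydanticSingleSelector asks the model for a structured selection via
# function calling instead of parsing free-form text, which makes the
# routing decision less brittle to phrasing.
llm = OpenAI(model="gpt-4o-mini", temperature=0)
selector = PydanticSingleSelector.from_defaults(llm=llm)

router = RouterQueryEngine(
    selector=selector,
    query_engine_tools=[claims_tool, billing_tool],
)
```

The rest of the pipeline, including the hard-rule layer, is unchanged; only the fallback path behaves differently.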
Keep learning

- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies

By Cyprian Aarons, AI Consultant at Topiax.