LangChain Tutorial (Python): building conditional routing for advanced developers
This tutorial shows you how to build a conditional routing layer in LangChain that sends each user request to the right chain based on intent, complexity, or risk. You need this when one chain is not enough: for example, route simple FAQ questions to a fast retrieval chain, and route compliance-sensitive requests to a stricter path with extra checks.
What You'll Need
- Python 3.10+
- langchain
- langchain-openai
- langchain-community
- langchain-core
- An OpenAI API key in OPENAI_API_KEY
- Basic familiarity with: RunnableLambda, prompt templates, chat models, and LCEL composition
Install the packages:
pip install langchain langchain-openai langchain-community langchain-core
Set your API key:
export OPENAI_API_KEY="your-key-here"
Step-by-Step
- Start by defining the two routes you want to support. In production, these are usually different chains with different prompts, tools, or guardrails.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
fast_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise support assistant."),
    ("human", "{question}"),
])

strict_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a compliance-aware assistant. Be cautious and explicit."),
    ("human", "{question}"),
])
fast_chain = fast_prompt | llm | StrOutputParser()
strict_chain = strict_prompt | llm | StrOutputParser()
- Add a router function that inspects the input and returns a route name. Keep this deterministic if possible; don’t use an LLM for routing unless you actually need semantic classification.
def route_question(inputs: dict) -> str:
    q = inputs["question"].lower()
    risky_terms = ["policy", "legal", "compliance", "regulated", "contract"]
    if any(term in q for term in risky_terms):
        return "strict"
    return "fast"
- Wrap the router in RunnableLambda, then dispatch to the correct chain. This gives you a single runnable interface while keeping the branching logic explicit.
from langchain_core.runnables import RunnableLambda
def dispatch(inputs: dict):
    route = route_question(inputs)
    if route == "strict":
        return strict_chain.invoke(inputs)
    return fast_chain.invoke(inputs)

router_chain = RunnableLambda(dispatch)
- If you want cleaner composition, use RunnableBranch. This is the better pattern when routing rules grow beyond one or two branches.
from langchain_core.runnables import RunnableBranch
router_chain = RunnableBranch(
    (lambda x: any(term in x["question"].lower()
                   for term in ["policy", "legal", "compliance", "regulated", "contract"]),
     strict_chain),
    fast_chain,
)
- Run a few sample inputs and inspect the output path. In real systems, you’d also log the selected route so you can audit why a request was handled by a specific chain.
tests = [
    {"question": "What is your refund policy?"},
    {"question": "How do I reset my password?"},
    {"question": "Can you explain this contract clause?"},
]

for item in tests:
    print("=" * 80)
    print("Q:", item["question"])
    print("A:", router_chain.invoke(item))
Testing It
Run the script and confirm that policy or contract-related questions go through the strict prompt while generic support questions go through the fast prompt. If you use RunnableBranch, test both obvious matches and borderline cases so you know which branch wins.
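Borderline cases matter because the predicate above does plain substring matching. A quick pure-Python check of the same predicate (no LangChain or LLM calls needed; `is_strict` here is a stand-in for the lambda passed to RunnableBranch) makes the behavior concrete:

```python
# Stand-in for the RunnableBranch predicate -- same substring logic.
risky_terms = ["policy", "legal", "compliance", "regulated", "contract"]

def is_strict(x: dict) -> bool:
    return any(term in x["question"].lower() for term in risky_terms)

# Obvious matches behave as expected:
assert is_strict({"question": "What is your refund policy?"})
assert not is_strict({"question": "How do I reset my password?"})

# Borderline: "contraction" contains "contract" as a substring,
# so this harmless grammar question is routed to the strict branch.
assert is_strict({"question": "Is this contraction grammatically correct?"})
print("predicate checks passed")
```

False positives like the last case are why the strict branch should degrade gracefully rather than refuse outright.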
For production validation, add structured logs around the router decision and compare them against expected outcomes from a small labeled test set. You want to catch false positives early, especially if your “strict” path is more expensive or has more restrictive behavior.
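One way to sketch this: emit one structured log line per routing decision and compare against a small labeled set. The log schema and label format below are illustrative assumptions, not a LangChain API; `route_question` is the keyword router from the tutorial.

```python
import json

def route_question(inputs: dict) -> str:
    q = inputs["question"].lower()
    risky_terms = ["policy", "legal", "compliance", "regulated", "contract"]
    return "strict" if any(term in q for term in risky_terms) else "fast"

# Small labeled test set: (input, expected route)
labeled = [
    ({"question": "What is your refund policy?"}, "strict"),
    ({"question": "How do I reset my password?"}, "fast"),
    ({"question": "Can you explain this contract clause?"}, "strict"),
]

mismatches = []
for inputs, expected in labeled:
    route = route_question(inputs)
    # Structured log line you could ship to your log aggregator
    print(json.dumps({"event": "route_selected", "route": route,
                      "expected": expected, "question": inputs["question"]}))
    if route != expected:
        mismatches.append(inputs["question"])

assert mismatches == []  # every decision matches its label
```

Growing the labeled set as you see real traffic is cheap insurance against silent routing regressions.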
A good sanity check is to temporarily replace the LLM chains with mock outputs like "FAST" and "STRICT" so you can verify routing without paying model costs on every test run.
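A minimal version of that mock setup, assuming the keyword router from earlier (plain callables stand in for the LCEL chains here):

```python
# route_question mirrors the tutorial's keyword router.
def route_question(inputs: dict) -> str:
    q = inputs["question"].lower()
    risky_terms = ["policy", "legal", "compliance", "regulated", "contract"]
    return "strict" if any(term in q for term in risky_terms) else "fast"

# Constant mocks instead of real LLM chains -- zero model cost.
mock_chains = {
    "fast": lambda inputs: "FAST",
    "strict": lambda inputs: "STRICT",
}

def dispatch(inputs: dict) -> str:
    return mock_chains[route_question(inputs)](inputs)

print(dispatch({"question": "What is your refund policy?"}))  # STRICT
print(dispatch({"question": "How do I reset my password?"}))  # FAST
```

If you want the mocks to keep the same runnable interface as the real chains, wrapping them as `RunnableLambda(lambda _: "FAST")` works too.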
Next Steps
- Add an LLM-based classifier as a fallback router when keyword rules are not enough.
- Route into tool-enabled chains for workflows like account lookup, claims triage, or document extraction.
- Persist routing decisions with LangSmith so you can debug failures and tune branch logic over time.
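The first of those next steps can be sketched as a two-stage router: deterministic keywords fire first, and a semantic classifier handles everything else. The `classify` callable below is a stub standing in for an LLM-based classifier (in real use it could be a ChatOpenAI chain prompted to answer only "strict" or "fast"); the stub logic is purely illustrative.

```python
risky_terms = ["policy", "legal", "compliance", "regulated", "contract"]

def route_question(inputs: dict, classify=None) -> str:
    q = inputs["question"].lower()
    if any(term in q for term in risky_terms):
        return "strict"        # deterministic rule wins when it fires
    if classify is not None:
        return classify(q)     # semantic fallback for ambiguous inputs
    return "fast"

# Stub classifier standing in for an LLM-based one.
stub_classifier = lambda q: "strict" if "lawsuit" in q else "fast"

print(route_question({"question": "Could we face a lawsuit over this?"}, stub_classifier))  # strict
print(route_question({"question": "How do I reset my password?"}, stub_classifier))         # fast
```

Keeping the classifier injectable means you can swap the stub for a real LLM call without touching the routing logic, and run fast tests against the stub.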
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.