LangChain Tutorial (Python): building conditional routing for intermediate developers
This tutorial shows you how to build a LangChain router in Python that sends each user request to the right path based on intent. You need this when one chain is not enough and you want different handling for questions, summaries, and fallback cases without writing a full agent.
What You'll Need
- Python 3.10+
- langchain
- langchain-openai
- An OpenAI API key in OPENAI_API_KEY
- Basic familiarity with: ChatPromptTemplate, RunnableLambda, RunnableBranch, StrOutputParser
Install the packages:
pip install langchain langchain-openai
Set your API key:
export OPENAI_API_KEY="your-key-here"
Step-by-Step
- Start by defining the model and the small chains you want to route between. Keep each path focused on one job so the routing logic stays simple and testable.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
answer_prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer technical questions clearly and concisely."),
    ("human", "{input}")
])
summary_prompt = ChatPromptTemplate.from_messages([
    ("system", "You summarize text in 3 bullet points."),
    ("human", "{input}")
])
fallback_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant for general requests."),
    ("human", "{input}")
])
answer_chain = answer_prompt | llm | StrOutputParser()
summary_chain = summary_prompt | llm | StrOutputParser()
fallback_chain = fallback_prompt | llm | StrOutputParser()
- Add a routing function that inspects the input and returns a label. This is the simplest reliable pattern when your routing rules are business-driven and deterministic.
def route_input(text: str) -> str:
    lowered = text.lower()
    if any(word in lowered for word in ["summarize", "summary", "tl;dr"]):
        return "summary"
    if any(word in lowered for word in ["how", "what", "why", "explain"]):
        return "answer"
    return "fallback"
- Build the conditional router with RunnableBranch. The first matching condition wins, so order matters when your rules overlap.
from langchain_core.runnables import RunnableBranch, RunnableLambda
router = RunnableBranch(
    (lambda x: route_input(x["input"]) == "summary", summary_chain),
    (lambda x: route_input(x["input"]) == "answer", answer_chain),
    fallback_chain,
)
pipeline = RunnableLambda(lambda x: {"input": x}) | router
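The order-sensitivity is easy to see in isolation. Here is a minimal plain-Python sketch of the first-match-wins dispatch that RunnableBranch performs (no LangChain needed), using an input that matches both rule sets:

```python
# Emulate RunnableBranch's dispatch: conditions are checked in order,
# and the first one that returns True wins.
branches = [
    (lambda t: "summarize" in t.lower(), "summary"),
    (lambda t: "how" in t.lower(), "answer"),
]

def pick(text: str, default: str = "fallback") -> str:
    for condition, label in branches:
        if condition(text):
            return label
    return default

# "Summarize how caching works" matches both conditions;
# the summary branch is listed first, so it wins.
print(pick("Summarize how caching works"))  # summary
print(pick("Write a follow-up email"))      # fallback
```

If you flipped the two branches, the same input would route to "answer" instead, which is exactly why the summary branch comes first in the router above.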
- Run the pipeline with a few inputs and inspect the outputs. In production, you would replace these print calls with logs or traces, but direct output is enough to confirm routing behavior.
tests = [
    "Summarize this article about LangChain routers.",
    "How does RunnableBranch work?",
    "Write me a polite follow-up email."
]
for text in tests:
    print("\nINPUT:", text)
    print("OUTPUT:", pipeline.invoke(text))
- If you want more control, separate classification from execution. That gives you a clean place to add metrics, audit logs, or a human review step before dispatching to the final chain.
def classify_and_route(payload: dict) -> dict:
    label = route_input(payload["input"])
    return {"input": payload["input"], "route": label}

def dispatch(payload: dict):
    if payload["route"] == "summary":
        return summary_chain.invoke(payload)
    if payload["route"] == "answer":
        return answer_chain.invoke(payload)
    return fallback_chain.invoke(payload)
classified_pipeline = RunnableLambda(classify_and_route) | RunnableLambda(dispatch)
print(classified_pipeline.invoke({"input": "Summarize this contract clause."}))
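One concrete benefit of the split: the classification step is plain Python, so you can bolt on an audit trail without touching any chain. A sketch under the assumption that an in-memory list stands in for your real logger; the router parameter is an illustrative tweak (not in the version above) that also makes the function trivial to unit-test with a stub:

```python
audit_log: list[dict] = []

def classify_and_route(payload: dict, router) -> dict:
    # router is any callable mapping text -> route label,
    # e.g. route_input from earlier in this tutorial.
    label = router(payload["input"])
    entry = {"input": payload["input"], "route": label}
    audit_log.append(entry)  # swap for structured logging in production
    return entry

# A stub router keeps this sketch self-contained:
stub = lambda t: "summary" if "tl;dr" in t.lower() else "fallback"
classify_and_route({"input": "tl;dr please"}, router=stub)
print(audit_log[-1]["route"])  # summary
```

Every routing decision now lands in one place before dispatch, which is exactly where metrics or a human-review gate would hook in.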
Testing It
Run the script with three types of inputs: a summarization request, a technical question, and something unrelated. You should see each request hit a different prompt template based on the keyword rules.
If everything routes correctly, your summary output should look like bullets or compressed prose, while question inputs should get direct answers. The fallback path should handle generic requests without failing.
For production testing, add unit tests around route_input() first. That function is your decision boundary, so it should be covered before you worry about model quality.
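A starting point for those tests, with route_input repeated so the file runs standalone (bare asserts shown here; pytest parametrization works just as well):

```python
def route_input(text: str) -> str:
    lowered = text.lower()
    if any(word in lowered for word in ["summarize", "summary", "tl;dr"]):
        return "summary"
    if any(word in lowered for word in ["how", "what", "why", "explain"]):
        return "answer"
    return "fallback"

# One case per route, plus a keyword-overlap case to pin down ordering.
cases = {
    "Give me a TL;DR of this thread": "summary",
    "Why is the sky blue?": "answer",
    "Draft a polite follow-up email": "fallback",
    "Summarize how caching works": "summary",  # overlap: summary checked first
}

for text, expected in cases.items():
    assert route_input(text) == expected, f"{text!r} -> {route_input(text)}"
print("all routing cases pass")
```

The overlap case is the one most worth keeping: it documents the branch ordering so a future refactor can't silently flip it.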
Next Steps
- Replace keyword routing with an LLM classifier using structured output
- Add confidence thresholds and human-in-the-loop escalation
- Persist routing decisions to logs for auditability
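For the first of these, a hedged sketch of what an LLM classifier could look like using `with_structured_output` with a Pydantic schema. The RouteDecision model and the prompt wording are illustrative, and the classifier is built lazily so the schema can be read and tested without an API key:

```python
from typing import Literal

from pydantic import BaseModel, Field

class RouteDecision(BaseModel):
    """Structured routing verdict returned by the classifier LLM."""
    route: Literal["summary", "answer", "fallback"] = Field(
        description="Which chain should handle this request"
    )

def build_llm_router():
    # Requires OPENAI_API_KEY; constructed lazily so importing this
    # module does not need network access.
    from langchain_openai import ChatOpenAI
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    return llm.with_structured_output(RouteDecision)

def llm_route(text: str) -> str:
    decision = build_llm_router().invoke(
        f"Classify this user request for routing: {text}"
    )
    return decision.route
```

Calls to route_input could then be swapped for llm_route with RunnableBranch left unchanged, and the Literal type guarantees the label is always one of the three known routes.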
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit