AutoGen Tutorial (Python): building conditional routing for intermediate developers
This tutorial shows how to build conditional routing in AutoGen so an agent can choose different execution paths based on message content, tool results, or simple business rules. You need this when one assistant is not enough and you want a controller that sends finance, support, or escalation cases to the right specialist without hardcoding every branch.
What You'll Need
- Python 3.10+
- autogen-agentchat installed
- An OpenAI-compatible API key
- Basic familiarity with AutoGen agents and chat messages
- A terminal and a virtual environment
- Optional: python-dotenv if you want to load secrets from .env
Step-by-Step
- Start with a small routing model: one user-facing assistant, two specialist agents, and a router function that decides where each request should go. The key idea is that routing happens in Python before you hand work to an agent.

```python
from autogen_agentchat.agents import AssistantAgent

finance_agent = AssistantAgent(
    name="finance_agent",
    model_client=None,  # replaced with a real client in the next step
    system_message="You handle billing, refunds, invoices, and payment issues.",
)

support_agent = AssistantAgent(
    name="support_agent",
    model_client=None,  # replaced with a real client in the next step
    system_message="You handle product usage, bugs, and account access issues.",
)

def route_request(text: str) -> str:
    text = text.lower()
    if any(word in text for word in ["refund", "invoice", "billing", "payment"]):
        return "finance"
    return "support"
```
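As the keyword lists grow, a table-driven variant of the router keeps the precedence of rules explicit. This is a sketch for illustration; `route_request_table` and `ROUTING_RULES` are names invented here, not part of AutoGen or the tutorial's dispatch path.

```python
# Ordered rules: earlier entries win, so put more specific routes first.
ROUTING_RULES = [
    ("finance", ["refund", "invoice", "billing", "payment"]),
    ("support", ["bug", "crash", "login", "password"]),
]

def route_request_table(text: str, default: str = "support") -> str:
    text = text.lower()
    for route, keywords in ROUTING_RULES:
        if any(keyword in text for keyword in keywords):
            return route
    return default

print(route_request_table("I need a refund"))     # finance
print(route_request_table("A general question"))  # support (default)
```

Adding a new specialist then means appending one tuple instead of editing branch logic.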
- Wire the router into a dispatcher function. In production, this is where you attach your actual model client; for the tutorial, the dispatch logic is the important part. AssistantAgent takes its model client at construction time, so the cleanest approach is to re-create the specialists with the real client rather than assigning an attribute afterwards.

```python
import os

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
)

# Re-create the specialists, this time with the real client.
finance_agent = AssistantAgent(
    name="finance_agent",
    model_client=model_client,
    system_message="You handle billing, refunds, invoices, and payment issues.",
)
support_agent = AssistantAgent(
    name="support_agent",
    model_client=model_client,
    system_message="You handle product usage, bugs, and account access issues.",
)

async def dispatch(user_text: str) -> str:
    route = route_request(user_text)
    agent = finance_agent if route == "finance" else support_agent
    result = await agent.run(task=user_text)
    return result.messages[-1].content
```
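Because the real dispatch needs a live model client, it helps to exercise the branching with a stand-in first. The `FakeAgent` below is a hypothetical test double, not an AutoGen class, and it returns a plain string where the real `agent.run` returns a result whose last message you would read.

```python
import asyncio

class FakeAgent:
    """Hypothetical stand-in for AssistantAgent so routing runs offline."""
    def __init__(self, name: str):
        self.name = name

    async def run(self, task: str) -> str:
        return f"[{self.name}] would answer: {task}"

def route_request(text: str) -> str:
    text = text.lower()
    if any(word in text for word in ["refund", "invoice", "billing", "payment"]):
        return "finance"
    return "support"

async def dispatch(user_text: str, finance, support) -> str:
    # Same branch selection as the real dispatcher, minus the model call.
    agent = finance if route_request(user_text) == "finance" else support
    return await agent.run(task=user_text)

reply = asyncio.run(
    dispatch("Refund my invoice", FakeAgent("finance_agent"), FakeAgent("support_agent"))
)
print(reply)  # [finance_agent] would answer: Refund my invoice
```

Swapping the fakes for real agents later changes nothing about the branch logic, which is the part worth testing.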
- Add a second condition for escalation. This pattern is useful when the first agent can answer most requests, but certain keywords or confidence signals should send the case to a human-review path or a stricter specialist.

```python
def should_escalate(text: str) -> bool:
    text = text.lower()
    keywords = ["fraud", "chargeback", "legal", "complaint", "cancel my account"]
    return any(keyword in text for keyword in keywords)

async def dispatch_with_escalation(user_text: str) -> str:
    if should_escalate(user_text):
        return "Escalate to human review."
    route = route_request(user_text)
    agent = finance_agent if route == "finance" else support_agent
    result = await agent.run(task=user_text)
    return result.messages[-1].content
```
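The escalation check deliberately runs before keyword routing. Collapsing the three outcomes into one pure function makes that precedence easy to unit-test; `classify` is a sketch for illustration, not part of the tutorial's dispatch path.

```python
ESCALATION = ["fraud", "chargeback", "legal", "complaint", "cancel my account"]
FINANCE = ["refund", "invoice", "billing", "payment"]

def classify(text: str) -> str:
    # Order matters: escalation keywords win even when a finance keyword also matches.
    text = text.lower()
    if any(keyword in text for keyword in ESCALATION):
        return "escalate"
    if any(keyword in text for keyword in FINANCE):
        return "finance"
    return "support"

print(classify("I want a chargeback on this invoice"))  # escalate, not finance
```

If the checks were reversed, "chargeback on this invoice" would match the finance list first and never reach human review.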
- Build a small async entry point and test multiple branches. This makes it obvious whether your routing rules are doing what you expect before you put them behind an API endpoint.

```python
import asyncio

async def main():
    samples = [
        "I need a refund for last month's subscription.",
        "The app crashes when I open settings.",
        "I want to report fraud on my account.",
    ]
    for sample in samples:
        output = await dispatch_with_escalation(sample)
        print(f"\nUSER: {sample}")
        print(f"ROUTED RESPONSE: {output}")

if __name__ == "__main__":
    asyncio.run(main())
```
- If you want more than keyword routing, make the router itself an LLM-backed classifier. That gives you better coverage for messy user language while keeping the control flow in your codebase. Note that the client's create method takes typed message objects, not raw dicts.

```python
from autogen_core.models import UserMessage

async def llm_route(user_text: str) -> str:
    prompt = (
        "Classify this request as exactly one of: finance, support, escalate.\n"
        f"Request: {user_text}"
    )
    result = await model_client.create([UserMessage(content=prompt, source="user")])
    label = str(result.content).strip().lower()
    if label not in {"finance", "support", "escalate"}:
        return "support"
    return label

async def dispatch_llm_routed(user_text: str) -> str:
    route = await llm_route(user_text)
    if route == "escalate":
        return "Escalate to human review."
    agent = finance_agent if route == "finance" else support_agent
    result = await agent.run(task=user_text)
    return result.messages[-1].content
```
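Model output is rarely as clean as the membership check above assumes; stray punctuation or casing would push valid answers to the fallback. A small normalizer keeps those labels from being discarded. This is a hypothetical helper sketched for illustration, not an AutoGen utility.

```python
VALID_ROUTES = {"finance", "support", "escalate"}

def normalize_label(raw: str, default: str = "support") -> str:
    """Map noisy classifier output like ' Finance. ' onto a known route."""
    label = raw.strip(" \t\n.\"'").lower()
    return label if label in VALID_ROUTES else default

print(normalize_label(" Finance. "))             # finance
print(normalize_label("I think support fits."))  # support (fallback, not a match)
```

Anything that still fails the membership check falls back to the default route rather than raising.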
Testing It
Run the script with three or four test prompts that clearly hit different branches. You want one billing case, one product-support case, and one escalation case so you can confirm the router is not sending everything to the same agent.
If you use the keyword router first, verify that your conditions are ordered correctly. In real systems, broad rules like “support” can accidentally swallow more specific cases unless you check escalation and finance first.
If you switch to LLM-based routing, log both the raw classifier output and the final branch selected. That gives you traceability when someone asks why a complaint was routed to human review instead of support.
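One lightweight way to get that traceability, assuming you own the dispatch code, is to build a structured decision record and log it at the moment the branch is chosen. `record_route` is an illustrative name, not an AutoGen API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("router")

def record_route(user_text: str, raw_label: str, branch: str) -> dict:
    """Emit one structured log line per routing decision and return it for audits."""
    decision = {
        "raw_label": raw_label,          # what the classifier actually said
        "branch": branch,                # what the dispatcher did with it
        "text_preview": user_text[:80],  # enough context without logging everything
    }
    log.info("route_decision %s", decision)
    return decision

record_route("I want to file a complaint", "Escalate.", "escalate")
```

Returning the dict as well as logging it makes the same record easy to persist later for audit purposes.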
Next Steps
- Replace keyword routing with structured classification using JSON output from your router agent.
- Add conversation state so routing can change based on prior turns, not just the latest message.
- Put this behind FastAPI and persist route decisions for audit logs and compliance review.
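The first of these next steps can be sketched as a parser for the router's JSON output that falls back safely on malformed responses. The `route`/`confidence` schema here is an assumption, not something the tutorial's router emits today.

```python
import json

VALID_ROUTES = {"finance", "support", "escalate"}

def parse_route_json(raw: str, default: str = "support") -> str:
    """Parse classifier output like '{"route": "finance", "confidence": 0.92}'."""
    try:
        data = json.loads(raw)
        route = str(data.get("route", "")).lower()
    except (json.JSONDecodeError, AttributeError):
        # Not JSON, or JSON that isn't an object: take the safe default.
        return default
    return route if route in VALID_ROUTES else default

print(parse_route_json('{"route": "Finance", "confidence": 0.92}'))  # finance
print(parse_route_json("not json at all"))                           # support
```

Keeping the fallback inside the parser means the dispatcher never has to handle malformed classifier output itself.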
By Cyprian Aarons, AI Consultant at Topiax.