AutoGen Tutorial (Python): building conditional routing for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to build conditional routing in AutoGen with Python so an agent can decide whether to answer directly, hand off to a specialist, or stop and ask for more context. You need this when one flat assistant is too brittle for real workflows like claims triage, KYC review, or policy Q&A where the next action depends on the content of the request.

What You'll Need

  • Python 3.10+
  • autogen-agentchat
  • autogen-ext
  • An OpenAI-compatible API key exported as OPENAI_API_KEY
  • Basic familiarity with AutoGen agents and message passing
  • A terminal and a virtual environment

Step-by-Step

  1. Install the packages and set up a minimal runtime.
    Use the OpenAI chat completion client from autogen_ext so the example stays close to production usage.
pip install -U autogen-agentchat autogen-ext openai
export OPENAI_API_KEY="your-key-here"
  2. Create three agents: a router, a claims specialist, and a generalist.
    The router will inspect the user message and return a routing label, which your Python code will use to choose the next agent.
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

router = AssistantAgent(
    name="router",
    model_client=model_client,
    system_message=(
        "You are a routing agent. Classify each request as one of: "
        "claims, general, or clarify. Reply with only the label."
    ),
)

claims_agent = AssistantAgent(
    name="claims_specialist",
    model_client=model_client,
    system_message="You handle insurance claims questions with concise, accurate answers.",
)

general_agent = AssistantAgent(
    name="generalist",
    model_client=model_client,
    system_message="You answer general insurance questions clearly and briefly.",
)
  3. Add a routing function that turns the router output into control flow.
    This is the key pattern: let the model classify intent, then keep orchestration in Python, where it is deterministic and testable. Note that AssistantAgent instances keep conversation history across run calls, so for high-volume or stateless routing you may want to reset or recreate the router between requests.
async def route_request(user_text: str) -> str:
    result = await router.run(task=user_text)
    label = result.messages[-1].content.strip().lower()
    if "claims" in label:
        return "claims"
    if "general" in label:
        return "general"
    return "clarify"
  4. Build the conditional dispatcher that calls the right specialist.
    If the router says clarify, ask for missing details instead of sending junk downstream.
async def handle_user_request(user_text: str) -> str:
    route = await route_request(user_text)

    if route == "claims":
        result = await claims_agent.run(task=user_text)
        return result.messages[-1].content

    if route == "general":
        result = await general_agent.run(task=user_text)
        return result.messages[-1].content

    return (
        "I need one more detail before I can route this: "
        "is this about a claim, policy question, billing issue, or something else?"
    )
  5. Wrap it in an executable entry point and test multiple paths.
    Keep this simple first; once routing works locally, you can plug it into FastAPI or your queue worker.
async def main():
    tests = [
        "What documents do I need to submit for a car accident claim?",
        "How does deductible work on my policy?",
        "I have a question about my account",
    ]

    for text in tests:
        response = await handle_user_request(text)
        print(f"\nUSER: {text}\nBOT: {response}")

    # Close the shared model client to release its HTTP resources.
    await model_client.close()

if __name__ == "__main__":
    asyncio.run(main())

Testing It

Run the script and check that each input lands on the expected path. A claims-related prompt should go to claims_specialist, a broad policy question should go to generalist, and vague input should trigger the clarification response.

If routing looks inconsistent, inspect result.messages[-1].content from the router call first. In practice, most bugs here come from overly loose labels or prompts that allow extra text instead of a single token.
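One way to harden against loose labels is to validate the router's reply against a closed set instead of substring matching. A minimal sketch, assuming a hypothetical helper named parse_route_label (not part of AutoGen) that you would call on result.messages[-1].content:

```python
# Allowed routing labels; anything else falls back to "clarify".
ALLOWED_ROUTES = {"claims", "general", "clarify"}

def parse_route_label(raw: str) -> str:
    """Normalize the router's reply and reject anything outside the allowed set."""
    label = raw.strip().lower().rstrip(".")
    # Take only the first word in case the model added extra text.
    words = label.split()
    first = words[0] if words else ""
    return first if first in ALLOWED_ROUTES else "clarify"
```

Unlike the substring check in route_request, this treats a chatty reply like "I think this is a claims question" as a failed classification and routes it to clarification rather than guessing.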

For production, add logging around both the classification label and final agent selection. That gives you traceability when someone asks why a customer was sent to a specific workflow.
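That logging can live in a small dispatcher that takes the classifier and handlers as injected callables, which also keeps it unit-testable without a model. A sketch, assuming a hypothetical dispatch_with_logging helper (the name and signature are illustrative, not an AutoGen API):

```python
import logging

logger = logging.getLogger("routing")

async def dispatch_with_logging(user_text, classify, handlers, fallback):
    """Classify the request, log the decision, then call the chosen handler.

    classify -- async callable returning a route label
    handlers -- dict mapping labels to async handler callables
    fallback -- async callable used when no handler matches the label
    """
    route = await classify(user_text)
    handler = handlers.get(route, fallback)
    # Log both the classification label and the final agent selection
    # so routing decisions can be audited later.
    logger.info("route=%s handler=%s text=%r",
                route, getattr(handler, "__name__", "?"), user_text[:80])
    return await handler(user_text)
```

In the tutorial's setup, classify would wrap route_request and handlers would map "claims" and "general" to the two specialist agents' run calls.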

Next Steps

  • Replace string matching with structured JSON output from the router for stricter control.
  • Add more routes like billing, fraud, and escalation, then map them to separate agents.
  • Put this behind an API endpoint and add unit tests for routing decisions using fixed prompts.
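For the structured-output route, one common approach is to prompt the router to return a small JSON object and parse it defensively in Python. A minimal sketch, assuming a hypothetical parse_router_json helper and a JSON schema of your own choosing (neither is an AutoGen feature):

```python
import json

VALID_ROUTES = {"claims", "general", "clarify"}

def parse_router_json(raw: str) -> str:
    """Parse a router reply like {"route": "claims", "confidence": 0.9}.

    Any parse failure or unknown route falls back to "clarify" so bad
    model output never crashes the dispatcher.
    """
    try:
        payload = json.loads(raw)
        route = str(payload.get("route", "")).lower()
    except (json.JSONDecodeError, AttributeError, TypeError):
        return "clarify"
    return route if route in VALID_ROUTES else "clarify"
```

You would pair this with a router system message that demands JSON only; the fallback behavior makes the parser safe to drop in without changing handle_user_request's contract.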
