AutoGen Tutorial (Python): adding tool use for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add a real tool to an AutoGen Python agent so it can call Python functions during a conversation. You need this when plain chat is not enough and the agent has to fetch data, compute something, or interact with your own systems.

What You'll Need

  • Python 3.10+
  • autogen-agentchat
  • autogen-ext
  • An OpenAI API key set in your environment
  • Basic familiarity with AssistantAgent and UserProxyAgent
  • A local Python file where you can run the example

Install the packages:

pip install autogen-agentchat "autogen-ext[openai]"

Set your API key:

export OPENAI_API_KEY="your-key-here"

Step-by-Step

  1. Start by defining a normal Python function that will become your tool. Keep it simple and deterministic so the model can use it reliably.
from typing import Annotated

def calculate_premium(
    age: Annotated[int, "Customer age"],
    vehicle_value: Annotated[float, "Vehicle value in USD"],
) -> float:
    """Estimate an annual insurance premium."""
    base_rate = 0.05
    age_factor = 1.2 if age < 25 else 1.0
    premium = vehicle_value * base_rate * age_factor
    return round(premium, 2)
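Before wiring the function into an agent, it is worth sanity-checking it directly so any bugs show up as plain Python errors rather than confusing tool-call failures. A quick standalone check (the function is repeated here so the snippet runs on its own):

```python
from typing import Annotated

def calculate_premium(
    age: Annotated[int, "Customer age"],
    vehicle_value: Annotated[float, "Vehicle value in USD"],
) -> float:
    """Estimate an annual insurance premium."""
    base_rate = 0.05
    age_factor = 1.2 if age < 25 else 1.0
    return round(vehicle_value * base_rate * age_factor, 2)

# Under 25 pays the young-driver surcharge: 18000 * 0.05 * 1.2
print(calculate_premium(22, 18000.0))  # 1080.0
# 25 and over pays the base rate: 18000 * 0.05
print(calculate_premium(30, 18000.0))  # 900.0
```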
  2. Wrap that function as an AutoGen tool and expose it to the assistant. AutoGen uses type hints and docstrings to build the tool schema, so make them clear.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.tools import FunctionTool

premium_tool = FunctionTool(
    calculate_premium,
    description="Estimate an annual insurance premium.",
)

model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
)

agent = AssistantAgent(
    name="insurance_assistant",
    model_client=model_client,
    tools=[premium_tool],
)
  3. Send a prompt that requires the tool instead of asking the model to guess. The agent will decide when to call the function based on the user request.
from autogen_agentchat.messages import TextMessage

async def main():
    result = await agent.run(
        task=TextMessage(
            content="Estimate the annual premium for a 22-year-old driver with a vehicle worth 18000 USD.",
            source="user",
        )
    )

    print(result.messages[-1].content)

if __name__ == "__main__":
    asyncio.run(main())
  4. If you want more control, inspect what happened during the run. In production, this is where you log tool calls, inputs, outputs, and any failures.
async def main():
    result = await agent.run(
        task=TextMessage(
            content="Estimate the annual premium for a 22-year-old driver with a vehicle worth 18000 USD.",
            source="user",
        )
    )

    for message in result.messages:
        print(f"{message.source}: {getattr(message, 'content', '')}")

if __name__ == "__main__":
    asyncio.run(main())
  5. Add another tool when you need a second capability. The pattern stays the same: write a typed Python function, wrap it with FunctionTool, then pass it into tools=[...].
def lookup_discount(customer_tier: Annotated[str, "Customer tier"]) -> float:
    """Return a discount rate for a customer tier."""
    discounts = {
        "bronze": 0.0,
        "silver": 0.05,
        "gold": 0.10,
    }
    return discounts.get(customer_tier.lower(), 0.0)

discount_tool = FunctionTool(
    lookup_discount,
    description="Look up customer discount rate.",
)

agent_with_two_tools = AssistantAgent(
    name="insurance_assistant_v2",
    model_client=model_client,
    tools=[premium_tool, discount_tool],
)
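The agent can chain the two tools on its own, but the arithmetic they implement is easy to verify by hand, which makes a useful check when debugging a run. A standalone sketch (both functions repeated in simplified form so it runs on its own):

```python
def calculate_premium(age: int, vehicle_value: float) -> float:
    """Estimate an annual insurance premium."""
    base_rate = 0.05
    age_factor = 1.2 if age < 25 else 1.0
    return round(vehicle_value * base_rate * age_factor, 2)

def lookup_discount(customer_tier: str) -> float:
    """Return a discount rate for a customer tier."""
    discounts = {"bronze": 0.0, "silver": 0.05, "gold": 0.10}
    return discounts.get(customer_tier.lower(), 0.0)

# What a correct two-tool answer should work out to:
premium = calculate_premium(22, 18000.0)    # 1080.0
discount = lookup_discount("gold")          # 0.10
final = round(premium * (1 - discount), 2)  # 972.0
print(final)
```

If the agent's final answer diverges from a hand computation like this, the problem is usually in how it chained the calls, not in the tools themselves.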

Testing It

Run the script and ask for something that clearly requires calculation, like premium estimation or discount lookup. If tool use is working, the assistant should return a concrete numeric answer rather than vague text.

You should also try an input that does not need tools to confirm normal chat still works. If you want deeper verification, print every message in result.messages and look for intermediate tool-call messages before the final response.

If the model ignores the tool, check three things first: your function signature uses type hints, your docstring describes the behavior clearly, and the prompt actually asks for something that benefits from computation.
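Since the schema is derived from the type hints and docstring, you can preview exactly what raw material the tool wrapper has to work with using only the standard library. This sketch uses `typing.get_type_hints` and `inspect` to surface the annotations (it mirrors the kind of introspection FunctionTool performs, though the library's actual schema-building logic may differ):

```python
import inspect
from typing import Annotated, get_args, get_type_hints

def calculate_premium(
    age: Annotated[int, "Customer age"],
    vehicle_value: Annotated[float, "Vehicle value in USD"],
) -> float:
    """Estimate an annual insurance premium."""
    ...

# include_extras=True keeps the Annotated metadata instead of
# collapsing the hint down to the bare type.
hints = get_type_hints(calculate_premium, include_extras=True)
for name, hint in hints.items():
    if name == "return":
        continue
    base_type, description = get_args(hint)
    print(f"{name}: {base_type.__name__} - {description}")

print(inspect.getdoc(calculate_premium))
```

If a parameter shows up here with no type or no description, fix the function signature before suspecting the model.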

Next Steps

  • Add error handling inside tools so bad inputs return clean exceptions or fallback values.
  • Move from single tools to multi-agent workflows when one agent should plan and another should execute.
  • Connect tools to real services like internal pricing APIs, policy databases, or claim systems instead of pure Python functions.
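As a sketch of the first bullet: a tool can validate its inputs and raise with a clear message rather than crashing mid-call, so the model sees readable error text it can relay back to the user. The validation rules below are illustrative, not from the original example:

```python
from typing import Annotated

def calculate_premium(
    age: Annotated[int, "Customer age"],
    vehicle_value: Annotated[float, "Vehicle value in USD"],
) -> float:
    """Estimate an annual insurance premium. Raises ValueError on bad input."""
    # Reject impossible inputs with a message the model can surface.
    if not 16 <= age <= 120:
        raise ValueError(f"age must be between 16 and 120, got {age}")
    if vehicle_value <= 0:
        raise ValueError(f"vehicle_value must be positive, got {vehicle_value}")
    age_factor = 1.2 if age < 25 else 1.0
    return round(vehicle_value * 0.05 * age_factor, 2)

# A bad input now produces a clean, explainable failure.
try:
    calculate_premium(12, 18000.0)
except ValueError as exc:
    print(f"tool error: {exc}")
```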

By Cyprian Aarons, AI Consultant at Topiax.