AutoGen Tutorial (Python): adding tool use for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to add tool use to an AutoGen agent in Python using a real function-calling workflow. You need this when your agent must do more than chat: fetch data, query internal systems, validate inputs, or trigger deterministic actions without hand-rolling orchestration.

What You'll Need

  • Python 3.10+
  • pyautogen installed
  • An OpenAI-compatible API key
  • Access to a model that supports tool/function calling
  • Basic familiarity with AutoGen AssistantAgent and UserProxyAgent
  • A terminal and a Python file to run the example

Step-by-Step

  1. Start by installing AutoGen and setting your API key. Keep the model config explicit so you can swap providers later without rewriting the agent logic.
pip install pyautogen
import os

os.environ["OPENAI_API_KEY"] = "your-api-key-here"
  2. Define a tool as a normal Python function with a clean signature and type hints. In production, keep tool logic deterministic and side-effect free unless you explicitly want the agent to trigger external actions.
from typing import Annotated

def get_policy_status(policy_id: Annotated[str, "Policy identifier"]) -> str:
    mock_db = {
        "POL123": "Active",
        "POL456": "Lapsed",
        "POL789": "Pending underwriting",
    }
    return mock_db.get(policy_id.upper(), "Policy not found")
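Because the tool is a plain function, you can sanity-check it directly before any agent is involved. A minimal standalone check (redefining the same function so the snippet runs on its own):

```python
from typing import Annotated

def get_policy_status(policy_id: Annotated[str, "Policy identifier"]) -> str:
    # Mock lookup table standing in for a real policy database
    mock_db = {
        "POL123": "Active",
        "POL456": "Lapsed",
        "POL789": "Pending underwriting",
    }
    return mock_db.get(policy_id.upper(), "Policy not found")

# Deterministic and side-effect free: same input, same output, every call
print(get_policy_status("pol456"))  # → Lapsed (lowercase input is normalized)
print(get_policy_status("POL999"))  # → Policy not found
```

Checking tool behavior in isolation first makes it much easier to tell, later, whether a bad answer came from the tool or from the model's routing.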
  3. Register the tool with an assistant agent using register_for_llm, then expose the same function to the user proxy for execution. This is the core pattern: the model decides when to call the tool, and Python actually runs it.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    ],
    "temperature": 0,
}

assistant = AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

assistant.register_for_llm(name="get_policy_status", description="Look up policy status.")(
    get_policy_status
)
user_proxy.register_for_execution(name="get_policy_status")(get_policy_status)
  4. Send a prompt that clearly requires tool use. If the model is configured correctly, it will call the function instead of guessing the answer.
result = user_proxy.initiate_chat(
    assistant,
    message="Check policy POL456 and tell me its current status.",
)

print(result.summary)
  5. Add a second tool if you want to see how AutoGen handles multiple capabilities in one agent loop. This is where advanced developers usually start building real assistants: one agent, several bounded tools, strict outputs.
def calculate_premium(
    base_premium: Annotated[float, "Base premium amount"],
    risk_factor: Annotated[float, "Risk multiplier"],
) -> float:
    return round(base_premium * risk_factor, 2)

assistant.register_for_llm(
    name="calculate_premium",
    description="Calculate premium from base premium and risk factor.",
)(calculate_premium)
user_proxy.register_for_execution(name="calculate_premium")(calculate_premium)

result = user_proxy.initiate_chat(
    assistant,
    message="Use calculate_premium for base premium 1200.0 and risk factor 1.15.",
)

print(result.summary)
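As with the first tool, it is worth unit-checking the new capability in isolation before the model is allowed to route calls to it. A standalone sketch (the function redefined here so the snippet runs on its own):

```python
def calculate_premium(base_premium: float, risk_factor: float) -> float:
    # Deterministic pricing rule: premium scaled by risk, rounded to cents
    return round(base_premium * risk_factor, 2)

print(calculate_premium(1200.0, 1.15))  # → 1380.0
print(calculate_premium(850.0, 0.9))    # → 765.0
```

If this direct call gives the right answer but the agent conversation does not, the problem is in tool routing or registration, not in the tool itself.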

Testing It

Run the script and watch for a tool call in the conversation trace rather than a direct natural-language answer. If you see "Policy not found" for known IDs, check that both registration calls point to the same function name.

A good sanity test is to ask for both known and unknown policy IDs so you can confirm the agent is actually invoking the function each time. Also verify that temperature=0 keeps behavior stable while you are validating tool routing.

If you want stricter observability, log inside each tool function before returning values. In production systems, that gives you an audit trail for every model-driven action without depending on LLM output alone.
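One way to get that logging without touching each tool body is a small decorator, sketched here with the standard-library logging module. The audited helper is a hypothetical name, not part of AutoGen:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("tool-audit")

def audited(func):
    """Log every tool invocation and its result before returning."""
    @functools.wraps(func)  # preserve name/annotations that AutoGen introspects
    def wrapper(*args, **kwargs):
        logger.info("tool=%s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logger.info("tool=%s result=%r", func.__name__, result)
        return result
    return wrapper

@audited
def get_policy_status(policy_id: str) -> str:
    mock_db = {"POL123": "Active", "POL456": "Lapsed"}
    return mock_db.get(policy_id.upper(), "Policy not found")

print(get_policy_status("POL123"))  # → Active (with two audit log lines)
```

functools.wraps matters here: it keeps the wrapped function's name and type annotations intact, which tool registration relies on when it builds the function schema for the model.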

Next Steps

  • Add input validation with Pydantic before your tool executes business logic.
  • Replace mock data with real service calls through internal APIs or databases.
  • Learn AutoGen group chat patterns so multiple agents can share tools safely.
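A minimal sketch of the first bullet, assuming pydantic v2 is available. PremiumInput is a hypothetical model name, and the constraints shown are illustrative, not AutoGen requirements:

```python
from pydantic import BaseModel, Field, ValidationError

class PremiumInput(BaseModel):
    # Reject nonsensical inputs before any business logic runs
    base_premium: float = Field(gt=0)
    risk_factor: float = Field(gt=0, le=10)

def calculate_premium(base_premium: float, risk_factor: float) -> float:
    params = PremiumInput(base_premium=base_premium, risk_factor=risk_factor)
    return round(params.base_premium * params.risk_factor, 2)

print(calculate_premium(1200.0, 1.15))  # → 1380.0

try:
    calculate_premium(-50.0, 1.15)  # fails validation, never reaches the math
except ValidationError as exc:
    print("rejected:", exc.error_count(), "validation error(s)")
```

Validating at the tool boundary means a hallucinated or malformed argument from the model raises a clear error instead of silently producing a wrong premium.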


By Cyprian Aarons, AI Consultant at Topiax.
