AutoGen Tutorial (Python): building custom tools for beginners
This tutorial shows you how to build a custom tool in AutoGen, wire it into an assistant agent, and call it from a conversation. You need this when the language model alone is not enough and you want your agent to query internal logic, validate inputs, or trigger deterministic Python code.
What You'll Need
- Python 3.10+
- autogen-agentchat
- autogen-ext
- An OpenAI API key
- Basic familiarity with AutoGen agents and chat messages
- A terminal and a virtual environment
Install the packages:
```bash
pip install -U autogen-agentchat autogen-ext openai
```
Set your API key:
```bash
export OPENAI_API_KEY="your-key-here"
```
Step-by-Step
- Start by creating a simple Python function that does one job well. For beginners, keep the tool deterministic and easy to test, like converting dollars to cents or validating a policy number.

```python
def dollars_to_cents(amount: float) -> int:
    """Convert a dollar amount to cents."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return int(round(amount * 100))

print(dollars_to_cents(12.34))
```
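A side note on that round call: float arithmetic is inexact, so 19.99 * 100 is actually 1998.999…, and rounding is what rescues the result. If you want exact money math, a Decimal-based variant is a reasonable sketch (the string-input function below is illustrative, not part of AutoGen):

```python
from decimal import Decimal

def dollars_to_cents_exact(amount: str) -> int:
    """Convert a dollar string like '12.34' to cents without float error."""
    return int(Decimal(amount) * 100)

print(dollars_to_cents_exact("19.99"))  # 1999
```

Accepting the amount as a string keeps the value exact end to end, since parsing it through float first would reintroduce the rounding problem.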
- Wrap that function as an AutoGen tool with the built-in FunctionTool wrapper. This gives the model a structured way to call your Python code instead of guessing at the answer.
```python
from autogen_core.tools import FunctionTool

def dollars_to_cents(amount: float) -> int:
    """Convert a dollar amount to cents."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return int(round(amount * 100))

tool = FunctionTool(
    dollars_to_cents,
    description="Convert a dollar amount into integer cents.",
)
```
- Create an assistant agent and give it access to the tool. The key point here is that the model can decide when to use the tool based on your prompt and its instructions.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[tool],  # the FunctionTool created in the previous step
        system_message="You are a helpful assistant. Use tools when useful.",
    )
    result = await agent.run(task="Convert $12.34 to cents.")
    print(result.messages[-1].content)
    await model_client.close()

if __name__ == "__main__":
    asyncio.run(main())
```
- Add another tool so you can see how multiple functions fit together. In real projects, this is where you start separating concerns: one tool for lookups, one for formatting, one for validation.
```python
from autogen_core.tools import FunctionTool

def is_valid_policy_number(policy_number: str) -> bool:
    """Return True if the policy number matches a simple format."""
    return policy_number.startswith("POL-") and len(policy_number) == 10

policy_tool = FunctionTool(
    is_valid_policy_number,
    description="Check whether a policy number is valid.",
)
```
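The startswith-plus-length check is easy to read, but a regular expression pins the format down more precisely. A sketch, assuming the intended format is "POL-" followed by exactly six digits (ten characters total, matching the length check above; the strict function name is ours):

```python
import re

POLICY_RE = re.compile(r"^POL-\d{6}$")

def is_valid_policy_number_strict(policy_number: str) -> bool:
    """Return True only for 'POL-' followed by exactly six digits."""
    return POLICY_RE.match(policy_number) is not None

print(is_valid_policy_number_strict("POL-123456"))  # True
print(is_valid_policy_number_strict("POL-12E456"))  # False
```

The simple version would accept "POL-12E456" because it only checks the prefix and length; the regex rejects it.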
- Run both tools in the same agent and ask questions that force tool usage. Keep prompts specific so beginners can clearly see when AutoGen calls Python versus when it answers directly.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[tool, policy_tool],  # the two FunctionTools created above
        system_message=(
            "You are a helpful assistant. "
            "Use tools for calculations and validation."
        ),
    )
    result = await agent.run(
        task=(
            "1) Convert $19.99 to cents.\n"
            "2) Check whether POL-12345 is valid."
        )
    )
    print(result.messages[-1].content)
    await model_client.close()

if __name__ == "__main__":
    asyncio.run(main())
```
Testing It
Run the script from your terminal and watch for two things: a successful model response and evidence that the tool output influenced the answer. If you want more visibility, log intermediate messages or inspect result.messages instead of printing only the last message.
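One way to get that visibility is a small helper that renders every message in the run. AutoGen chat messages expose source and content attributes; the transcript helper below is a hypothetical convenience of ours, not a library API, and it falls back gracefully for messages shaped differently:

```python
from types import SimpleNamespace

def transcript(messages) -> str:
    """Render a sequence of chat messages as 'source: content' lines."""
    lines = []
    for message in messages:
        source = getattr(message, "source", "unknown")
        content = getattr(message, "content", "")
        lines.append(f"{source}: {content}")
    return "\n".join(lines)

# Stand-in messages for demonstration; a real run would pass result.messages.
demo = [
    SimpleNamespace(source="user", content="Convert $12.34 to cents."),
    SimpleNamespace(source="assistant", content="1234"),
]
print(transcript(demo))
```

In your script, replace the final print with print(transcript(result.messages)) to see tool-call messages alongside the final answer.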
Test edge cases too. For example, pass -1 into dollars_to_cents or an invalid policy string like ABC123 so you can confirm your tool fails cleanly and predictably.
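Those edge cases can live in a tiny standalone script. The two functions are repeated below so the checks run without the rest of the tutorial:

```python
def dollars_to_cents(amount: float) -> int:
    """Convert a dollar amount to cents, rejecting negatives."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return int(round(amount * 100))

def is_valid_policy_number(policy_number: str) -> bool:
    """Return True if the policy number matches the simple format."""
    return policy_number.startswith("POL-") and len(policy_number) == 10

# Happy paths.
assert dollars_to_cents(19.99) == 1999
assert is_valid_policy_number("POL-123456")

# Edge cases: bad policy string, negative amount.
assert not is_valid_policy_number("ABC123")
try:
    dollars_to_cents(-1)
except ValueError as exc:
    print(f"rejected as expected: {exc}")
```

If any assertion fails or the negative amount slips through, fix the tool before wiring it into the agent; a tool that misbehaves quietly is much harder to debug once the model is in the loop.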
If the model does not call your tool, tighten the prompt and make the system message more explicit about using tools for calculations or validation. Beginners often assume the model will always choose the tool automatically; in practice, clear instructions matter.
Next Steps
- Learn how to add structured inputs with Pydantic models so your tools accept typed arguments.
- Add error handling around tools so exceptions become useful messages instead of broken runs.
- Move from single-tool examples to multi-agent workflows where one agent calls tools and another reviews results.
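As a starting point for the error-handling item above, one common pattern is a decorator that turns exceptions into readable strings the model can act on, instead of letting the tool call crash the run. A minimal sketch; the safe_tool name is ours, not an AutoGen API, and note that wrapping changes the tool's return type:

```python
import functools

def safe_tool(fn):
    """Wrap a tool function so exceptions become readable messages."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            return f"Tool error in {fn.__name__}: {exc}"
    return wrapper

@safe_tool
def dollars_to_cents(amount: float) -> int:
    """Convert a dollar amount to cents, rejecting negatives."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return int(round(amount * 100))

print(dollars_to_cents(-1))
```

Printed error text like "Tool error in dollars_to_cents: amount must be non-negative" flows back to the model as an ordinary tool result, so it can apologize, retry, or ask the user for a valid input.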
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.