CrewAI Tutorial (Python): adding tool use for advanced developers

By Cyprian Aarons | Updated 2026-04-21

This tutorial shows how to add real tool use to a CrewAI project in Python so agents can query external systems instead of guessing. You need this when your crew must read files, hit APIs, inspect databases, or call internal services with deterministic outputs.

What You'll Need

  • Python 3.10+
  • crewai
  • crewai-tools
  • An OpenAI API key set as OPENAI_API_KEY
  • Basic CrewAI knowledge: agents, tasks, and crews
  • A terminal with pip and a working virtual environment

Step-by-Step

  1. Start with a clean install and verify the core packages are available. I’m using the built-in CrewAI tool pattern so you can attach tools directly to an agent without extra glue code.
pip install crewai crewai-tools
export OPENAI_API_KEY="your-key-here"
  2. Create a small custom tool for fetching local data. In production, this is where you wrap internal APIs, database clients, or file readers behind a stable interface.
from crewai.tools import BaseTool
from pydantic import BaseModel, Field


class LookupInput(BaseModel):
    query: str = Field(..., description="Search term for the knowledge base")


class KnowledgeBaseTool(BaseTool):
    name: str = "knowledge_base_lookup"
    description: str = "Look up policy information from a local knowledge base"
    args_schema: type[BaseModel] = LookupInput

    def _run(self, query: str) -> str:
        data = {
            "claims": "Claims must be filed within 30 days.",
            "billing": "Billing disputes require a ticket and invoice reference.",
            "fraud": "Fraud cases must be escalated immediately to compliance.",
        }
        return data.get(query.lower(), f"No record found for: {query}")
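In a real deployment the static dict above would be swapped for an actual data source. A dependency-free sketch of a file-backed variant of the same lookup, useful for prototyping before you wire in a real client (the function name kb_lookup_from_file and the JSON layout are illustrative assumptions, not a CrewAI API):

```python
import json
import tempfile
from pathlib import Path


def kb_lookup_from_file(path: str, query: str) -> str:
    # Load the knowledge base from disk instead of hardcoding it;
    # same case-insensitive lookup and fallback message as the tool above.
    data = json.loads(Path(path).read_text())
    return data.get(query.lower(), f"No record found for: {query}")


# Demo with a temporary file standing in for the real knowledge base.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"claims": "Claims must be filed within 30 days."}, f)

print(kb_lookup_from_file(f.name, "Claims"))  # exact-match path
print(kb_lookup_from_file(f.name, "refunds"))  # fallback path
```

Keeping the lookup logic in a plain function like this also makes the tool's _run method a thin wrapper, which simplifies unit testing later.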
  3. Define an agent that can use the tool and a task that forces tool-backed reasoning. The important part is attaching the tool list to the agent; CrewAI will decide when to call it based on the task context.
from crewai import Agent, Task, Crew, Process

kb_tool = KnowledgeBaseTool()

support_agent = Agent(
    role="Policy Support Analyst",
    goal="Answer policy questions using approved internal sources",
    backstory="You handle operational policy questions for support teams.",
    tools=[kb_tool],
    verbose=True,
)

task = Task(
    description=(
        "Answer this question using the knowledge base tool only: "
        "'What is the claims filing deadline?'"
    ),
    expected_output="A concise answer with the exact policy wording.",
    agent=support_agent,
)
  4. Build and run the crew. For advanced setups, keep execution single-agent first so you can confirm tool invocation before adding delegation or multiple specialists.
crew = Crew(
    agents=[support_agent],
    tasks=[task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)
  5. Add a second tool once the first one works. This pattern is what you want in production: one tool for lookup, another for side-effect-free transformation or validation.
class NormalizeInput(BaseModel):
    text: str = Field(..., description="Text to normalize")


class NormalizeTool(BaseTool):
    name: str = "normalize_text"
    description: str = "Normalize text for downstream processing"
    args_schema: type[BaseModel] = NormalizeInput

    def _run(self, text: str) -> str:
        return " ".join(text.strip().lower().split())


normalize_tool = NormalizeTool()

ops_agent = Agent(
    role="Operations Analyst",
    goal="Normalize and validate operational input",
    backstory="You prepare clean inputs for downstream workflows.",
    tools=[kb_tool, normalize_tool],
    verbose=True,
)

Testing It

Run the script and watch the verbose output. You should see the agent reasoning about whether it needs a tool call, then invoking knowledge_base_lookup before answering.

If it skips the tool and hallucinates an answer, tighten the task wording so it explicitly requires source-backed output. In practice, I also test with one question that has an exact match and one that returns the "No record found" fallback, because both paths matter.

For production validation, add unit tests around each custom tool’s _run() method first. Then run an integration test that executes a full crew kickoff against mocked dependencies or sandbox APIs.
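A minimal unit-test sketch for the lookup behavior using the standard library's unittest. To keep this example self-contained it inlines a stand-in for the tool logic; in a real suite you would import KnowledgeBaseTool and call tool._run(query) instead:

```python
import unittest


# Stand-in for KnowledgeBaseTool._run so this sketch has no CrewAI
# dependency; swap in the real tool in your own test module.
def kb_lookup(query: str) -> str:
    data = {"claims": "Claims must be filed within 30 days."}
    return data.get(query.lower(), f"No record found for: {query}")


class KnowledgeBaseToolTest(unittest.TestCase):
    def test_exact_match_is_case_insensitive(self):
        self.assertEqual(kb_lookup("Claims"), "Claims must be filed within 30 days.")

    def test_unknown_query_returns_fallback(self):
        self.assertTrue(kb_lookup("refunds").startswith("No record found"))


suite = unittest.defaultTestLoader.loadTestsFromTestCase(KnowledgeBaseToolTest)
result = unittest.TextTestRunner().run(suite)
```

Both the exact-match and fallback paths are covered here, matching the two manual probes described above.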

Next Steps

  • Wrap real internal services in BaseTool classes with strict input schemas
  • Add retry logic and timeouts inside tools before exposing them to agents
  • Learn CrewAI memory and delegation patterns after your tool layer is stable
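To make the retry bullet above concrete, here is a minimal sketch of a backoff wrapper you could call from inside a tool's _run before exposing it to agents. The helper name with_retries and the flaky stub are illustrative assumptions, not CrewAI APIs:

```python
import time


def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))


calls = {"n": 0}


def flaky_fetch():
    # Simulates a transient API failure on the first call only.
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient network error")
    return "policy data"


print(with_retries(flaky_fetch))  # succeeds on the second attempt
```

Keeping retries inside the tool means the agent sees either a clean result or a single final error, rather than reasoning about transient failures itself.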


By Cyprian Aarons, AI Consultant at Topiax.
