Is this serious?
There are two kinds of bots.
There's the legit ones that site owners will generally find provide a positive tradeoff. These bots identify themselves in the user-agent, send requests from a predictable set of IPs, and obey robots.txt. Think most search engine crawlers (though not Brave's), bots that handle link previews for apps like WhatsApp, even RSS readers!
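To make the contrast concrete, here's a minimal sketch of what "well-behaved" looks like in practice, using Python's standard urllib.robotparser. The crawler name "ExampleBot" and its info URL are placeholders, not anything from the post:

```python
import urllib.request
import urllib.robotparser
from urllib.parse import urlsplit

# Placeholder identity for an imaginary well-behaved crawler.
USER_AGENT = "ExampleBot/1.0 (+https://example.com/bot-info)"

def polite_fetch(url: str) -> bytes | None:
    # Check the site's robots.txt before fetching anything.
    parts = urlsplit(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    if not rp.can_fetch(USER_AGENT, url):
        return None  # a legit bot stops here

    # Announce who we are in the user-agent on every request.
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```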
Then there's the abusive ones. These are usually hitting resources that are expensive to serve and contain valuable information. They will not obey robots.txt. They'll run residential IP botnets to get around IP blocks. They'll make their bot look as similar to legit traffic as possible; the user-agent is literally the first thing they'd change. They'll hire mechanical turks to create fake accounts so the traffic looks like signed-in users.
Now, it's pretty obvious why the author's methodology for supporting that statement is so silly. First, it was circular: they identified bots by user-agent, and then declared that since there were bots with a distinguishing user-agent, the rest of the traffic can't have been bots. Second, they looked at the logs of a server that doesn't contain any data anybody would scrape maliciously. Ocean's 11 heists a casino, not a corner store. Likewise, professional bot operations are scraping valuable information from sites actively defending against it, not your blog.
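To spell out the circularity, this is roughly what classifying bots by user-agent amounts to; the token list and sample log line are made up for illustration, not taken from the author's setup:

```python
# Flawed methodology in miniature: a request is a "bot" only if its
# user-agent contains a known bot token. A scraper that sends a browser
# user-agent is silently counted as human, which is the circularity above.
BOT_TOKENS = ("googlebot", "bingbot", "whatsapp", "feedfetcher")

def classify(user_agent: str) -> str:
    ua = user_agent.lower()
    return "bot" if any(token in ua for token in BOT_TOKENS) else "human"

# A scraper spoofing Chrome sails straight through:
print(classify("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
               "AppleWebKit/537.36 Chrome/120.0 Safari/537.36"))
# -> "human"
```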