Is this serious?
Speaking as someone who works at Google, but not on anything at all related to ads, browsers, or ad spam detection, I only wish that the attackers who (try to) make a living out of scamming Google and its advertisers out of money were as incompetent as the author of this article appears to be.
There are two kinds of bots.
There's the legit ones, which site owners will generally find provide a positive tradeoff. These bots identify themselves in the user-agent, the requests come from a predictable set of IPs, and they obey robots.txt (a quick sketch of verifying one follows below). Think most crawlers for search engines (though not Brave's), bots that handle link previews for apps like WhatsApp, even RSS readers!
Then there's the abusive ones. These usually hit resources that are expensive to serve and contain valuable information. They will not obey robots.txt. They'll run residential IP botnets to avoid IP blocks. They'll make their bots look as similar to legit traffic as possible; the user-agent is literally the first thing they'd change. They'll hire mechanical turks to create fake accounts so the traffic looks like signed-in users.
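On the first kind: here's a rough sketch of how a site operator could verify that a request claiming to be a well-behaved crawler really is one, using the reverse-then-forward DNS check Google documents for Googlebot. The IP in the usage line is only illustrative, and nothing here is specific to any particular site's setup.

```python
# Minimal sketch: verify a self-identified Googlebot by reverse DNS, then
# confirm the hostname forward-resolves back to the same IP.
import socket

def is_verified_googlebot(ip: str) -> bool:
    """True if the IP reverse-resolves to a Google crawler hostname that
    forward-resolves back to the same IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # e.g. "crawl-66-249-66-1.googlebot.com"
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return ip in socket.gethostbyname_ex(host)[2]  # forward lookup must match
    except OSError:  # covers failed reverse or forward lookups
        return False

# A request claiming "Googlebot" in its user-agent but failing this check is
# exactly the spoofing the second kind of bot relies on.
print(is_verified_googlebot("66.249.66.1"))  # illustrative IP; result depends on live DNS
```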
Now, it's pretty obvious why the author's methodology for supporting the claim is so silly. First, it's circular: they identified bots by user-agent, and then declared that since some bots had a distinguishing user-agent, the remaining traffic can't have been bots. Second, they looked at the logs of a server that doesn't contain any data anyone would scrape maliciously. Ocean's 11 heists a casino, not a corner store. Likewise, professional bot operations scrape valuable information from sites actively defending against them, not from your blog.
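To make the circularity concrete, here's a hypothetical reconstruction of that kind of analysis; the log entries are made up.

```python
# Hypothetical reconstruction of the circular approach: label traffic "bot"
# only when the user-agent admits it, then treat everything else as human.
from dataclasses import dataclass

@dataclass
class Request:
    ip: str
    user_agent: str

log = [
    Request("66.249.66.1", "Mozilla/5.0 (compatible; Googlebot/2.1)"),
    Request("203.0.113.7", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0"),  # a headless browser on a residential proxy looks identical
]

declared_bots  = [r for r in log if "bot" in r.user_agent.lower()]
assumed_humans = [r for r in log if "bot" not in r.user_agent.lower()]

# Concluding "assumed_humans contains no bots" is the circular step: the
# classifier is blind, by construction, to any bot that doesn't announce itself.
print(len(declared_bots), len(assumed_humans))
```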
i also found this odd. the target matters.
when i did more sophisticated dual tracking (js interaction plus http logs), the js-based detection caught 2x-3x as many bots as classifying purely from logs by UA strings and ip ranges.
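a rough sketch of what that kind of comparison could look like (the field names, beacon path, sample entries, and known-bot list here are all made up; the idea is joining a js beacon hit back to the page view that served it):

```python
# rough sketch of dual tracking: compare UA-only classification of access logs
# against which page views actually fired an in-page JS beacon.
KNOWN_BOT_UAS = ("googlebot", "bingbot", "spider", "crawler")

access_log = [  # one entry per page view served (invented data)
    {"req_id": "a1", "ua": "Mozilla/5.0 ... Chrome/120.0", "ip": "203.0.113.9"},
    {"req_id": "a2", "ua": "Googlebot/2.1 (+http://www.google.com/bot.html)", "ip": "66.249.66.1"},
    {"req_id": "a3", "ua": "Mozilla/5.0 ... Chrome/120.0", "ip": "198.51.100.4"},
]
beacon_log = [  # one entry per beacon request sent by the in-page JS
    {"req_id": "a1"},
]

flagged_by_ua = {e["req_id"] for e in access_log
                 if any(b in e["ua"].lower() for b in KNOWN_BOT_UAS)}
fired_js = {b["req_id"] for b in beacon_log}
never_fired_js = {e["req_id"] for e in access_log} - fired_js

print(f"bots by UA alone: {len(flagged_by_ua)}")                  # 1
print(f"page views with no JS execution: {len(never_fired_js)}")  # 2, incl. the spoofed-UA hit
# caveat: noscript users and aggressive blockers also land in never_fired_js,
# so this is an upper bound rather than a clean bot count.
```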