Then things like this happen, and people think "ooh AI is bad, the bubble must burst" when this has nothing to do with that in the first place, and the real issue was that they sent a "fraud/phishing report" rather than a "trademark infringement" report.
I also wish that people who know better, who realize this really has nothing to do with AI (it's obviously not making autonomous decisions any more than a regular program is), would stop blindly parroting that framing and blaming AI as a way to get more clicks, support and rage.
(After the previous AI bubble, no-one mentioned the dread term for about 20 years, instead using the safely ultra-broad umbrella term.)
That haphazard branding and parroting is exactly why the bubble needs to burst. Bursting bubbles take out the grifters and rarely kill off all the innovation in the scene (they kill a lot of it, though; I'm not trying to dismiss that).
For example, in my country, we are still dealing with the fallout from a decision made over a decade ago by the Tax Department. They used a poorly designed ML algorithm to screen applicants claiming social benefits for fraudulent activity. This led to several public inquiries and even contributed to the collapse of a government coalition. Tens of thousands of people are still suffering from being wrongly labeled as fraudulent, facing hefty fines and being forced to repay so-called fraudulent benefits.
(Layman here, obviously.)
AI does need to die. Not so much because LLMs are bad, but rather because, like "big data" and "blockchain" and many other buzzwordy tools before it, it is a solution looking for a problem.
Read the Wikipedia article and you’ll probably feel outraged.
Also, here is a blog post[2] warning about the improper use of algorithmic enforcement tools like the one that was used in this scandal.
[1] https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scand...
The legal system already acts that way when the issue is in its own wheelhouse: https://www.reuters.com/legal/new-york-lawyers-sanctioned-us... The lawyers did not escape by just chuckling in amusement, throwing up their hands, and saying "AIs! Amirite?"
The system is slow and the legal tests haven't happened yet, but personally I see every reason to believe that the legal system will decide that "the AI" never does anything, and that "the AI did it!" will provide absolutely zero cover for any action or liability. If anything it will cut the other way: hooking an AI directly up to some action and then providing no human oversight will come to be seen as ipso facto negligence.
I actually consider this one of the more subtle reasons this AI bubble is substantially overblown. The premise of the bubble is that AI will just replace humans wholesale: huzzah, cost savings galore! But suppose companies staff something like customer support entirely with AIs, deploying their wildest fantasies with no humans in the loop to turn whistleblower: making it literally impossible to contact a human, literally impossible to get a real solution, and so forth. If customers then push those AIs into giving false or dangerous answers, or agreeing to certain bargains or whathaveyou, the end result is that you trade lots of expensive support calls for a company-ending class-action lawsuit, and the utility of buying AI services to replace your support staff drops sharply. Not necessarily to zero; it doesn't have to go to zero. It just makes "replace your support staff with a couple dozen graphics cards" an incremental advantage rather than a multiplicative one, and the bubble is priced like it's hugely multiplicative.
One exception: personal projects. "This is an NES emulator that is built in Rust, and it uses Rust because I wanted to learn Rust" is a perfectly good description of a project (but not a business).
Arguably, in this scenario, learning Rust is the "business need" and the NES emulator is the tool :)
But yeah, exactly. A blockchain is, technically, just a content-addressed linked list. A Merkle tree is the same idea applied to a tree. Git's core data structure is a DAG version of it. These things are useful. Yet nobody calls Git "blockchain technology", because what we all care about is Git's value as a version control tool.
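To make that concrete, here is a minimal Python sketch of a content-addressed linked list, the structure the comment is describing (the function names and dict layout are my own illustration, not from any particular system): each entry's address is the hash of its content plus its predecessor's address, so changing any payload invalidates every later link.

```python
import hashlib

GENESIS = "0" * 64  # placeholder address for "no predecessor"

def block_hash(payload: str, prev_hash: str) -> str:
    """The address of a block is the hash of its content and the previous address."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    """Build a hash-linked list: each block points to its predecessor by hash."""
    chain, prev = [], GENESIS
    for p in payloads:
        h = block_hash(p, prev)
        chain.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every address; any tampering breaks the downstream links."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["payload"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["commit 1", "commit 2", "commit 3"])
assert verify(chain)
chain[1]["payload"] = "tampered"
assert not verify(chain)  # every hash after the edit no longer matches
```

Git's object store works on the same principle (objects addressed by hash, commits pointing at parent hashes), just arranged as a DAG rather than a single chain; the "blockchain" branding adds consensus and incentives on top, not a new data structure.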