For example, in my country we are still dealing with the fallout from a decision made over a decade ago by the tax authority. It used a poorly designed ML algorithm to screen social-benefit applicants for fraud. This led to several public inquiries and even contributed to the collapse of a government coalition. Tens of thousands of people are still suffering from being wrongly labeled as fraudsters, facing hefty fines and being forced to repay benefits deemed fraudulent.
(Layman here, obviously.)
Read the Wikipedia article[1] and you'll probably feel outraged.
Also, here is a blog post[2] warning about the improper use of algorithmic enforcement tools like the one that was used in this scandal.
[1] https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scand...
The legal system already acts that way when the issue is in its own wheelhouse: https://www.reuters.com/legal/new-york-lawyers-sanctioned-us... The lawyers did not escape by just chuckling in amusement, throwing up their hands, and saying "AIs, amirite?"
The system is slow and the legal tests haven't happened yet, but personally I see no reason to believe the legal system won't decide that "the AI" never does anything on its own, and that "the AI did it!" will provide absolutely zero cover for any action or liability. If anything, the effect will be negative: hooking an AI directly up to some action and then providing no human oversight will come to be seen as ipso facto negligence.
I actually consider this one of the more subtle reasons this AI bubble is substantially overblown. The premise of the bubble is that AI will just replace humans wholesale: huzzah, cost savings galore! But suppose companies staff something like customer support entirely with AIs, deploying their wildest fantasies with no humans in the loop who might turn whistleblower: making it literally impossible to contact a human, literally impossible to get a real solution, and so forth. If customers then push these AIs into giving false or dangerous answers, or agreeing to certain bargains or whathaveyou, the end result is that you trade lots of expensive support calls for a company-ending class-action lawsuit, and the utility of buying AI services to replace your support staff drops sharply. Not necessarily to zero; it doesn't have to go to zero. But it makes replacing your support staff with a couple dozen graphics cards a much more incremental advantage rather than a multiplicative one, while the bubble is priced as if it's hugely multiplicative.