It's baffling how many people in previous threads thought a company that gets most of its money from enterprise/business clients would burn its reputation by surreptitiously using client data to train its AI.
> Zoom has agreed to pay $85 million to settle claims that it lied about offering end-to-end encryption and gave user data to Facebook and Google without the consent of users. The settlement between Zoom and the filers of a class-action lawsuit also covers security problems [0]
> Mac update nukes dangerous webserver installed by Zoom [1]
> The 'S' in Zoom, Stands for Security - uncovering (local) security flaws in Zoom's macOS client [2]
[0] https://arstechnica.com/tech-policy/2021/08/zoom-to-pay-85m-...
[1] https://arstechnica.com/information-technology/2019/07/silen...
(Even if revenue were much higher: revenue doesn't tell you anything about how well a company can absorb a financial hit.)
If the specific misconduct they got caught for netted them $x, and they got fined $5x, who cares what percentage of their global revenue that is? That specific crime was still a net negative for them. I'm not sure why conglomerates should be punished more harshly just because they have more revenue overall.
As for "who cares about %": every one who understands that fines that cost a company nothing, do nothing, all they say is "it'll cost you a trivial amount more to do this", turning what should be an instrument to rein in companies into simple monetary transaction that just goes on the books as an entirely expected and affordable expense.
It should be a crime, they should have been found guilty in court over it, and the fine should be calibrated so that no matter your company's size, you can't afford to run afoul of the law repeatedly. But it absolutely isn't.