So my bet is that they either lied about how they were using customer data, covered up a massive data breach, or something similar. The only thing that's hard to figure is how specific this is to Altman. I would think a scandal that big would be leaking out, and more people would be getting fired.
If Sam was pursuing profits or growth (even doing a really good job of it) in a way that violated the objectives set by the non-profit board, that could set up this kind of situation.
Yes, it is arguable. OpenAI is nothing more than a really large amount of RAM and storage wrapped around a traditional model that was allowed to ingest the Internet and now barfs pieces back up in prose, making it sound like it came up with the content itself.
The bit about “ability to fulfill duties” sticks out, considering the responsibility and duties of the nonprofit board… not to shareholders, but, ostensibly, to “humanity.”
- New feature/product/etc. launch is planned.
- Murati warns Altman that it's not ready yet and there are still security and privacy issues that need to be worked out.
- Altman ignores her warnings, launches anyway.
- Murati blows the whistle on him to the board, tells them that he ordered the launch over her objections.
- Data breach happens. Altman attempts to cover it up. Murati blows the whistle again.
- Board fires Altman and gives Murati the job, since her whistleblowing makes clear she at least has the ethics for it.
Again, completely hypothetical scenario, but it's one possible explanation for how this could happen.
Honest question: do execs or companies in general ever suffer consequences for data breaches? Seems like basically no one cares about this stuff.
He says fuck them and their money, it's not ready yet; here's a bunch of other things that will make people go wooooow.
She's not happy he does that, because the company's future is at stake, and convinces the board with talk of money and investors.
The board shits on humanity and goes for the money and investors.
Microsoft had inside information about their security, which is why they restricted access. Meanwhile, every other enterprise and gov organisation using ChatGPT is exposed.
Isn't this already generally known to be true (and ironically involving Mechanical Turk-like services)?
Not sure if these are all the same sources I read a while ago, but e.g.:
https://www.theverge.com/features/23764584/ai-artificial-int...
https://www.marketplace.org/shows/marketplace-tech/human-lab...
https://www.technologyreview.com/2022/04/20/1050392/ai-indus...
https://time.com/6247678/openai-chatgpt-kenya-workers/
https://www.vice.com/en/article/wxnaqz/ai-isnt-artificial-or...
https://www.noemamag.com/the-exploited-labor-behind-artifici...
https://www.npr.org/2023/07/06/1186243643/the-human-labor-po...
Here's the top-most featured snippet when I google "if programming languages had honest slogans": https://medium.com/nerd-for-tech/if-your-favourite-programmi...
Half of the above post is plagiarised from my 2020 post: https://betterprogramming.pub/if-programming-languages-had-h...
I ain't no Captain Ahab, baby.