zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. convex+C01[view] [source] 2023-11-18 01:11:18
>>davidb+(OP)
Kara Swisher: a “misalignment” of the profit versus nonprofit adherents at the company https://twitter.com/karaswisher/status/1725678074333635028

She also says that there will be many more top employees leaving.

2. convex+ch1[view] [source] 2023-11-18 03:08:44
>>convex+C01
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

3. jojoba+vD1[view] [source] 2023-11-18 05:50:45
>>convex+ch1
The moment they lobotomized their flagship AI chatbot into a particular set of political positions the "benefits of all humanity" were out the window.
4. lijok+1G1[view] [source] 2023-11-18 06:12:11
>>jojoba+vD1
If they hadn’t done that, would they have been able to get to where they are? Goal-oriented teams don’t tend to care about something as inconsequential as this.
5. Booris+AL1[view] [source] 2023-11-18 07:08:18
>>lijok+1G1
I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a stage when capabilities didn't yet make it unsafe.

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3-level model was sentient, and now we see OpenAI can't seem to escape that same poison.
