zlacker

[return to "Emmett Shear becomes interim OpenAI CEO as Altman talks break down"]
1. valine+v3[view] [source] 2023-11-20 05:39:23
>>andsoi+(OP)
Not a word from Ilya. I can't wrap my mind around his motivation. Did he really fire Sam over "AI safety" concerns? How is that remotely rational?
2. ah765+78[view] [source] 2023-11-20 06:06:07
>>valine+v3
It might be because of AI safety, but I think it's more likely because Sam was executing plans without informing the board, such as making deals with outside companies, allocating funds to profit-oriented products and making announcements about them, and so on. Perhaps he also wanted to reduce investment in the alignment research that Ilya considered important. Hopefully we'll learn the truth soon, though I suspect that it involves confidential deals with other companies and that's why we haven't heard anything.
3. ipaddr+7j[view] [source] 2023-11-20 07:16:51
>>ah765+78
It has to do with a tribe in OpenAI that believes AI will take over the world in the next 10 years, so much of the company's effort needs to go toward that goal. What that translates to is strong prompt censorship and automated tools to ban those who keep asking things we don't want you to ask.

Sam has been agreeing with this group and using it as the reason to go commercial: to provide funding for that goal. The problem is that the new products are coming too fast and pulling resources away from safety training.

This group never wanted to release ChatGPT but was forced to because a rival company made up of ex-OpenAI employees was going to release its own version. To the safety group, things have been getting worse since that release.

Sam is smart enough to use the safety group's fear against them. They finally clued in.

OpenAI never wanted to give us ChatGPT. Their hands were forced by a rival, and Sam and the board made a decision that brought in the next breakthrough. From that point things snowballed. Sam knew he needed to run before bigger players moved in. It became too obvious after DevDay that the safety team would never be able to catch up, and they pulled the brakes.

OpenAI's vision of a safe AI has turned into a vision of human censorship rather than protecting society from a rogue AI with the power to harm.

4. crooke+Jw2[view] [source] 2023-11-20 18:40:52
>>ipaddr+7j
> What that translates to is strong prompt censorship and automated tools to ban those who keep asking things we don't want you to ask.

...which, as several subreddits dedicated to LLM porn or trolling could tell you, is mostly pointless and also flags as "unsafe" a ton of stuff you could find on any high school nerd's bookshelf.
