zlacker

Emmet Shear statement as Interim CEO of OpenAI
1. dgello+77 2023-11-20 09:49:34
>>jk_tec+(OP)
Reading between the lines, what I see is that even the interim CEO is communicating that the board created a massive mess for petty reasons. Hopefully the report will be public.
2. ah765+Na 2023-11-20 10:13:09
>>dgello+77
"that the process and communications around Sam’s removal has been handled very badly"

The communication was bad (the sudden Friday message about him not being candid), but he doesn't say that the reason itself was bad.

"Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."

He knows the reason, and it's not safety, but he's not allowed to say what it is.

Given that, I think the reason may not be petty, though it's still unclear what it is. It's interesting that he thinks it will take more than a month to figure things out, requiring an investigator and interviews with many people. It sounds like there may be a core dysfunction in the company that is part of the reason for the ouster.

3. PKop+lk 2023-11-20 11:22:46
>>ah765+Na
>it's not safety

Can you explain what is meant by the word safety?

Many people are mentioning this term, but it's not clear what it specifically means in this context. And what would someone get fired over in relation to it?

4. tsimio+Il 2023-11-20 11:30:22
>>PKop+lk
In this context, this is about the idea of AI safety. It can refer to short-term concerns, such as AI helping to spread misinformation (e.g. ChatGPT being used to churn out massive amounts of fake news) or implicit biases (e.g. "predictive policing" that analyzes crime data and ends up incarcerating minorities because of accidental biases in its training set). Or it can refer to longer-term fears about a superhuman intelligence that would end up acting against humanity for various reasons, and to efforts to create a superhuman AI that shares our moral goals (along with the fear that an unsafe AGI could be created accidentally).

In this specific conversation, one of the proposed scenarios is that Ilya Sutskever wanted to focus OpenAI more on AI safety, possibly at the expense of rapid advancement toward intelligence and of commercialization, while Sam Altman wanted to prioritize those two over excessive safety concerns. The new CEO is stating that this is not the core reason for the board's decision.
