zlacker

[return to "Ilya Sutskever "at the center" of Altman firing?"]
1. rcpt+a1[view] [source] 2023-11-18 02:50:39
>>apsec1+(OP)
Wait, this is just a corporate turf war? That's boring. I already have those at work
◧◩
2. reduce+s1[view] [source] 2023-11-18 02:52:52
>>rcpt+a1
No, this move is so drastic because Ilya, the chief scientist behind OpenAI, thinks Sam and Greg are pushing so hard on AGI capabilities, ahead of alignment with humanity, that it threatens everyone. 2/3 of the other board members agreed.

Don’t shoot the messenger. No one else has given you a plausible reason why Sama was abruptly fired, and this is what a reporter said of Ilya:

‘He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay Area. They were leaving the room, saying, “Holy shit.”

The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.’

https://www.aipanic.news/p/what-ilya-sutskever-really-wants

◧◩◪
3. morale+22[view] [source] 2023-11-18 02:56:38
>>reduce+s1
Bull. Shit.

OpenAI and its people are there to maximize shareholder value.

This is the same company that went from "non-profit" to "jk, lol, we are actually for-profit now". I still think that move was not even legal, but rules for thee, not for me.

They ousted sama because it was bad for business. Why? We may never know, or we may know next week, who knows? Literally.

◧◩◪◨
4. sainez+a5[view] [source] 2023-11-18 03:18:58
>>morale+22
It seems you are conflating OpenAI the non-profit with OpenAI the LLC: https://openai.com/our-structure
◧◩◪◨⬒
5. morale+N6[view] [source] 2023-11-18 03:29:59
>>sainez+a5
No, that's the whole point: "AI for the benefit of humanity" and whatnot turned out to be a marketing strategy (if you could call it that).
◧◩◪◨⬒⬓
6. lucubr+Qb[view] [source] 2023-11-18 04:09:44
>>morale+N6
That is what Ilya Sutskever and the board of the non-profit have effectively accused Sam Altman of in firing him, yes.
◧◩◪◨⬒⬓⬔
7. morale+7d[view] [source] 2023-11-18 04:19:56
>>lucubr+Qb
???

Source?

◧◩◪◨⬒⬓⬔⧯
8. lucubr+Gf[view] [source] 2023-11-18 04:38:27
>>morale+7d
Kara's reporting on motive:

https://twitter.com/karaswisher/status/1725678074333635028?t...

Kara's reporting on who is involved: https://twitter.com/karaswisher/status/1725702501435941294?t...

Confirmation of a lot of Kara's reporting by Ilya himself: https://twitter.com/karaswisher/status/1725717129318560075?t...

Ilya felt that Sam was taking the company too far in the direction of profit seeking, further than was necessary just to get the resources to build AGI, and that every bit of selling out puts more pressure on OpenAI to produce revenue and work for profit later, and risks AGI being controlled by a small, powerful group instead of everyone. After OpenAI Dev Day, evidently the board agreed with him; I suspect Dev Day is the source of the board's accusation that Sam was not completely candid.

Ilya may also care more about AGI safety specifically than Sam does. That's currently unclear, but it would not surprise me at all based on how they have both spoken in interviews. What is completely clear is that Ilya felt Sam was straying so far from the non-profit's mission, safe AGI that benefits all of humanity, that the board was compelled to act to preserve it. Expelling him and re-affirming their commitment to the OpenAI charter is effectively an accusation that he sold out.

For context, you can read their charter here: https://openai.com/charter and mentally contrast it with the tone Sam Altman set on Dev Day. Note this part of the charter in particular: "Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

[go to top]