zlacker

2 comments
1. mcv+(OP) 2023-11-20 08:22:26
Is there a good article, or does anyone have the slightest inkling, about what the real conflict here is? There's a lot of articles about the symptoms, but what's the core issue here?

The board claims Altman lied. Is that it? About what? Did he consistently misinform the board about a ton of different things? Or about one really important issue? Or is this just an excuse disguising the actual issues?

I notice a lot of people in the comments talking about Altman being more about profit than about OpenAI's original mission of developing safe, beneficial AGI. Is Altman threatening that mission or disagreeing with it? It would be really interesting if this were the real issue, but if it were, I can't believe it came out of nowhere like this, and I would expect the board to have had a new CEO lined up already, rather than fumbling for one and settling on someone with no particular AI or ethics background.

Sutskever gets mentioned as the primary force behind firing Altman. Is this a blatant power grab? Or is Sutskever known to have strong opinions about that mission of beneficial AGI?

I feel a bit like I'm expected to divine the nature of an elephant by only feeling a trunk and an ear.

replies(1): >>sainez+m9
2. sainez+m9 2023-11-20 09:08:14
>>mcv+(OP)
I'm not sure what more information people need. The original announcement was pretty clear: https://openai.com/blog/openai-announces-leadership-transiti....

Specifically:

> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.

> it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

So the board did not have confidence that Sam was acting in good faith. Watch any of Ilya's many interviews; he speaks openly and candidly about his position. It is clear to me that Ilya is completely committed to the principles of the charter and sees a very real risk of sufficiently advanced AI causing disproportionate harm.

People keep trying to understand OpenAI as a hypergrowth SV startup, which it is explicitly not.

replies(1): >>mcv+Vj
3. mcv+Vj 2023-11-20 10:12:53
>>sainez+m9
That original announcement doesn't make it nearly as explicit as you're making it. It doesn't say what he lied about, and it doesn't say he's not on board with the mission.

Sounds like the firing was done to better serve the original mission, and is therefore probably a good thing. Though the way it's happening comes across as sloppy and panicky to me, especially since they've already replaced their first replacement CEO.

Edit: turns out Wikipedia already has a pretty good write up about the situation:

> "Sutskever is one of the six board members of the non-profit entity which controls OpenAI.[7] According to Sam Altman and Greg Brockman, Sutskever was the primary driver behind the November 2023 board meeting that led to Altman's firing and Brockman's resignation from OpenAI.[30][31] The Information reported that the firing in part resulted from a conflict over the extent to which the company should commit to AI safety.[32] In a company all-hands shortly after the board meeting, Sutskever stated that firing Altman was "the board doing its duty."[33] The firing of Altman and resignation of Brockman led to resignation of 3 senior researchers from OpenAI."

(from https://en.wikipedia.org/wiki/Ilya_Sutskever)
