zlacker

[return to "OpenAI board in discussions with Sam Altman to return as CEO"]
1. skygaz+R1[view] [source] 2023-11-18 23:01:16
>>medler+(OP)
Man, the board already looked reckless and incompetent, but this solidifies the appearance. You can do crazy ill-advised things, but if you unwaveringly commit, we’ll always wonder if you’re secretly a genius. But when you immediately backtrack, we’ll know you were a fool all along.
2. hn_thr+17[view] [source] 2023-11-18 23:24:32
>>skygaz+R1
Dude, everyone already thinks the board did a crazy ill-advised thing. They're about to be the board of, like, a 5-person company if they double down and commit.

To be honest I hate takes like yours, where acknowledging a mistake (even a giant mistake) is treated as a sign of weakness. A bigger sign of weakness, in my opinion, is committing to a shitty idea just because you said it first, despite all evidence to the contrary.

3. 015a+xg[view] [source] 2023-11-19 00:11:46
>>hn_thr+17
Bad take. Not "everyone" feels that what they did was wrong. We don't have insight into what's going on internally. Optics matter; the division over their decision means it's definitionally non-obvious what the correct path forward is, or that there isn't one correct path but multiple reasonable ones. To admit a mistake of this magnitude is to admit either that you're so unprincipled your mind can be changed on a whim, or that you didn't think the decision through beforehand. These are absolutely signs of weakness in leadership.
4. hn_thr+VM[view] [source] 2023-11-19 03:44:25
>>015a+xg
> These are absolutely signs of weakness in leadership.

The signs of "weakness in leadership" by the board have already happened. There is no turning back from that. The only decision left is how much more fuck-uppery they want to continue with.

Like others have said, regardless of what the "right" direction for OpenAI is, the board executed this so spectacularly poorly that even if you believe everything that has been reported about their intentions (i.e. that Altman was more focused on commercializing and productizing AI, while Sutskever was worried about developing AI responsibly with more safeguards), all they've done is fucked over OpenAI.

I mean, given the reports about who has already resigned (not just Altman and Brockman but also many other folks in top engineering leadership), it's pretty clear that plenty of other people would follow Altman to whatever AI venture he wants to build. If another competitor leapfrogs OpenAI, their concerns about "moving too fast" will be irrelevant.
