zlacker

[return to "Greg Brockman quits OpenAI"]
1. johnwh+c5[view] [source] 2023-11-18 00:31:48
>>nickru+(OP)
Edit: I called it

https://twitter.com/karaswisher/status/1725682088639119857

nothing to do with dishonesty. That’s just the official reason.

———-

I haven’t heard anyone commenting on this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.

It’s also interesting that Sutskever tweeted this a month and a half ago:

https://twitter.com/ilyasut/status/1707752576077176907

The press release about candid talk with the board… It’s probably just a cover-up for some deep-seated philosophical disagreement. They found a reason to fire him that doesn’t necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye, and it reached a fever pitch with GPT-4 Turbo.

Ultimately, it’s been surmised that Sutskever had all the leverage because of his technical ability. With Sam being the consummate businessperson, they probably got into some final disagreement, and Sutskever reached his tipping point and decided to use that leverage.

I’ve been in tech too long and have seen this play out: don’t piss off an irreplaceable engineer, or they’ll fire you. Not taking any sides here.

PS: most engineers, myself included, are replaceable. Ilya is probably not.

◧◩
2. foobie+8n[view] [source] 2023-11-18 02:16:32
>>johnwh+c5
I think you have it completely backward. A board doesn't do something like this unless it absolutely has to.

Think back through history. Consider, for example, the absolutely massive issues at Uber that had to go public before the board did anything. There is no way this is over some mere disagreement; there has to be serious financial, ethical, or social wrongdoing for the board to rush a decision like this and put a company worth tens of billions of dollars at risk.

◧◩◪
3. apstls+pG[view] [source] 2023-11-18 04:44:36
>>foobie+8n
The board, like any, is a small group of people, in this case one divided into two sides defined by conflicting ideological perspectives. I imagine these board members have much broader and longer-term perspectives and considerations factoring into their decision-making than those at the vast majority of other companies and boards. Generalizing doesn’t seem particularly helpful here.
◧◩◪◨
4. foobie+4V[view] [source] 2023-11-18 06:42:44
>>apstls+pG
Generalizing is how we reason, and having served on boards and worked with them closely, I can tell you straight up that's not how it works.

In general, everyone is professional unless there's something really bad. This was quite unprofessionally handled, and so we draw the obvious conclusion.

◧◩◪◨⬒
5. apstls+WM2[view] [source] 2023-11-18 19:30:38
>>foobie+4V
I am also not a stranger to board positions. However, I have never been on the board of a non-profit that is developing technology with genuinely deep, and as-of-now unknown, implications for the status quo of the global economy and - at least as the OpenAI board clearly believes - the literal future and safety of humanity. I haven’t been on a board where a semi-idealist engineer board member has played a (if not _the_) pivotal role in arguably the most significant technical development in recent decades, and who maintains ideals and opinions completely orthogonal to the CEO’s.

Yes, generalizing is how we reason, because it lets us strip away information that is irrelevant in most scenarios, reducing complexity and depth without losing much. My point is that this is not a scenario that fits in the set of “most cases.” This is probably one of the most unique, corner-casey examples of board dynamics in tech. Adhering to generalizations without considering their applicability and corner cases doesn’t make sense.

[go to top]