OpenAI's board has fired Sam Altman
1. ademeu+wp[view] [source] 2023-11-17 22:13:10
>>davidb+(OP)
In a panel yesterday, Sam implied OpenAI had a major breakthrough a few weeks ago:

"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".

https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s

This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally, and that he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.

Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast take-off, and it seems the governance structure of OpenAI certainly wouldn't be ready for it either.

2. mi3law+9q[view] [source] 2023-11-17 22:16:38
>>ademeu+wp
On the contrary, the video you linked to is likely to be part of the lie that ousted Altman.

He's also said very recently that to get to AGI "we need another breakthrough" (source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...)

To predicate a company as massive as OpenAI on a premise you know to be false seems like a big enough lie.

3. ademeu+Bv[view] [source] 2023-11-17 22:40:55
>>mi3law+9q
Fair enough, but having worked for an extremely secretive FAANG myself, "we need XYZ" is the kind of thing I'd expect to hear if you have XYZ internally but don't want to reveal it yet. It could mean "we need XYZ relative to the previous product" or, more specifically, "we need a breakthrough beyond LLMs, and we recently made a major one unrelated to LLMs". I'm not saying that's the case, but I don't think the signal-to-noise ratio in his answer is very high.

More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else, including the for-profit part, is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp

Either the board doesn't actually share that goal, or what you're saying shouldn't matter to them. Sam isn't an engineer; it's not his job to make the breakthrough, only to keep the lights on until the researchers do, if you take their mission literally.

Unless you're arguing that Sam told the board they were closer to AGI than they really are (rather than hiding anything from them), in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?

As I said, I hope you're right, because the alternative is a lot scarier.

4. mi3law+DM1[view] [source] 2023-11-18 07:17:57
>>ademeu+Bv
I think my point is different from what you're breaking down here.

The only way OpenAI was able to sell MS and others on the 100x capped-profit structure and other BS was the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence are on OpenAI's path, which made him the perfect salesman.

But then... maybe that AGI conviction was oversold? Oversold to a level some would have interpreted as "less than candid": that's my claim.

Speaking as a technologist actually building AGI up from animal-level intelligence by following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered to the edge of reality, close to outright lies.

5. dragon+dP1[view] [source] 2023-11-18 07:44:38
>>mi3law+DM1
Both factions appear publicly to see AGI as imminent, and to see mishandling that imminence as an existential threat; the dispute appears to be about what to do about it. If they didn't both see AGI as imminent, the dispute would probably be less intense.

This has something of the character of a doctrinal dispute among true believers in a millennial cult.
