"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".
https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s
This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally and that he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.
Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast take-off, and it seems the governance structure of OpenAI certainly wouldn't be ready for it either.
He's also said very recently that to get to AGI "we need another breakthrough" (source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g... ).
To predicate a company as massive as OpenAI on a premise that you know not to be true seems like a big enough lie.
More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp
Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer; it's not his job to make the breakthrough, only to keep the lights on until they do, if you take their mission literally.
Unless you're arguing that Sam claimed to the board that they were closer to AGI than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?
As I said, I hope you're right, because the alternative is a lot scarier.
As I said, while I do have a mostly positive opinion of Sam Altman (I disagree with him on certain things, but I trust him a lot more than the vast majority of tech CEOs and politicians, and I'd rather he be in the room when true superhuman intelligence is created than them), I hope this has nothing to do with AGI and it's "just" a personal scandal.
https://www.independent.co.uk/tech/chatgpt-ai-agi-sam-altman...
I don't really get "meme" culture, but is that really how someone who believed their company was going to create AGI soon would behave? Turning the possibility of the success of their mission into a punchline?
The only way that OpenAI was able to sell MS and others on the 100x capped-profit structure and other BS was because of the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence are realities on OpenAI's path, which makes him the perfect salesman.
But then... maybe that AGI conviction was oversold? To a level some would have interpreted as "less than candid" is my claim.
Speaking as a technologist actually building AGI up from animal-level intelligence by following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered so far from reality that they bordered on lies.
This has something of the character of a doctrinal dispute among true believers in a millennial cult.
That goes against the popular sentiment about the upcoming "breakthrough", but it's also the most probable explanation given the characteristics of the approach they took.
They must be under so much crazy pressure at OpenAI that it indeed is like a cult. I'm glad to see the snake finally eat itself. Hopefully that'll return some sanity to our field.
It's honestly sad when people who have clearly not used GPT-4 call it a parroting machine. That is incredibly ignorant.