If it’s true that Altman won’t return to OpenAI (or, alternatively, that the current board won’t step down), then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as a mediator to bring him back. Does OpenAI survive this?
It will be super interesting when all the details come out regarding the board’s decision-making. I’m especially curious how the (former) CEO of Twitch got nominated as interim CEO.
Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combine that with the reporting that he’s trying to create his own AI chips with Middle East funding, and Altman clearly has big ambitions: full self-reliance, owning the stack completely.
No idea what the future holds for any of the players here. Reality truly is stranger than fiction.
Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?
So now that Sam has left, OpenAI is unsafe? But I thought they were the safe ones, and he was the one being reckless.
Or is Sam just going to abandon the pretense and compete Google- and Microsoft-style: doing placement deals, attracting eyeballs, and crushing the competition?
Surely that's what you need for safety?
The default consequence of AGI's arrival is doom. Aligning a superintelligence with our desires is a problem that no one has solved yet.
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
----
Listen to the Dwarkesh Podcast episodes with Eliezer Yudkowsky or Carl Shulman to learn more about this.
It's a problem we haven't yet seen exist. It's like saying no one has solved the problem of alien invasions.
So less like an alien invasion.
And more like a pandemic at the speed of light.
If many of the smartest human minds together make an AGI, and it only exceeds a mediocre human, why assume it can make itself more efficient or bigger? Indeed, even if it's smarter than the collective effort of the scientists who made it, there's no real guarantee that there's a lot of low-hanging fruit for self-improvement.
I think the near-term problem with AGI isn't a potential tech singularity, but simply its potential to be societally destabilizing.
I'm assuming you meant "aren't" here.
> That would imply there was some arbitrary physical limit to intelligence
All you need is some kind of sub-linear scaling law for peak possible "intelligence" vs. the amount of raw computation, and there's a lot of reason to think that this is true (see the sketch below).
Also, there's no guarantee the amount of raw computation is going to increase quickly.
In any case, the kind of exponential runaway you mention (years) isn't a "pandemic at the speed of light" as mentioned in the grandparent.
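To make that shape concrete, here's a minimal sketch using the Chinchilla-style power law from Hoffmann et al. (2022), L(N, D) = E + A/N^alpha + B/D^beta, with the paper's fitted constants. Treating "lower loss" as a rough proxy for "intelligence" is my assumption for illustration, not something the scaling-law work itself claims:

    # Chinchilla-style power law: loss falls toward an irreducible floor E,
    # and each 10x of scale closes a shrinking fraction of the remaining gap.
    E, A, B = 1.69, 406.4, 410.7   # fitted constants from Hoffmann et al. (2022)
    alpha, beta = 0.34, 0.28       # fitted exponents for params (N) and tokens (D)

    def loss(n_params: float, n_tokens: float) -> float:
        """Predicted training loss for N parameters and D training tokens."""
        return E + A / n_params**alpha + B / n_tokens**beta

    for n in [1e9, 1e10, 1e11, 1e12]:
        # Chinchilla-optimal training uses roughly 20 tokens per parameter.
        print(f"N={n:.0e}  loss={loss(n, 20 * n):.3f}")

Run it and the loss crawls from about 2.58 toward the floor at 1.69 while compute grows roughly a millionfold: returns that are sub-linear and bounded, exactly the way the runaway-intelligence argument needs them not to be.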
I'm more worried about scenarios where we end up with a 75-IQ savant (with access to encyclopedic training knowledge and a very quick interface for running native computer code for math and data-processing help) that can plug away 24/7 and fit on an A100. You'd have millions of new cheap "superhuman" workers per year, even if they're not very smart and not very fast. It would be economically destabilizing very quickly, and many of them would be employed in ways that just completely trash the signal-to-noise ratio of written text, etc.
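For a sense of scale, a back-of-envelope sketch; every number here is a round hypothetical I'm assuming for illustration, not a real production or throughput figure:

    # Hypothetical inputs only: none of these are real figures.
    gpus_per_year = 1_000_000      # assumed accelerator production per year
    workers_per_gpu = 4            # assumed concurrent model instances per card
    hours_per_year = 24 * 365      # each instance plugs away around the clock
    human_work_hours = 2_000       # rough full-time human work year

    instances = gpus_per_year * workers_per_gpu
    ftes = instances * hours_per_year / human_work_hours
    print(f"{instances:,} instances ~ {ftes:,.0f} full-time-equivalent workers/year")

Even with deliberately modest assumptions you land in the millions to tens of millions of full-time-equivalents added per year, which is the destabilizing part.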