If it’s true that Altman won’t return to OpenAI (or, alternatively, that the current board won’t step down), then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?
Will be super interesting when all the details come out regarding the board’s decision-making. I’m especially curious how the (former) CEO of Twitch got nominated as interim CEO.
Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with the reporting that he’s trying to create his own AI chips with Middle East funding, that suggests Altman has big ambitions to be fully self-reliant and own the stack completely.
No idea what the future holds for any of the players here. Reality truly is stranger than fiction.
Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?
So now that Sam has left, OpenAI is unsafe? But I thought they were the safe ones and he was the reckless one.
Or is Sam just going to abandon the pretense and compete Google- and Microsoft-style: doing placement deals, attracting eyeballs, and crushing the competition?
Surely that's what you need for safety?
No, that very much is the fear. They believe that by training an AI on everything it takes to make AI, at a certain level of sophistication the AI can rapidly and continually improve itself until it becomes a superintelligence.
When I say alive, I mean there is something it is like to be that thing. The lights are on. It has subjective experience.
It seems many are defining ASI as just a really fast self-learning computer. And sure, given the wrong kind of access and motive, that could be dangerous. But it isn't any more dangerous than any other faulty software that has access to sensitive systems.
> But it isn't any more dangerous than any other faulty software that has access to sensitive systems.
Seems to me that can be unboundedly dangerous? Like, I don't see you making an argument here that there's a limit to how dangerous that class of software can be.