zlacker

[return to "Emmett Shear becomes interim OpenAI CEO as Altman talks break down"]
1. Techni+02[view] [source] 2023-11-20 05:31:05
>>andsoi+(OP)
I still cannot process what’s happened to one of the most prominent and hyped companies of the past year, all in a single weekend.

If it’s true that Altman won’t return to OpenAI (or, alternatively, that the current board won’t step down), then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?

Will be super interesting when all the details come out regarding the board’s decision-making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.

Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combine that with the reporting that he’s trying to build his own AI chips with Middle East funding, and it’s clear Altman has ambitions to be fully self-reliant and own the entire stack.

No idea what the future holds for any of the players here. Reality truly is stranger than fiction.

◧◩
2. chubot+V3[view] [source] 2023-11-20 05:41:06
>>Techni+02
What I think is funny is how the whole "we're just doing this to make sure AI is safe" meme breaks down if you have OpenAI, Anthropic, and Altman AI all competing, which seems likely now.

Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?

So now that Sam has left, OpenAI is unsafe? But I thought they were the safe ones, and he was the reckless one.

Or is Sam just going to abandon the pretense and compete Google- and Microsoft-style, e.g. doing placement deals, attracting eyeballs, and crushing the competition?

Surely that's what you need for safety?

◧◩◪
3. ryanSr+z6[view] [source] 2023-11-20 05:56:06
>>chubot+V3
Can someone explain to me what they mean by "safe" AGI? I've looked in many places and everyone is extremely vague. Certainly no one is suggesting these systems can become "alive", so what exactly are we trying to remain safe from? Job loss?
◧◩◪◨
4. cornel+hh[view] [source] 2023-11-20 07:05:27
>>ryanSr+z6
Smart people like Ilya really are worried about extinction, not piddling near-term stuff like job loss or some chat app saying some stuff that will hurt someone's feelings.

The worry is not necessarily that the systems become "alive", though. We are already bad enough as a species in terms of motivation, so machines don't need to supply the murderous intent: at any given moment there are at least thousands, if not millions, of people on the planet who would love nothing more than to be able to push a button and murder millions of other people in some outgroup. That's very obvious if you pay even a little bit of attention to the Israel/Palestine hatred going back and forth lately. [There are probably at least hundreds to thousands who are insane enough to want to destroy all of humanity if they could, for that matter...]

If AI becomes powerful enough to make it easy for a small group to kill large numbers of people they hate, we are probably all going to end up dead, because almost all of us belong to a group that someone wants to exterminate.

Killing people isn't a super difficult problem, so I don't think you even need AGI to get to that sort of outcome, TBH, which is why I think a lot of the worry is misplaced. The sort of control systems we could pretty easily build with today's LLMs could very competently execute genocides if paired with suitably advanced robotics; it's the robotics that is lacking. But in any case, the concern is that even stronger AI, especially once it reliably surpasses us in every way, makes it even easier to imagine an extermination campaign that runs on its own and couldn't be stopped even by the people who started it.

I personally think stronger AI is also the solution, and that we're already too far down the cat-and-mouse rabbit hole to pause the game (which is the main reason some e/acc people want to push forward faster and make sure a good AI is the first to achieve full domination), but that's a different discussion.

[go to top]