zlacker

[return to "Emmett Shear becomes interim OpenAI CEO as Altman talks break down"]
1. Techni+02[view] [source] 2023-11-20 05:31:05
>>andsoi+(OP)
I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.

If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?

Will be super interesting when all the details come out regarding the board’s decision making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.

Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with reporting that he’s trying to create his own AI chips with Middle East funding, that points to big ambitions: becoming fully self-reliant and owning the stack completely.

No idea what the future holds for any of the players here. Reality truly is stranger than fiction.

◧◩
2. chubot+V3[view] [source] 2023-11-20 05:41:06
>>Techni+02
What I think is funny is how the whole "we're just doing this to make sure AI is safe" meme breaks down if you have OpenAI, Anthropic, and Altman AI all competing, which seems likely now.

Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?

Now that Sam has left, OpenAI is suddenly unsafe? But I thought they were the safe ones, and he was being reckless.

Or is Sam just going to abandon the pretense, competing Google- and Microsoft-style? e.g. doing placement deals, attracting eyeballs, and crushing the competition.

Surely that's what you need for safety?

◧◩◪
3. ryanSr+z6[view] [source] 2023-11-20 05:56:06
>>chubot+V3
Can someone explain to me what they mean by "safe" AGI? I've looked in many places and everyone is extremely vague. Certainly no one is suggesting these systems can become "alive", so what exactly are we trying to remain safe from? Job loss?
◧◩◪◨
4. dragon+Sd[view] [source] 2023-11-20 06:42:22
>>ryanSr+z6
> Certainly no one is suggesting these systems can become "alive",

Lots of people have been publicly suggesting exactly that, and also that, if not properly aligned, such systems pose an existential risk to human civilization; that group includes pretty much the entire founding team of OpenAI, Altman included.

The perception of that risk as the downside, as well as the perception that on the other side there is the promise of almost unlimited upside for humanity from properly aligned AI, is pretty much the entire motivation for the OpenAI nonprofit.

◧◩◪◨⬒
5. idontw+ff[view] [source] 2023-11-20 06:50:53
>>dragon+Sd
How does it actually kill a person? When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?
◧◩◪◨⬒⬓
6. upward+6j[view] [source] 2023-11-20 07:16:46
>>idontw+ff
One route is if AI (not through malice but simply through incompetence) plays a part in a terrorist plan to trick the US and China or US and Russia into fighting an unwanted nuclear war. A working group I’m a part of, DISARM:SIMC4, has a lot of papers about this here: https://simc4.org
◧◩◪◨⬒⬓⬔
7. hurrye+ep[view] [source] 2023-11-20 07:57:21
>>upward+6j
Since you work on this, do you think leaders will wait until confirmation of actual nuclear detonations, maybe on TV, before believing that a massive attack was launched?