zlacker

[return to "Emmett Shear becomes interim OpenAI CEO as Altman talks break down"]
1. Techni+02[view] [source] 2023-11-20 05:31:05
>>andsoi+(OP)
I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.

If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?

Will be super interesting when all the details come out regarding the board’s decision making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.

Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with the reporting that he’s trying to create his own AI chips with Middle East funding, Altman clearly has ambitions to be fully self-reliant and own the stack completely.

No idea what the future holds for any of the players here. Reality truly is stranger than fiction.

◧◩
2. chubot+V3[view] [source] 2023-11-20 05:41:06
>>Techni+02
What I think is funny is how the whole "we're just doing this to make sure AI is safe" meme breaks down, if you have OpenAI, Anthropic, and Altman AI all competing, which seems likely now.

Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?

Since Sam left, now OpenAI is unsafe? But I thought they were the safe ones, and he was being reckless.

Or is Sam just going to abandon the pretense and compete Google- and Microsoft-style: doing placement deals, attracting eyeballs, and crushing the competition?

Surely that's what you need for safety?

◧◩◪
3. ryanSr+z6[view] [source] 2023-11-20 05:56:06
>>chubot+V3
Can someone explain to me what they mean by "safe" AGI? I've looked in many places and everyone is extremely vague. Certainly no one is suggesting these systems can become "alive", so what exactly are we trying to remain safe from? Job loss?
◧◩◪◨
4. dragon+Sd[view] [source] 2023-11-20 06:42:22
>>ryanSr+z6
> Certainly no one is suggesting these systems can become "alive",

Lots of people have been publicly suggesting that, and that, if not properly aligned, it poses an existential risk to human civilization; that group includes pretty much the entire founding team of OpenAI, including Altman.

The perception of that risk as the downside, as well as the perception that on the other side there is the promise of almost unlimited upside for humanity from properly aligned AI, is pretty much the entire motivation for the OpenAI nonprofit.

◧◩◪◨⬒
5. idontw+ff[view] [source] 2023-11-20 06:50:53
>>dragon+Sd
How does it actually kill a person? When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?
◧◩◪◨⬒⬓
6. dragon+1h[view] [source] 2023-11-20 07:03:34
>>idontw+ff
> When does it stop existing in boxes that require a continuous source of electricity and can’t survive water or fire?

When someone runs a model in a reasonably durable housing with a battery?

(I'm not big on the AI as destroyer or saviour cult myself, but that particular question doesn't seem like all that big of a refutation of it.)

◧◩◪◨⬒⬓⬔
7. idontw+ui[view] [source] 2023-11-20 07:13:45
>>dragon+1h
But my point is what is it actually doing to reach out and touch someone in the doomsday scenario?
◧◩◪◨⬒⬓⬔⧯
8. LordDr+wl[view] [source] 2023-11-20 07:32:39
>>idontw+ui
I mean, the cliched answer is "when it figures out how to override the nuclear launch process". And while that cliche might have a certain degree of unrealism, it would certainly be possible for a system with access to arbitrary compute power that's specifically trained to impersonate human personas to use social engineering to precipitate WW3.

And even that isn't the easiest scenario if an AI just wants us dead; a smart enough AI could just as easily send a request to any of the many labs that will synthesize/print genetic sequences for you, and create things that combine into a plague worse than covid. And if it's really smart, it can figure out how to use those same labs to begin producing self-replicating nanomachines (because that's what viruses are) that give it substrate to run on.

Oh, and good luck destroying it when it can copy and shard itself onto every unpatched smarthome device on Earth.

Now, granted, none of these individual scenarios have a high absolute likelihood. That said, even at a 10% (or 0.1%) chance of destroying all life, you should probably at least give it some thought.

◧◩◪◨⬒⬓⬔⧯▣
9. idontw+Vo[view] [source] 2023-11-20 07:55:13
>>LordDr+wl
How can it call one of those labs and place an order for the apocalypse and I can’t right now?

Also about the smart home devices: if a current iPhone can’t run Siri locally then how is a Roomba supposed to run an AGI?
