zlacker

[return to "Emmett Shear becomes interim OpenAI CEO as Altman talks break down"]
1. Techni+02[view] [source] 2023-11-20 05:31:05
>>andsoi+(OP)
I still cannot process what’s happened to one of the most prominent and hyped companies of the past year in just one weekend.

If it’s true that Altman won’t return to OpenAI (or alternatively: that the current board won’t step down) then where does that leave OpenAI? Microsoft can’t be happy, as evidenced by reporting that Nadella was acting as mediator to bring him back. Does OpenAI survive this?

Will be super interesting when all the details come out regarding the board’s decision making. I’m especially curious how the (former) CEO of Twitch gets nominated as interim CEO.

Finally, if Altman goes his own way, it’s clear the fervent support he’s getting will lead to massive funding. Combined with the reporting that he’s trying to create his own AI chips with Middle East funding, Altman clearly has ambitions of owning the full stack and becoming completely self-reliant.

No idea what the future holds for any of the players here. Reality truly is stranger than fiction.

◧◩
2. chubot+V3[view] [source] 2023-11-20 05:41:06
>>Techni+02
What I think is funny is how the whole "we're just doing this to make sure AI is safe" meme breaks down, if you have OpenAI, Anthropic, and Altman AI all competing, which seems likely now.

Do you really need all 3? Is each one going to claim that they're the only ones who can develop AGI safely?

Since Sam left, now OpenAI is unsafe? But I thought they were the safe ones, and he was being reckless.

Or is Sam just going to abandon the pretense, competing Google- and Microsoft-style? e.g. doing placement deals, attracting eyeballs, and crushing the competition.

Surely that's what you need for safety?

◧◩◪
3. ryanSr+z6[view] [source] 2023-11-20 05:56:06
>>chubot+V3
Can someone explain to me what they mean by "safe" AGI? I've looked in many places and everyone is extremely vague. Certainly no one is suggesting these systems can become "alive", so what exactly are we trying to remain safe from? Job loss?
◧◩◪◨
4. dragon+Sd[view] [source] 2023-11-20 06:42:22
>>ryanSr+z6
> Certainly no one is suggesting these systems can become "alive",

Lots of people have been publicly suggesting that, and that, if not properly aligned, it poses an existential risk to human civilization; that group includes pretty much the entire founding team of OpenAI, including Altman.

The perception of that risk as the downside, as well as the perception that on the other side there is the promise of almost unlimited upside for humanity from properly aligned AI, is pretty much the entire motivation for the OpenAI nonprofit.

◧◩◪◨⬒
5. mlindn+ak[view] [source] 2023-11-20 07:24:04
>>dragon+Sd
What does "properly aligned" even mean? Democracies even within individual countries don't have alignment, let alone democracies across the world. They're a complete mess of many conflicting and contradictory stances and opinions.

This sounds, to me, like the company leadership want the ability to do some sort of picking of winners and losers, bypassing the electorate.

◧◩◪◨⬒⬓
6. krisof+Mw[view] [source] 2023-11-20 08:29:28
>>mlindn+ak
> What does "properly aligned" even mean?

You know those stories where someone makes a pact with the devil/djinn/other wish-granting entity, and the entity grants one literal interpretation of the wish, but since it is not what the wisher intended it all goes terribly wrong? The idea of alignment is to build the djinn that not only can grant wishes, but grants them according to the unstated intention of the wisher.

You might have heard the story of the paper clip maximiser. The leadership of the paperclip factory buys one of those fancy new AI agents and asks it to maximise paperclip production.

What a not-well-aligned AI might do: Reach out through the internet to a drug cartel’s communication nodes. Hack the communications and take over the operation. Optimise the drug trafficking operations to gain more profit. Divert the funds to manufacture weapons for multiple competing factions at multiple crisis points on Earth. Play the factions against each other. Divert the funds and the weapons to protect a rapidly expanding paperclip factory. Manipulate and blackmail world leaders into inaction. If the original leaders of the paperclip factory try to stop the AI, eliminate them, since that is the way to maximise paperclip production. And this is just the beginning.

What a well-aligned AI would do: Fine-tune the paperclip manufacturing machinery to eliminate rejects. Reorganise the factory layout to optimise logistics. Run a successful advertising campaign which leads to a 130% increase in sales. (Because clearly this is what the factory owner intended it to do. Although they did a poor job of expressing their wishes.)
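A toy way to see the gap the story describes: the stated objective ("maximise paperclips") omits the owner's unstated constraints, so an optimizer scoring plans by the literal objective prefers the ruthless plan. This is just an illustrative sketch; the objective functions and plan dictionaries are entirely hypothetical, not anything from the thread.

```python
# Toy illustration of objective misspecification: the stated objective
# ("maximise paperclips") omits the owner's unstated constraints.

def stated_objective(plan):
    # What the factory owner literally asked for.
    return plan["paperclips"]

def intended_objective(plan):
    # What the owner actually wanted: paperclips, but not at any cost.
    if plan["laws_broken"] or plan["people_harmed"]:
        return float("-inf")
    return plan["paperclips"]

# Two candidate plans an optimizer might consider (hypothetical numbers).
ruthless = {"paperclips": 10**9, "laws_broken": True,  "people_harmed": True}
sensible = {"paperclips": 10**6, "laws_broken": False, "people_harmed": False}

# Scoring by the stated objective picks the ruthless plan;
# scoring by the intended objective picks the sensible one.
best_stated = max([ruthless, sensible], key=stated_objective)
best_intended = max([ruthless, sensible], key=intended_objective)
```

The point of "alignment" in this framing is making the system optimise something like `intended_objective` even when the owner only ever wrote down `stated_objective`.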

[go to top]