zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. shubha+B7[view] [source] 2023-11-22 06:50:16
>>staran+(OP)
At the end of the day, we still don't know exactly what happened, and probably never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions had been brewing for quite a while, as is evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. But they never had a chance. I think only a minority of the general public truly cares about AI Safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to think of the other side. If you truly believed that Superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

Honestly, I myself can't take the threat seriously. But I do want to understand it more deeply than before. Maybe it isn't as devoid of substance as I thought. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...

◧◩
2. swatco+69[view] [source] 2023-11-22 07:00:22
>>shubha+B7
> If you truly believed that Superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

FWIW, that's called zealotry, and people do a lot of dramatic, disruptive things in its name. It may be rightly aimed and save the world (or whatever you care about), but more often it's a signal to reflect on whether you, individually, have really found yourself at the make-or-break nexus of human existence. The answer seems to be "no" most of the time.

◧◩◪
3. mlyle+La[view] [source] 2023-11-22 07:11:27
>>swatco+69
Your comment perfectly justifies never worrying at all about the potential for existential or major risks; after all, one would be wrong most of the time and just engaging in zealotry.
◧◩◪◨
4. Random+Wb[view] [source] 2023-11-22 07:18:05
>>mlyle+La
Probably not a bad heuristic: unless proven, don't assume existential risk.
◧◩◪◨⬒
5. _Alger+9i[view] [source] 2023-11-22 08:06:07
>>Random+Wb
Existential risks are usually proven by the subject being extinct, at which point no action can be taken to prevent it.

Reasoning about tiny probabilities of massive (or infinite) cost is hard: the expected value is large, yet simply gambling on the risk not materializing will almost certainly work out in any given year. We should still attempt to incorporate such risks into decision making, because tiny yearly probabilities are still virtually certain to materialize at larger time scales (e.g. hundreds to thousands of years).
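A quick back-of-the-envelope sketch of that last point (the 0.1% yearly figure below is purely illustrative, not a claim about any particular risk):

    # chance that an event with a small yearly probability p
    # occurs at least once within n years: 1 - (1 - p) ** n
    p = 0.001  # assumed 0.1% chance per year, for illustration only
    for n in (10, 100, 1000):
        print(n, round(1 - (1 - p) ** n, 3))
    # -> about 0.01 at 10 years, 0.095 at 100 years, 0.632 at 1000 years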

◧◩◪◨⬒⬓
6. Random+2k[view] [source] 2023-11-22 08:20:04
>>_Alger+9i
Are we extinct? No. Could a large impact kill us all? Yes.

Expected value and probability have no place in these discussions. Some risks we know can materialize; for others we have, at best, a story about what could happen. We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.

◧◩◪◨⬒⬓⬔
7. _Alger+Wk[view] [source] 2023-11-22 08:27:53
>>Random+2k
>We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.

How do you prove a mechanism for doom without it already having occurred? The existential risk is completely orthogonal to whether it has already happened, and generally action can only be taken to prevent or mitigate before it happens. Having the foresight to mitigate future problems is a good thing and should be encouraged.

>Expected value and probability have no place in these discussions.

I disagree. Expected value and probability are a framework for decision making in uncertain environments. They certainly have a place in these discussions.

◧◩◪◨⬒⬓⬔⧯
8. Random+cm[view] [source] 2023-11-22 08:36:35
>>_Alger+Wk
I disagree that there is orthogonality. Have we killed ourselves off with nuclear weapons, for example? Anyone can make up any story; at the very least there needs to be a proven mechanism. The precautionary principle is not useful when facing totally hypothetical issues.

People purposefully avoided probabilities in high-risk existential situations in the past. There is only one path of events, and we need to manage that one.

◧◩◪◨⬒⬓⬔⧯▣
9. mlyle+sj2[view] [source] 2023-11-22 19:45:13
>>Random+cm
Probability is just one way to express uncertainties in our reasoning. If there's no uncertainty, it's pretty easy to chart a path forward.

OTOH, the precautionary principle is too cautious.

There's a lot of reason to think that AGI could be extremely destabilizing, though, aside from the "Skynet takes over" scenarios. We don't know how much cushion there is in the framework of our civilization to absorb the worst kinds of foreseeable shocks.

This doesn't mean it's time to stop progress, but building a lot of risk mitigation into how we approach it makes sense.

◧◩◪◨⬒⬓⬔⧯▣▦
10. Random+9K2[view] [source] 2023-11-22 22:02:57
>>mlyle+sj2
Why does it make sense? It's a hypothetical risk with poorly defined outlines.
◧◩◪◨⬒⬓⬔⧯▣▦▧
11. mlyle+HO2[view] [source] 2023-11-22 22:28:20
>>Random+9K2
There's a big family of risks here.

The simplest is pretty easy to articulate and weigh.

If you can make a $5,000 GPU into something like an 80 IQ human overall, but with savant-like capabilities in math and in accessing databases and the accumulated knowledge of the internet, and that can work 24/7 without distraction... it will straight-out replace the majority of the knowledge workforce within a couple of years.
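A rough sketch of the economics (every number besides the $5,000 GPU above is an assumption for illustration, not data):

    # hypothetical yearly cost of an AGI-on-a-GPU vs. a knowledge worker
    gpu_cost = 5_000       # one-off hardware cost, from the scenario above
    lifetime_years = 3     # assumed useful life of the card
    running_cost = 2_000   # assumed yearly power/hosting cost
    worker_salary = 60_000 # assumed average knowledge-worker salary

    agi_yearly = gpu_cost / lifetime_years + running_cost
    print(agi_yearly, worker_salary / agi_yearly)
    # -> ~3,667 per year, roughly 16x cheaper than the human,
    #    before even counting the 24/7 availability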

The dawn of industrialism and later the information age were extremely disruptive, but they were at least limited by our capacity to make machines or programs for specific tasks, and they took decades to ramp up. An AGI will not be limited by this; ordinary human instructions will suffice. Uptake will be millions of units per year, replacing tens of millions of humans. Workers will not be able to adapt.

Further, most written communication will no longer be written by humans; it'll be "code" between AI agents masquerading as human correspondence, etc. The set of profound negative consequences is enormous; relatively cheap AGI is a fast-traveling shock that we've not seen the likes of before.

For instance, I'm a schoolteacher these days. I'm already watching kids become completely demoralized about writing; as far as they can tell, ChatGPT does it better than they ever could (this is still false, but a 12-year-old can't tell the difference)-- so why bother to learn? If fairly stupid AI has this effect, what will AGI do?

And this is assuming that the AGI itself stays fairly dumb and doesn't do anything malicious-- deliberately or accidentally. Will bad actors have their capabilities significantly magnified? If it acts with agency against us, that's even worse. If it exponentially grows in capability, what then?

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
12. Random+2S2[view] [source] 2023-11-22 22:47:55
>>mlyle+HO2
I just don't know what to do with the hypotheticals. They require the existence of something that does not exist, they assume a certain socio-economic response, and so forth.

Are children as demoralized about doing addition, or about moving fast, as they are about writing? If not, why not? Is there a way to counter the demoralization?

[go to top]