zlacker

1. _Alger+(OP) 2023-11-22 08:27:53
>We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.

How do you prove a mechanism for doom without it already having occurred? Whether an existential risk is real is completely orthogonal to whether it has already happened, and generally action can only be taken to prevent or mitigate it before it happens. Having the foresight to mitigate future problems is a good thing and should be encouraged.

>Expected value and probability have no place in these discussions.

I disagree. Expected value and probability are a framework for decision-making in uncertain environments. They certainly have a place in these discussions.
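
As a toy illustration of that framework (every number below is a made-up assumption, not an estimate of anything real):

    # Toy expected-value comparison for acting on an uncertain risk.
    # All numbers are illustrative assumptions, not real estimates.
    p_catastrophe = 0.01      # assumed probability the risk materializes
    harm = 1000.0             # assumed damage if it does (arbitrary units)
    mitigation_cost = 5.0     # assumed up-front cost of acting now

    ev_do_nothing = p_catastrophe * harm   # 10.0 expected damage
    ev_mitigate = mitigation_cost          # 5.0, assuming mitigation fully works

    # Acting is worth it whenever p * harm > mitigation cost,
    # even though the catastrophe has never actually occurred.
    print(ev_mitigate < ev_do_nothing)     # True under these assumptions

The point isn't the specific numbers; it's that the comparison can be made at all, before anything has happened.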

2. Random+g1 2023-11-22 08:36:35
>>_Alger+(OP)
I disagree that there is orthogonality. Have we all been killed by nuclear weapons, for example? Anyone can make up any story - at the very least there needs to be a proven mechanism. The precautionary principle is not useful when facing totally hypothetical issues.

People have purposefully avoided probabilities in high-risk existential situations in the past. There is only one path of events, and we need to manage that one.

3. mlyle+wY1 2023-11-22 19:45:13
>>Random+g1
Probability is just one way to express uncertainties in our reasoning. If there's no uncertainty, it's pretty easy to chart a path forward.

OTOH, the precautionary principle is too cautious.

There's a lot of reason to think that AGI could be extremely destabilizing, though, aside from the "Skynet takes over" scenarios. We don't know how much cushion there is in the framework of our civilization to absorb the worst kinds of foreseeable shocks.

This doesn't mean it's time to stop progress, but employing a whole lot of risk mitigation in how we approach it makes sense.

4. Random+dp2 2023-11-22 22:02:57
>>mlyle+wY1
Why does it make sense? It's a hypothetical risk with poorly defined outlines.
5. mlyle+Lt2 2023-11-22 22:28:20
>>Random+dp2
There's a big family of risks here.

The simplest is pretty easy to articulate and weigh.

If you can make a $5,000 GPU into something like an 80-IQ human overall, but with savant-like capabilities in math, databases, and the accumulated knowledge of the internet, and that can work 24/7 without distraction... it will straight-out replace the majority of the knowledge workforce within a couple of years.

The dawn of industrialism and later the information age were extremely disruptive, but they were at least limited by our capacity to make machines or programs for specific tasks and took decades to ramp up. An AGI will not be limited by this; ordinary human instructions will suffice. Uptake will be millions of units per year replacing tens of millions of humans. Workers will not be able to adapt.
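
To make the scale concrete, here is a crude back-of-envelope sketch (every number is a hypothetical assumption, not a forecast):

    # Crude back-of-envelope for displacement speed.
    # Every number below is a hypothetical assumption, not a forecast.
    units_per_year = 3_000_000   # assumed AGI units shipped annually
    workers_per_unit = 5         # assumed: one 24/7 unit covers ~5 day-shift workers
    years = 2

    displaced = units_per_year * workers_per_unit * years
    print(f"{displaced:,} workers displaced in {years} years")  # 30,000,000

Even if those guesses are off by a factor of a few, the transition is measured in years, not the decades earlier shocks took.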

Further, most written communication will no longer be written by humans; it'll be "code" between AI agents masquerading as human correspondence, etc. The set of profound negative consequences is enormous; relatively cheap AGI is a fast-traveling shock that we've not seen the likes of before.

For instance, I'm a schoolteacher these days. I'm already watching kids become completely demoralized about writing; as far as they can tell, ChatGPT does it better than they ever could (this is still false, but a 12-year-old can't tell the difference), so why bother to learn? If fairly stupid AI has this effect, what will AGI do?

And this is assuming that the AGI itself stays fairly dumb and doesn't do anything harmful, deliberately or accidentally. Will bad actors have their capabilities significantly magnified? If it acts with agency against us, that's even worse. If it exponentially grows in capability, what then?

6. Random+6x2 2023-11-22 22:47:55
>>mlyle+Lt2
I just don't know what to do with the hypotheticals. It needs the existence of something that does not exist, it needs a certain socio-economic response, and so forth.

Are children equally demoralized about arithmetic, or about running fast, as they are about writing? If not, why? Is there a way to counter the demoralization?

7. mlyle+ty2 2023-11-22 22:56:35
>>Random+6x2
> It needs the existence of something that does not exist,

Yes, if we're concerned about the potential consequences of releasing AGI, we need to consider the likely outcomes if AGI is released. Ideally, we think about this some before AGI shows up in a form that could be released.

> it needs a certain socio-economic response, and so forth.

Absent large interventions, this will happen.

> Are children equally demoralized about arithmetic

Absolutely. Basic arithmetic, etc., has gotten worse. And emerging things like Photomath are fairly corrosive, too.

> Is there a way to counter the demoralization?

We're all looking. I make the argument to middle school and high school students that AI is a great piece of leverage for the most skilled workers: they can multiply their effort if they are good managers, know what good work product looks like, and can fill the gaps. It works somewhat, because I'm working with a cohort of students who can believe they can reach this ("most-skilled") tier of achievement. I also show students what happens when GPT-4 tries to "improve" high-quality writing.

OTOH, these arguments become much less true if cheap AGI shows up.
