zlacker

[parent] [thread] 5 comments
1. reveli+(OP)[view] [source] 2023-07-06 13:04:21
"Pull the plug" is meant literally, as in: turn off the power to the AI. Carbon-based fuels, let alone cocaine, don't have off switches. The situation just isn't analogous at all.
replies(1): >>ben_w+75
2. ben_w+75[view] [source] 2023-07-06 13:29:14
>>reveli+(OP)
I assumed literally, and yet the argument applies: we have not been able to stop those things even when using guns to shoot the people doing them. The same pressures that keep people growing the plants, processing, transporting, selling, buying, and consuming them show that a system — intelligent or otherwise — can motivate people to keep the lights on.

There were four reactors at the Chernobyl plant; the one that exploded did so in 1986, and the others were shut down in 1991, 1996, and 2000.

There's no plausible way to guess at the speed of change from a misaligned AI. Can you be confident that 14 years isn't enough time to cause problems?

replies(2): >>reveli+Kk1 >>SirMas+b7f
3. reveli+Kk1[view] [source] [discussion] 2023-07-06 18:06:02
>>ben_w+75
I mean, as pointed out by a sibling comment, the reason it's so hard to shut those things down is that they benefit a lot of people and there's huge organic demand. Even the morality is hotly debated; there's no absolute consensus on the badness of those things.

Whereas an AI that tries to kill everyone or take over the world seems like pretty explicitly bad news, and everyone would be united in stopping it. To work around that, you have to significantly complicate the AI doom scenario into one in which a large number of people think the AI is on their side and bringing about a utopia when it's actually ending the world, or something like that. But what's new? That's the history of humanity. The communists, the Jacobins, the Nazis all thought they were building a better world and had to have their "off switch" thrown at great cost in lives. More subtly, the people advocating for clearly civilization-destroying moves like banning all fossil fuels, or net zero by 2030, also think they're fighting on the side of the angels.

So the only kind of AI doom scenario I find credible is one in which it manages to trick lots of powerful people into doing something stupid and self-destructive using clever-sounding words. But it's hard to get excited about this scenario because, eh, we already have that problem x100, except the misaligned intelligences are called academics.

replies(1): >>ben_w+G74
4. ben_w+G74[view] [source] [discussion] 2023-07-07 12:54:50
>>reveli+Kk1
> I mean, as pointed out by a sibling comment, the reason it's so hard to shut those things down is that they benefit a lot of people and there's huge organic demand. Even the morality is hotly debated, there's no absolute consensus on the badness of those things

And my point is that this can also be true of a misaligned AI.

It doesn't have to be like Terminator; it can be slowly doing something we like, whose downsides we overlook until it's too late.

Doesn't matter if that's "cure cancer" but the cure has a worse-than-cancer side effect that only manifests 10 years later, or a mere design for a fusion reactor where we have to build it ourselves and that leads to weapons proliferation, or A/B testing the design of a social media website to make it more engaging — and it gets so engaging that people choose not to hook up IRL and start families.

> But, what's new? That's the history of humanity. The communists, the Jacobins, the Nazis, all thought they were building a better world and had to have their "off switch" thrown at great cost in lives.

Indeed.

I would agree that this is both more likely and less costly than "everyone dies".

But I'd still say it's really bad and we should try to figure out in advance how to minimise this outcome.

> except the misaligned intelligences are called academics

Well, that's novel; normally at this point I see people saying "corporations", and very rarely "governments".

Not seen academics get stick before, except in history books.

replies(1): >>reveli+8x7
5. reveli+8x7[view] [source] [discussion] 2023-07-08 13:15:52
>>ben_w+G74
> But I'd still say it's really bad and we should try to figure out in advance how to minimise this outcome.

For sure. But I don't see what's AI-specific about it. If the AI doom scenario is a super-smart AI tricking people into doing self-destructive things with clever words, then everything you need to do to vaccinate people against that is the same as if humans were doing the tricking: teaching critical thinking, self-reliance, and judging arguments on merit rather than on surface-level attributes like the complexity of the language or the titles of the speakers. All of these are things our society objectively sucks at today, and we have a ruling class — including many of the sorts of people who work at AI companies — who are hellbent on attacking these healthy mental habits, and the people who engage in them!

> Not seen academics get stick before, except in history books.

For "academics" you could also read "intellectuals". Marx wasn't an academic, but he very much wanted to be; if he lived in today's world, he'd certainly be one of the most famous academics.

I'm of the view that corporations are very tame compared to the damage caused by runaway academia. It wasn't corporations that locked me in my apartment for months at a time on the back of pseudoscientific modelling and lies about vaccines. It wasn't even politicians really. It was governments doing what they were told by the supposedly intellectually superior academic class. And it isn't corporations trying to get rid of cheap energy and travel. And it's not governments convincing people that having children is immoral because of climate change. All these things are from academics, primarily in universities but also those who work inside government agencies.

When I look at the major threats to my way of life today, academic pseudo-science sits clearly at number 1 by a mile. To the extent corporations and governments are a threat, it's because they blindly trust academics. If you replace Professor of Whateverology at Harvard with ChatGPT, what changes? The underlying sources of mental and cultural weakness are the same.

6. SirMas+b7f[view] [source] [discussion] 2023-07-10 21:29:07
>>ben_w+75
"we have not been able to stop those things even when using guns to shoot people doing them."

I assume we have not been able to stop people from creating and using carbon-based energy because a LOT of people still want to create and use it.

I don't think a LOT of people will want to keep an AI system running that is essentially wiping out humans.
