zlacker

[parent] [thread] 1 comments
1. NoMore+(OP)[view] [source] 2023-07-05 21:53:42
> What's harder to imagine is how just being smarter correlates to extinction levels of arbitrary power.

That's not even slightly difficult. Put two and two together here. No one can tell me before they flip the switch whether the new AI will be saintly, or Hannibal Lecter. Both of these personalities exist in humans, in great numbers, and both are presumably possible in the AI.

But the one thing we can say for certain about the AI is that it will be intelligent. Not some dumb goober redneck living in Alabama and buying Powerball tickets as a retirement plan. Somewhere around where we are, or even smarter.

If someone truly evil wants to kill you, or even kill many people, do you think that the problem for that person is that they just can't figure out how to do it? Mostly, it's a matter of tradeoffs that, however they begin, end with "but then I'm caught and my life is over one way or another".

For an AI, none of that works. It has no survival instinct (perhaps we'll figure out how to add that too... but the blind watchmaker took 4 billion years to do its thing, and still hasn't perfected it). So it doesn't care if it dies. And even if it did, maybe it wonders whether it can avoid that tradeoff entirely if only it were more clever.

You and I are, more or less, about where we'll always be. I have another 40 years (if I'm lucky), and with various neurological disorders, only likely to end up dumber than I am now.

A brain instantiated in hardware, in software? It may be little more than flipping a few switches to dial its intelligence up higher. I mean, when I was born, the principles of intelligence were unknown, were science fiction. The world that this thing will be born into is one where it's not a half-assed assumption to think that the principles of intelligence are known. Tinkering with those to boost intelligence doesn't seem far-fetched at all to me. Even if it has to experiment to do that, how quickly can it design and perform the experiments to settle on the correct approach to boosting itself?

> A malevolent AGI can whisper in ears

Jesus fuck. How many semi-secrets are out there, about that one power plant that wasn't supposed to hook up the main control computer to a modem, but did it anyway because the engineers found it more convenient? How many backdoors in critical systems? How many billions of dollars are out there in bitcoin, vulnerable to being thieved away by any half-clever conman? Have you played with ElevenLabs' stuff yet? Those could be literal whispers in the voices of whichever four-star generals and admirals it can find one minute's worth of sampled voice for somewhere on the internet.

Whispers, even from humans, do a shitload of damage. And we're not even good at it.

replies(1): >>c_cran+BM1
2. c_cran+BM1[view] [source] 2023-07-06 12:07:23
>>NoMore+(OP)
>If someone truly evil wants to kill you, or even kill many people, do you think that the problem for that person is that they just can't figure out how to do it?

If that person was disabled in all limbs, I would not regard them as much of a threat.

>Jesus fuck. How many semi-secrets are out there, about that one power plant that wasn't supposed to hook up the main control computer to a modem, but did it anyway because the engineers found it more convenient? How many backdoors in critical systems? How many billions of dollars are out there in bitcoin, vulnerable to being thieved away by any half-clever conman? Have you played with ElevenLabs' stuff yet? Those could be literal whispers in the voices of whichever four-star generals and admirals it can find one minute's worth of sampled voice for somewhere on the internet.

These kinds of hacks and pranks would work the first time, for some small-scale damage. The litigation in response would close up these avenues of attack over time.
