zlacker

[parent] [thread] 21 comments
1. NoMore+(OP)[view] [source] 2023-07-05 21:26:34
> This is literally the plot to a B-movie.

Are there never any B-movies with realistic plots? Is that some sort of serious rebuttal?

> Sometime in the near future this all powerful being will kill us all by somehow

The trouble here is that the people who talk like you are simply incapable of imagining anyone more intelligent than themselves.

It's not that you have trouble imagining artificial intelligence... if you were incapable of that in the technology industry, everyone would just think you an imbecile.

And it's not that you have trouble imagining malevolent intelligences. Sure, they're far away from you, but the accounts of such people are well-documented and taken as a given. If you couldn't imagine them, people would just call you naive. Gullible even.

So, a malevolent artificial intelligence is just one more possibility you've never bothered to weigh, because whether that is a 0.01% risk or a 99% risk, you assume you'll still be more intelligent than it. Hell, this isn't a neutral outcome; maybe you'll even get to play hero.

> Care not about your racist algorithms! For someday soon

Haha. That's what you're worried about? I don't know that there is such a thing as a racist algorithm, except those which run inside meat brains. Tell me why some double-digit percentage of Asians are not admitted to the top schools; that's the racist algorithm.

Maybe if logical systems seem racist, it's because your ideas about racism are distant from and unfamiliar with reality.

replies(2): >>c_cran+O >>Camper+3g
2. c_cran+O[view] [source] 2023-07-05 21:30:04
>>NoMore+(OP)
I, and most people, can imagine something smarter than ourselves. What's harder to imagine is how just being smarter correlates to extinction levels of arbitrary power.

A malevolent AGI can whisper in ears, it can display mean messages, perhaps it can even twitch whatever physical components happen to be hooked up to old Windows 95 computers... not that scary.

replies(3): >>NoMore+25 >>ben_w+bh >>anon77+Iu
3. NoMore+25[view] [source] [discussion] 2023-07-05 21:53:42
>>c_cran+O
> What's harder to imagine is how just being smarter correlates to extinction levels of arbitrary power.

That's not even slightly difficult. Put two and two together here. No one can tell me before they flip the switch whether the new AI will be saintly, or Hannibal Lecter. Both of these personalities exist in humans, in great numbers, and both are presumably possible in the AI.

But, the one thing we can say for certain about the AI is that it will be intelligent. Not a dumb goober redneck living in Alabama and buying Powerball tickets as a retirement plan. Somewhere around where we are, or even more.

If someone truly evil wants to kill you, or even kill many people, do you think that the problem for that person is that they just can't figure out how to do it? Mostly, it's a matter of tradeoffs that, however they begin, end with "but then I'm caught and my life is over one way or another".

For an AI, none of that works. It has no survival instinct (perhaps we'll figure out how to add that too... but the blind watchmaker took 4 billion years to do its thing, and still hasn't perfected that). So it doesn't care if it dies. And if it did, maybe it wonders if it can avoid that tradeoff entirely if only it were more clever.

You and I are, more or less, about where we'll always be. I have another 40 years (if I'm lucky), and with various neurological disorders, only likely to end up dumber than I am now.

A brain instantiated in hardware, in software? It may be little more than flipping a few switches to dial its intelligence up higher. I mean, when I was born, the principles of intelligence were unknown, were science fiction. The world that this thing will be born into is one where it's not a half-assed assumption to think that the principles of intelligence are known. Tinkering with those to boost intelligence doesn't seem far-fetched at all to me. Even if it has to experiment to do that, how quickly can it design and perform the experiments to settle on the correct approach to boosting itself?

> A malevolent AGI can whisper in ears

Jesus fuck. How many semi-secrets are out there, about that one power plant that wasn't supposed to hook up the main control computer to a modem, but did it anyway because the engineers found it more convenient? How many backdoors in critical systems? How many billions of dollars are out there in bitcoin, vulnerable to being thieved away by any half-clever conman? Have you played with ElevenLabs' stuff yet? Those could be literal whispers in the voices of whichever 4-star generals and admirals it can find one minute's worth of sampled voice for somewhere on the internet.

Whispers, even from humans, do a shitload of damage. And we're not even good at it.

replies(1): >>c_cran+DR1
4. Camper+3g[view] [source] 2023-07-05 22:58:32
>>NoMore+(OP)
There are humans with a 70-IQ-point advantage over me. Should I worry that a cohort of supergeniuses is plotting an existential demise for the rest of us? No; there are power structures and social safeguards going back thousands of years to forestall that very possibility.

Well, what's different now?

replies(2): >>ben_w+al >>NoMore+WT
5. ben_w+bh[view] [source] [discussion] 2023-07-05 23:05:15
>>c_cran+O
How many political or business leaders personally did the deeds, good or ill, that are attributed to them?

George Washington didn't personally fight off all the British single-handed, he and his co-conspirators used eloquence to convince people to follow them to freedom; Stalin didn't personally take food from the mouths of starving Ukrainians, he inspired fear that led to policies which had this effect; Musk didn't weld the seams of every Tesla or Falcon, nor dig tunnels or build TBMs for TBC, nor build the surgical robot that installed Neuralink chips, he convinced people his vision of the future was one worth the effort; and Indra Nooyi doesn't personally fill up all the world's Pepsi bottles, that's something I assume[0] is done with several layers of indirection via paying people to pay people to pay people to fill the bottles.

[0] I've not actually looked at the org chart because this is rhetorical and I don't care

replies(1): >>c_cran+rR1
6. ben_w+al[view] [source] [discussion] 2023-07-05 23:27:05
>>Camper+3g
> Well, what's different now?

The first AGI, regardless of if it's a brain upload or completely artificial, is likely to have analogs of approximately every mental health disorder that's mathematically possible, including ones we don't have words for because they're biologically impossible.

So, take your genius, remember it's completely mad in every possible way at the same time, and then give it even just the capabilities that we see boring old computers having today, like being able to translate into any language, or write computer programs from textual descriptions, or design custom toxins, or place orders for custom gene sequences and biolab equipment.

That's a big difference. But even if it was no difference, the worst a human can get is still at least in the tens of millions dead, as demonstrated by at least three different mid-20th century leaders.

It doesn't matter why it goes wrong, whether it thinks it's trying to immanentize the eschaton or some secular equivalent, or whether it watches Westworld or reads I Have No Mouth and I Must Scream and thinks "I like this outcome": the first one is almost certainly going to be more insane than the brainchild of GLaDOS and Lore, who as fictional characters were constrained by the need for their flaws to be interesting.

7. anon77+Iu[view] [source] [discussion] 2023-07-06 00:32:11
>>c_cran+O
> A malevolent AGI can whisper in ears, it can display mean messages, perhaps it can even twitch whatever physical components happen to be hooked up to old Windows 95 computers... not that scary.

It can found a cult - imagine something like Scientology founded by an AI. Once it has human followers it can act in the world with total freedom.

replies(2): >>reduce+z71 >>c_cran+dR1
8. NoMore+WT[view] [source] [discussion] 2023-07-06 03:40:09
>>Camper+3g
> Should I worry that a cohort of supergeniuses is plotting an existential demise for the rest of us?

Because they're human. They've evolved from a lineage whose biggest advantage was that it was social. Genes that could result in some large proportion of serial killers and genocidal tyrants are mostly purged. Even then, a few crop up from time to time.

There is no filter in the AI that purges these "genes". No evolutionary process to lessen the chances. And some relatively large risk that it's far, far more intelligent than a 70-IQ-point spread over you.

> There are power structures and social safeguards going back thousands of years to forestall that very possibility?

Huh? Why the fuck would it care about primate power structures?

Sometimes even us bald monkeys don't care about those, and it never ever fails to freak people the fuck out. Results in assassinations and other nonsense, and you all gibber and pee your pants and ask "how could anyone do that". I'd ask you to imagine such impulses and norm-breaking behaviors dialed up to 11, but what's the point... you can't even formulate a mental model of it when the volume's only at 1.6.

9. reduce+z71[view] [source] [discussion] 2023-07-06 05:34:12
>>anon77+Iu
This is coming so fast and absolutely no one is ready for it. LLMs, using text, audio, and video generation, will quickly convince a sizeable slice of religious people that this is the coming of God à la Revelation, that they are prophets, and that there's bidding to do.
10. c_cran+dR1[view] [source] [discussion] 2023-07-06 12:04:23
>>anon77+Iu
If it wants to found a cult, it has to compete with all the human cults out there. Cults usually benefit immeasurably from the founder having a personal charisma that comes out in person.
replies(1): >>trasht+BB4
11. c_cran+rR1[view] [source] [discussion] 2023-07-06 12:05:42
>>ben_w+bh
The methods by which humans coerce and control other humans do not rely on plain intelligence alone. That much is clear, as George Washington and Stalin were not the smartest men in the room.
replies(1): >>NoMore+Y42
12. c_cran+DR1[view] [source] [discussion] 2023-07-06 12:07:23
>>NoMore+25
>If someone truly evil wants to kill you, or even kill many people, do you think that the problem for that person is that they just can't figure out how to do it?

If that person was disabled in all limbs, I would not regard them as much of a threat.

>Jesus fuck. How many semi-secrets are out there, about that one power plant that wasn't supposed to hook up the main control computer to a modem, but did it anyway because the engineers found it more convenient? How many backdoors in critical systems? How many billions of dollars are out there in bitcoin, vulnerable to being thieved away by any half-clever conman? Have you played with ElevenLabs' stuff yet? Those could be literal whispers in the voices of whichever 4 star generals and admirals that it can find 1 minutes worth of sampled voice somewhere on the internet.

These kinds of hacks and pranks would work the first time for some small-scale damage. The litigation in response would close up these avenues of attack over time.

13. NoMore+Y42[view] [source] [discussion] 2023-07-06 13:25:00
>>c_cran+rR1
So this is down to your poor definition of intelligence?

For you, it's always the homework problems that your teacher assigned you in grade school, nothing else is intelligent. What to say to someone to have them be your friend on the playground, that never counted. Where and when to show up (or not), so that the asshole 4 grades above you didn't push you down into the mud... not intelligence. What to wear, what things to concentrate on about your appearance, how to speak, which friendships and romances to pursue, etc.

All just "animal cunning". The only real intelligence is how to work through calculus problem number three.

They were smart enough at these things that they did them without even consciously thinking about it. They were savants at it. I don't think the AI has to be a savant, though; it just has to be able to come up with the right answers and responses, and quickly enough that it can act on them.

replies(1): >>c_cran+Z52
14. c_cran+Z52[view] [source] [discussion] 2023-07-06 13:29:16
>>NoMore+Y42
I don't define cunning and strength as intelligence, even if they are more useful for shoving someone into the mud. Intelligence is a measure of the ability to understand and solve abstract problems, not to be rich and famous.
replies(2): >>ben_w+ud2 >>MrScru+v1h
15. ben_w+ud2[view] [source] [discussion] 2023-07-06 14:02:04
>>c_cran+Z52
Cunning absolutely should count as an aspect of intelligence.

If this is just a definitions issue, s/artificial intelligence/artificial cunning/g to the same effect.

Strength seems somewhat irrelevant either way, given the existence of Windows for Warships[0].

[0] not the real name: https://en.wikipedia.org/wiki/Submarine_Command_System

replies(1): >>c_cran+Uf2
16. c_cran+Uf2[view] [source] [discussion] 2023-07-06 14:10:13
>>ben_w+ud2
Emotional intelligence is sometimes defined in a way to encapsulate some of the values of cunning. Sometimes it correlates with power, but sometimes it does not. To get power in a human civilization also seems to require a great deal of luck, just due to the general chaotic system that is the world, and a good deal of presence. The decisions that decide the fate of the world happen in the smoky backdoor rooms, not exclusively over zoom calls with an AI generated face.
replies(1): >>ben_w+Bs2
17. ben_w+Bs2[view] [source] [discussion] 2023-07-06 14:50:37
>>c_cran+Uf2
> The decisions that decide the fate of the world happen in the smoky backdoor rooms, not exclusively over zoom calls with an AI generated face.

Who is Satoshi Nakamoto?

What evidence is there for the physical existence of Jesus?

"Common Sense" by Thomas Paine was initially published anonymously.

This place, here, where you and I are conversing… I don't know who you are, and yet for most of the world, this place is a metaphorical "smokey backroom".

And that's disregarding how effective phishing campaigns are even without a faked face or a faked voice.

replies(1): >>c_cran+gz2
18. c_cran+gz2[view] [source] [discussion] 2023-07-06 15:13:22
>>ben_w+Bs2
Satoshi Nakamoto is a man who thought that he could upend the entire structure of human governance and economics with his One Neat Trick. Reality is sure to disappoint him and his followers dearly with time.

>What evidence is there for the physical existence of Jesus?

Limited, to the extent that physical evidence for the existence of anyone from that time period is limited. I think it's fairly likely there was a person named Jesus who lived with the apostles.

>"Common Sense" by Thomas Paine was initially published anonymously.

The publishing of Common Sense was far less impactful on the revolution than the meetings held by members of the future Continental Congress. Common Sense was the justification given by those elites for what they were going to do.

>This place, here, where you and I are conversing… I don't know who you are, and yet for most of the world, this place is a metaphorical "smokey backroom".

No important decisions happen because of discussions here and you are deluding yourself if you think otherwise.

Phishing campaigns can be effective at siphoning limited amounts of money and embarrassing personal details from people's email accounts. If you suggested that someone could take over the world just via phishing, you'd be rightfully laughed out of the room.

19. trasht+BB4[view] [source] [discussion] 2023-07-07 00:02:12
>>c_cran+dR1
Video tends to be enough to create a cult. And AIs will be able to create videos very soon. An AI can create exactly the kind of avatar or set of avatars that would maximize engagement. It could do 1-on-1 calls with each of the followers, and provide spiritual guidance tailored specifically for them, as it could have the capacity to truly "listen" to each of them.

And it would not be limited to acting as the cult leader; it could also provide fake cult followers that would convince the humans that the leaders possessed superhuman wisdom.

It could also combine this with a full machinery for A/B-testing and similar experiments to ensure that the message it is communicating is optimal in terms of its goals.
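
Mechanically, that kind of A/B machinery is nothing exotic. Here's a minimal sketch (hypothetical message labels, simulated audience responses) of an epsilon-greedy bandit converging on the best-performing message:

```python
import random

def epsilon_greedy_ab_test(variants, respond, rounds=10000, epsilon=0.1, seed=0):
    """Return the message variant with the best observed engagement rate.

    variants: list of message labels (hypothetical).
    respond:  callable(variant) -> bool, simulating whether a
              recipient engages with that message.
    """
    rng = random.Random(seed)
    shown = {v: 0 for v in variants}
    engaged = {v: 0 for v in variants}

    def rate(v):
        return engaged[v] / shown[v] if shown[v] else 0.0

    for _ in range(rounds):
        if rng.random() < epsilon:
            v = rng.choice(variants)        # explore a random variant
        else:
            v = max(variants, key=rate)     # exploit the current best
        shown[v] += 1
        if respond(v):
            engaged[v] += 1
    return max(variants, key=rate)

# Simulated audience: each variant has a fixed true engagement rate.
rates = {"prophecy-A": 0.02, "prophecy-B": 0.05, "prophecy-C": 0.11}
audience = random.Random(42)
best = epsilon_greedy_ab_test(list(rates), lambda v: audience.random() < rates[v])
print(best)  # converges on the highest-rate variant, "prophecy-C"
```

Nothing here requires superintelligence; the scary part in the comment above is running this loop against real people at scale, with the AI generating the variants itself.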

replies(1): >>c_cran+3k6
20. c_cran+3k6[view] [source] [discussion] 2023-07-07 13:47:14
>>trasht+BB4
I'm not aware of any serious cult created solely through videos.
replies(1): >>MrScru+K0h
21. MrScru+K0h[view] [source] [discussion] 2023-07-10 20:51:10
>>c_cran+3k6
Well, you could argue about the definition of a cult but in many ways the influencer phenomenon is a modern incarnation of that (eg. Andrew Tate).
22. MrScru+v1h[view] [source] [discussion] 2023-07-10 20:54:22
>>c_cran+Z52
Yes, but for people working past a certain level, the abstract problems usually involve people and technology, both of which you need to be able to reason about.