zlacker

[return to "Introducing Superalignment"]
1. Chicag+m9[view] [source] 2023-07-05 17:40:08
>>tim_sw+(OP)
From a layman's perspective on cutting-edge AI, I can't help but be a bit turned off by some of the copy. It goes out of its way to use purposefully exuberant language to make the risks seem more significant, with the implication that the technology being worked on must therefore be very advanced. I'm trying to understand why it rubs me the wrong way here in particular, when, frankly, it's about the norm everywhere else (see Tesla with FSD, etc.).
2. goneho+gf[view] [source] 2023-07-05 17:58:33
>>Chicag+m9
The extinction risk from unaligned superintelligent AGI is real; it's just often dismissed (imo) because it's outside the window of risks that are acceptable and high-status to take seriously. People often have an initial knee-jerk negative reaction to it (for not-crazy reasons; lots of stuff is overhyped), but that doesn't make it wrong.

It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/

It's worth looking at the underlying arguments earnestly; you can approach them with initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have been worried about since as early as 2007 (maybe earlier?), so it's a case of a called shot, not a recent reaction to hype or new LLM capability.

Others have also changed their mind when they looked, for example:

- https://twitter.com/repligate/status/1676507258954416128?s=2...

- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...

For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...

3. jonath+tU[view] [source] 2023-07-05 20:49:31
>>goneho+gf
> The extinction risk from unaligned superintelligent AGI is real, it's just often dismissed (imo) because it's outside the window of risks that are acceptable and high status to take seriously.

No. It’s not taken seriously because it’s fundamentally unserious. It’s religion. Sometime in the near future, this all-powerful being will kill us all by somehow grabbing all power over the physical world, being so clever that it tricks us until it is too late. This is literally the plot of a B-movie. Not only is there no evidence that this will even exist in the near future, there’s no theoretical understanding of how one would even do this, nor of why someone would even hook it up to all these physical systems. I guess we’re supposed to just take it on faith that this Forbin Project is going to spontaneously hack its way into every system without anyone noticing.

It’s bullshit. It’s pure bullshit, funded and spread by the very people who do not want us to worry about the real implications of real systems today. Care not about your racist algorithms! For someday soon, a giant squid robot will turn you into a giant inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!

Anyone who takes this seriously is the exact same type of rube that has fallen for apocalyptic cults for millennia.

4. NoMore+z21[view] [source] 2023-07-05 21:26:34
>>jonath+tU
> This is literally the plot to a B-movie.

Are there never any B movies with realistic plots? Is that some sort of serious rebuttal?

> Sometime in the near future this all powerful being will kill us all by somehow

The trouble here is that the people who talk like you are simply incapable of imagining anyone more intelligent than themselves.

It's not that you have trouble imagining artificial intelligence... if you were incapable of that in the technology industry, everyone would just think you an imbecile.

And it's not that you have trouble imagining malevolent intelligences. Sure, they're far away from you, but the accounts of such people are well-documented and taken as a given. If you couldn't imagine them, people would just call you naive. Gullible even.

So, a malevolent artificial intelligence is just some potential or another you've never bothered to calculate because, whether that is a 0.01% risk or a 99% risk, you'll still be more intelligent than it. Hell, this isn't even a neutral outcome; maybe you'll get to play hero.

> Care not about your racist algorithms! For someday soon

Haha. That's what you're worried about? I don't know that there is such a thing as a racist algorithm, except those which run inside meat brains. Tell me why some double-digit percentage of Asians are not admitted to the top schools; that's the racist algorithm.

Maybe if logical systems seem racist, it's because your ideas about racism are distant from reality.

5. c_cran+n31[view] [source] 2023-07-05 21:30:04
>>NoMore+z21
I, and most people, can imagine something smarter than ourselves. What's harder to imagine is how merely being smarter translates into extinction-level, arbitrary power.

A malevolent AGI can whisper in ears, it can display mean messages, perhaps it can even twitch whatever physical components happen to be hooked up to old Windows 95 computers... not that scary.

6. ben_w+Kj1[view] [source] 2023-07-05 23:05:15
>>c_cran+n31
How many political or business leaders personally did the deeds, good or ill, that are attributed to them?

George Washington didn't personally fight off all the British single-handed, he and his co-conspirators used eloquence to convince people to follow them to freedom; Stalin didn't personally take food from the mouths of starving Ukrainians, he inspired fear that led to policies which had this effect; Musk didn't weld the seams of every Tesla or Falcon, nor dig tunnels or build TBMs for TBC, nor build the surgical robot that installed Neuralink chips, he convinced people his vision of the future was one worth the effort; and Indra Nooyi doesn't personally fill up all the world's Pepsi bottles, that's something I assume[0] is done with several layers of indirection via paying people to pay people to pay people to fill the bottles.

[0] I've not actually looked at the org chart because this is rhetorical and I don't care

7. c_cran+0U2[view] [source] 2023-07-06 12:05:42
>>ben_w+Kj1
The methods by which humans coerce and control other humans do not rely on plain intelligence alone. That much is clear, as George Washington and Stalin were not the smartest men in the room.
8. NoMore+x73[view] [source] 2023-07-06 13:25:00
>>c_cran+0U2
So this is down to your poor definition of intelligence?

For you, it's always the homework problems that your teacher assigned you in grade school; nothing else is intelligent. What to say to someone to have them be your friend on the playground? That never counted. Where and when to show up (or not), so that the asshole four grades above you didn't push you down into the mud? Not intelligence. What to wear, what to concentrate on about your appearance, how to speak, which friendships and romances to pursue, etc.? None of that either.

All just "animal cunning". The only real intelligence is how to work through calculus problem number three.

They were smart enough at these things that they did them without even consciously thinking about it. They were savants at it. I don't think the AI has to be a savant, though; it just has to be able to come up with the right answers and responses, and quickly enough that it can act on them.

9. c_cran+y83[view] [source] 2023-07-06 13:29:16
>>NoMore+x73
I don't define cunning and strength as intelligence, even if they are more useful for shoving someone into the mud. Intelligence is a measure of the ability to understand and solve abstract problems, not to be rich and famous.
10. ben_w+3g3[view] [source] 2023-07-06 14:02:04
>>c_cran+y83
Cunning absolutely should count as an aspect of intelligence.

If this is just a definitions issue, s/artificial intelligence/artificial cunning/g to the same effect.

Strength seems somewhat irrelevant either way, given the existence of Windows for Warships[0].

[0] not the real name: https://en.wikipedia.org/wiki/Submarine_Command_System

11. c_cran+ti3[view] [source] 2023-07-06 14:10:13
>>ben_w+3g3
Emotional intelligence is sometimes defined in a way that encapsulates some of the value of cunning. Sometimes it correlates with power, but sometimes it does not. Getting power in a human civilization also seems to require a great deal of luck, just due to the generally chaotic system that is the world, and a good deal of presence. The decisions that decide the fate of the world are made in smoke-filled back rooms, not exclusively over Zoom calls with an AI-generated face.