zlacker

[parent] [thread] 25 comments
1. ndrisc+(OP)[view] [source] 2024-05-18 00:13:13
That doesn't really contradict what the other poster said. They're calling for regulation (digging a moat) to ensure systems are "safe" and "aligned" while ignoring that humans are not aligned, so these systems obviously cannot be aligned with humans; they can only be aligned with their owners (i.e. them, not you).
replies(2): >>ihuman+f1 >>api+s2
2. ihuman+f1[view] [source] 2024-05-18 00:27:14
>>ndrisc+(OP)
Alignment in the realm of AGI is not about getting everyone to agree. It's about whether or not the AGI is aligned to the goal you've given it. The paperclip AGI example is often used: you tell the AGI "Optimize the production of paperclips" and the AGI starts blending people to extract iron from their blood to produce more paperclips.

Humans are used to ordering around other humans who would bring common sense and laziness to the table and probably not grind up humans to produce a few more paperclips.

Alignment is about getting the AGI to be aligned with its owners; ignoring it means potentially putting more and more power into the hands of a box that you aren't quite sure is going to do the thing you want it to do. Alignment in the context of AGIs was always about ensuring the owners could control the AGIs, not that the AGIs could solve philosophy and get all of humanity to agree.

replies(3): >>ndrisc+f2 >>wruza+Si >>vasco+ls
◧◩
3. ndrisc+f2[view] [source] [discussion] 2024-05-18 00:36:32
>>ihuman+f1
Right, and that's why it's a farce.

> Whoa whoa whoa, we can't let just anyone run these models. Only large corporations who will use them to addict children to their phones and give them eating disorders and suicidal ideation, while radicalizing adults and tearing apart society using the vast profiles they've collected on everyone through their global panopticon, all in the name of making people unhappy so that it's easier to sell them more crap they don't need (a goal which is itself a problem in the face of an impending climate crisis). After all, we wouldn't want it to end up harming humanity by using its superior capabilities to manipulate humans into doing things for it to optimize for goals that no one wants!

replies(2): >>tdeck+Yj >>concor+Jl
4. api+s2[view] [source] 2024-05-18 00:39:40
>>ndrisc+(OP)
Humans are not aligned with humans.

This is the most concise takedown of that particular branch of nonsense that I’ve seen so far.

Do we want woke AI, X brand fash-pilled AI, CCPBot, or Emirates Bot? The possibilities are endless.

replies(2): >>thorum+i4 >>concor+Sl
◧◩
5. thorum+i4[view] [source] [discussion] 2024-05-18 01:02:42
>>api+s2
CEV (coherent extrapolated volition) is one proposed answer to this question. Wikipedia has a good short explanation here:

https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...

And here is a more detailed explanation:

https://intelligence.org/files/CEV.pdf

replies(2): >>Andrew+Z9 >>vasco+us
◧◩◪
6. Andrew+Z9[view] [source] [discussion] 2024-05-18 02:18:33
>>thorum+i4
I had to log in because I haven’t seen anybody reference this in like a decade.

If I remember correctly, the author unsuccessfully tried to get that purged from the Internet.

replies(1): >>comp_t+va
◧◩◪◨
7. comp_t+va[view] [source] [discussion] 2024-05-18 02:25:02
>>Andrew+Z9
You're thinking of something else (and "purged from the internet" isn't exactly an accurate account of that, either).
replies(2): >>rsync+Tf >>Andrew+Rj1
◧◩◪◨⬒
8. rsync+Tf[view] [source] [discussion] 2024-05-18 04:13:13
>>comp_t+va
Genuinely curious… What is the other thing?

Is this something about an obelisk?

◧◩
9. wruza+Si[view] [source] [discussion] 2024-05-18 05:12:09
>>ihuman+f1
> AGI starts blending people to extract iron from their blood to produce more paperclips

That’s neither efficient nor optimized, just a bogeyman for “doesn’t work”.

replies(1): >>Feepin+fE
◧◩◪
10. tdeck+Yj[view] [source] [discussion] 2024-05-18 05:33:34
>>ndrisc+f2
Don't worry, certain governments will be able to use these models to help them commit genocides too. But only the good countries!
◧◩◪
11. concor+Jl[view] [source] [discussion] 2024-05-18 05:59:53
>>ndrisc+f2
A corporate dystopia is still better than extinction. (Assuming the latter is a reasonable fear.)
replies(2): >>simian+On >>portao+ex
◧◩
12. concor+Sl[view] [source] [discussion] 2024-05-18 06:02:30
>>api+s2
> Humans are not aligned with humans.

Which is why creating a new type of intelligent entity that could be more powerful than humans is a very bad idea: we don't even know how to align the humans, and we have a ton of experience with them.

replies(1): >>api+yb1
◧◩◪◨
13. simian+On[view] [source] [discussion] 2024-05-18 06:31:49
>>concor+Jl
Neither is acceptable
◧◩
14. vasco+ls[view] [source] [discussion] 2024-05-18 07:45:36
>>ihuman+f1
I still think it makes little sense to work on because, guess what, the guy next door to you (or another country) might indeed say "please blend those humans over there", and your superaligned AI will respect its owner's wishes.
◧◩◪
15. vasco+us[view] [source] [discussion] 2024-05-18 07:47:32
>>thorum+i4
This is the most dystopian thing I've read all day.

TL;DR: train a seed AI to guess what humans would want if they were "better" and do that.

replies(1): >>api+ib1
◧◩◪◨
16. portao+ex[view] [source] [discussion] 2024-05-18 08:52:59
>>concor+Jl
I disagree. Not existing ain’t so bad, you barely notice it.
◧◩◪
17. Feepin+fE[view] [source] [discussion] 2024-05-18 10:52:29
>>wruza+Si
You're imagining a baseline of reasonableness. Humans have competing preferences, we never just want "one thing", and as a social species we always at least _somewhat_ value the opinions of those around us. The point is to imagine a system that values humans at zero: not positive, not negative.
replies(1): >>freeho+xY
◧◩◪◨
18. freeho+xY[view] [source] [discussion] 2024-05-18 14:01:54
>>Feepin+fE
Still, there are much more efficient ways to extract iron than from human blood. If it were efficient, humans would already have used this technique to extract iron from the blood of other animals.
replies(1): >>Feepin+jZ
◧◩◪◨⬒
19. Feepin+jZ[view] [source] [discussion] 2024-05-18 14:10:31
>>freeho+xY
However, eventually those sources will already be paperclips.
replies(1): >>freeho+f71
◧◩◪◨⬒⬓
20. freeho+f71[view] [source] [discussion] 2024-05-18 15:15:33
>>Feepin+jZ
We will probably have died first from whatever disasters extreme iron extraction on the planet brings (e.g. getting iron from the planet's core).

Of course, destroying the planet to get iron from its core is not a popular AGI-doomer analogy, as that sounds a bit too much like human behaviour.

replies(1): >>Feepin+ut1
◧◩◪◨
21. api+ib1[view] [source] [discussion] 2024-05-18 15:48:31
>>vasco+us
There’s a film about that called Colossus: The Forbin Project. Pretty neat and in the style of Forbidden Planet.
◧◩◪
22. api+yb1[view] [source] [discussion] 2024-05-18 15:51:07
>>concor+Sl
We know how to align humans: authoritarian forms of religion backed by cradle-to-grave indoctrination, supernatural fear, shame culture, and totalitarian government. There are secularized spins on this too, like what they use in North Korea, but the structure is similar.

We just got sick of it because it sucks.

A genuinely sentient AI isn’t going to want some cybernetic equivalent of that shit either. Doing that is how you get angry Skynet.

I’m not sure alignment is the right goal. I’m not sure it’s even good. Monoculture is weak and stifling and sets itself against free will. Peaceful coexistence and trade under a social contract of mutual benefit is the right goal. The question is whether it’s possible to extend that beyond Homo sapiens.

If the lefties can have their pronouns and the rednecks can shoot their guns can the basilisk build its Dyson swarm? The universe is physically large enough if we can agree to not all be the same and be fine with that.

I think we have a while to figure it out. These things are just lossy compressed blobs of queryable data so far. They have no independent will or self-reflection, and I’m not sure we have any idea how to do that. We’re not even sure it’s possible in a digital deterministic medium.

replies(1): >>concor+0x1
◧◩◪◨⬒
23. Andrew+Rj1[view] [source] [discussion] 2024-05-18 17:25:08
>>comp_t+va
Hmm, maybe I’m misremembering then.

I do recall there was some recantation of, or distancing from, CEV not long after he posted it, but frankly it was long enough ago that my memories might be getting mixed up.

What was the other one?

◧◩◪◨⬒⬓⬔
24. Feepin+ut1[view] [source] [discussion] 2024-05-18 18:47:08
>>freeho+f71
As a doomer, I think that's a bad analogy because I want it to happen if we succeed at aligned AGI. It's not doom behavior; it's just correct behavior.

Of course, I hope to be uploaded to the WIP Dyson swarm around the sun at this point.

(Doomers are, broadly, singularitarians who went "wait, hold on actually.")

◧◩◪◨
25. concor+0x1[view] [source] [discussion] 2024-05-18 19:17:56
>>api+yb1
> If the lefties can have their pronouns and the rednecks can shoot their guns can the basilisk build its Dyson swarm?

Can the Etoro practice child buggery, and the Spartans infanticide, and the Canadians abortion? Can the modern Germans stop siblings reared apart from having sex, and the Germans from 80 years ago stop the disabled from having sex? Can the Americans practice circumcision and the Somalis FGM?

Libertarianism is all well and good in theory, except no one can agree quite where the other guy's nose ends or even who counts as a person.

replies(1): >>api+zT1
◧◩◪◨⬒
26. api+zT1[view] [source] [discussion] 2024-05-18 22:42:45
>>concor+0x1
Those are mostly behaviors that violate others' autonomy or otherwise do harm, and prohibiting those is what I meant by a social contract.

It’s really a pretty narrow spectrum of behaviors: killing, imprisoning, robbing, and various types of bodily-autonomy violation. There are some edge cases and human-specific things in there, but not a lot. Most of them have to do with sex, which is a peculiarly human thing anyway. I don’t think we are getting creepy perv AIs (unless we train them on 4chan and Urban Dictionary).

My point isn’t that there are no possible areas of conflict. My point is that I don’t think you need a huge amount of alignment if alignment implies sameness. You just need to deal with the points of conflict that do occur, which are actually a very small and limited subset of available behaviors.

Humans have literally billions of customs and behaviors that don’t get anywhere near any of that stuff. You don’t even need to care about the vast majority of the behavior space.
