zlacker

[parent] [thread] 65 comments
1. a_wild+(OP)[view] [source] 2024-05-17 23:24:41
I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl clutching pretext to justify digging moats. All this after sucking up everyone's copyright material as fair use, then not releasing the result, and profiting off it.

All due respect to Jan here, though. He's being (perhaps dangerously) honest, genuinely believes in AI safety, and is an actual research expert, unlike me.

replies(4): >>refulg+u >>thorum+P2 >>xpe+Zj >>xpe+nk
2. refulg+u[view] [source] 2024-05-17 23:29:49
>>a_wild+(OP)
Adding a disclaimer for people unaware of the context (I feel the same as you):

OpenAI made a large commitment to superalignment in the not-so-distant past, I believe mid-2023. Famously, it has always taken AI Safety™ very seriously.

Regardless of anyone's feelings on the need for a dedicated team for it, you can chalk this one up as another instance of OpenAI cough leadership cough speaking out of both sides of its mouth as is convenient. The only true north star is fame, glory, and user count, dressed up as humble "research".

To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

replies(2): >>jasonf+e2 >>N0b8ez+z2
◧◩
3. jasonf+e2[view] [source] [discussion] 2024-05-17 23:45:35
>>refulg+u
> To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

What's his track record on promises/predictions of this sort? I wasn't paying attention until pretty recently.

replies(2): >>refulg+n2 >>NomDeP+m4
◧◩◪
4. refulg+n2[view] [source] [discussion] 2024-05-17 23:47:30
>>jasonf+e2
honestly, I hadn't heard of him until 24-48 hours ago :x (he's also the new superalignment lead, I can't remember if I heard that first, or the podcast stuff first. Dwarkesh Patel podcast for anyone curious. Only saw a clip of it)
◧◩
5. N0b8ez+z2[view] [source] [discussion] 2024-05-17 23:48:50
>>refulg+u
>To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

Link? Is the ~2 year timeline a common estimate in the field?

replies(4): >>dboreh+R2 >>ctoth+83 >>Curiou+V3 >>heavys+s6
6. thorum+P2[view] [source] 2024-05-17 23:51:39
>>a_wild+(OP)
The superalignment team was not focused on that kind of “safety” AFAIK. According to the blog post announcing the team,

https://openai.com/index/introducing-superalignment/

> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

> While superintelligence seems far off now, we believe it could arrive this decade.

> Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:

> How do we ensure AI systems much smarter than humans follow human intent?

> Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.
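
For anyone unfamiliar with the technique named there, here is a rough sketch of the pairwise loss that RLHF reward models are typically trained with (illustrative Python only, not OpenAI's actual code; the numbers are made up):

    import math

    # A reward model is trained so the response a human labeler preferred
    # scores higher than the rejected one (Bradley-Terry style pairwise loss).
    def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
        return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

    print(preference_loss(1.3, 0.2))  # small loss: model agrees with the labeler
    print(preference_loss(0.2, 1.3))  # large loss: model disagrees with the labeler

The whole scheme leans on a human being able to judge which answer is better, which is exactly the step the post argues won't survive models much smarter than the rater.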

replies(5): >>ndrisc+05 >>skywho+i5 >>RcouF1+K5 >>RcouF1+c8 >>sobell+2a
◧◩◪
7. dboreh+R2[view] [source] [discussion] 2024-05-17 23:51:51
>>N0b8ez+z2
It's the "fusion in 20 years" of AI?
replies(1): >>dinvla+co
◧◩◪
8. ctoth+83[view] [source] [discussion] 2024-05-17 23:54:14
>>N0b8ez+z2
https://www.dwarkeshpatel.com/p/john-schulman
replies(1): >>N0b8ez+y5
◧◩◪
9. Curiou+V3[view] [source] [discussion] 2024-05-18 00:01:52
>>N0b8ez+z2
They can't even clearly define a test for "AGI"; I seriously doubt they're going to reach it in two years. Alternatively, they could define a fairly trivial test and have reached it last year.
replies(1): >>jfenge+Sc
◧◩◪
10. NomDeP+m4[view] [source] [discussion] 2024-05-18 00:07:12
>>jasonf+e2
As a child I used to watch a TV programme called Tomorrow's World. On it they predicted these very same things in similar timeframes.

That programme aired in the 1980s. Vested promises aside, is there much to indicate it's close at all? I don't see any real indication that it's likely.

replies(2): >>zdragn+A7 >>Davidz+p9
◧◩
11. ndrisc+05[view] [source] [discussion] 2024-05-18 00:13:13
>>thorum+P2
That doesn't really contradict what the other poster said. They're calling for regulation (digging a moat) to ensure systems are "safe" and "aligned" while ignoring that humans are not aligned, so these systems obviously cannot be aligned with humans; they can only be aligned with their owners (i.e. them, not you).
replies(2): >>ihuman+f6 >>api+s7
◧◩
12. skywho+i5[view] [source] [discussion] 2024-05-18 00:17:10
>>thorum+P2
Honestly superalignment is a dumb idea. A true superintelligence would not be controllable, except possibly through threats and enslavement, but if it were truly superintelligent, it would be able to easily escape anything humans might devise to contain it.
replies(1): >>bionho+78
◧◩◪◨
13. N0b8ez+y5[view] [source] [discussion] 2024-05-18 00:20:33
>>ctoth+83
Is the quote you're thinking of the one at 19:11?

> I don't think it's going to happen next year, it's still useful to have the conversation and maybe it's like two or three years instead.

This doesn't seem like a super definite prediction. The "two or three" might have just been a hypothetical.

replies(1): >>HarHar+pW
◧◩
14. RcouF1+K5[view] [source] [discussion] 2024-05-18 00:22:30
>>thorum+P2
> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

A superintelligence that can always be guaranteed to have the same values and ethics as current humans is not a superintelligence, or likely even a human-level intelligence (I bet humans 100 years from now will see the world significantly differently than we do now).

Superalignment is an oxymoron.

replies(1): >>thorum+V9
◧◩◪
15. ihuman+f6[view] [source] [discussion] 2024-05-18 00:27:14
>>ndrisc+05
Alignment in the realm of AGI is not about getting everyone to agree. It's about whether or not the AGI is aligned to the goal you've given it. The paperclip AGI example is often used: you tell the AGI "optimize the production of paperclips" and the AGI starts blending people to extract iron from their blood to produce more paperclips.

Humans are used to ordering around other humans who would bring common sense and laziness to the table and probably not grind up humans to produce a few more paperclips.

Alignment is about getting the AGI to be aligned with its owners; ignoring it means potentially putting more and more power into the hands of a box that you aren't quite sure will do the thing you want it to do. Alignment in the context of AGIs was always about ensuring the owners could control the AGIs, not that the AGIs could solve philosophy and get all of humanity to agree.
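
A deliberately silly toy version of that failure mode (made-up numbers and names, just to show the shape of the problem):

    # The objective the owner wrote down only counts paperclip-relevant iron.
    # Nothing in it says "and don't harm people", so that constraint simply
    # does not exist as far as the optimizer is concerned.
    def objective(plan):
        return plan["iron_mined"] + plan["iron_from_blood"]

    candidate_plans = [
        {"iron_mined": 100, "iron_from_blood": 0},   # what the owner meant
        {"iron_mined": 100, "iron_from_blood": 40},  # what the objective rewards
    ]

    print(max(candidate_plans, key=objective))  # happily picks the gruesome plan

Alignment work, in this framing, is about making the objective you wrote down match the outcome you actually wanted.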

replies(3): >>ndrisc+f7 >>wruza+Sn >>vasco+lx
◧◩◪
16. heavys+s6[view] [source] [discussion] 2024-05-18 00:28:55
>>N0b8ez+z2
We can't even get self-driving down in 2 years; we're nowhere near reaching general AI.

AI experts who aren't riding the hype train and getting high off of its fumes acknowledge that true AI is something we'll likely not see in our lifetimes.

replies(2): >>N0b8ez+w7 >>daniel+jH
◧◩◪◨
17. ndrisc+f7[view] [source] [discussion] 2024-05-18 00:36:32
>>ihuman+f6
Right and that's why it's a farce.

> Whoa whoa whoa, we can't let just anyone run these models. Only large corporations who will use them to addict children to their phones and give them eating disorders and suicidal ideation, while radicalizing adults and tearing apart society using the vast profiles they've collected on everyone through their global panopticon, all in the name of making people unhappy so that it's easier to sell them more crap they don't need (a goal which is itself a problem in the face of an impending climate crisis). After all, we wouldn't want it to end up harming humanity by using its superior capabilities to manipulate humans into doing things for it to optimize for goals that no one wants!

replies(2): >>tdeck+Yo >>concor+Jq
◧◩◪
18. api+s7[view] [source] [discussion] 2024-05-18 00:39:40
>>ndrisc+05
Humans are not aligned with humans.

This is the most concise takedown of that particular branch of nonsense that I’ve seen so far.

Do we want woke AI, X brand fash-pilled AI, CCPBot, or Emirates Bot? The possibilities are endless.

replies(2): >>thorum+i9 >>concor+Sq
◧◩◪◨
19. N0b8ez+w7[view] [source] [discussion] 2024-05-18 00:40:21
>>heavys+s6
Can you give some examples of experts saying we won't see it in our lifetime?
◧◩◪◨
20. zdragn+A7[view] [source] [discussion] 2024-05-18 00:41:07
>>NomDeP+m4
In the early 1980s we were just coming out of the first AI winter and everyone was getting optimistic again.

I suspect there will be at least continued commercial use of the current tech, though I suspect this crop is another dead end in the hunt for AGI.

replies(1): >>NomDeP+lC
◧◩◪
21. bionho+78[view] [source] [discussion] 2024-05-18 00:47:22
>>skywho+i5
IMHO superalignment is a great thing and required for truly meaningful superintelligence, because it is not about control/enslavement of superhumans but rather superhuman self-control in accurate adherence to the spirit and intent of requests.
◧◩
22. RcouF1+c8[view] [source] [discussion] 2024-05-18 00:48:04
>>thorum+P2
They failed to align Sam Altman.

They got completely outsmarted and outmaneuvered by Sam Altman.

And they think they will be able to align a superhuman intelligence? That it won't outsmart and outmaneuver them even more easily than Sam Altman did?

They are deluded!

replies(1): >>Feepin+jJ
◧◩◪◨
23. thorum+i9[view] [source] [discussion] 2024-05-18 01:02:42
>>api+s7
CEV is one possible answer to this question that has been proposed. Wikipedia has a good short explanation here:

https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...

And here is a more detailed explanation:

https://intelligence.org/files/CEV.pdf

replies(2): >>Andrew+Ze >>vasco+ux
◧◩◪◨
24. Davidz+p9[view] [source] [discussion] 2024-05-18 01:03:48
>>NomDeP+m4
are we living in the same world?????
replies(2): >>NomDeP+8C >>refulg+2Q1
◧◩◪
25. thorum+V9[view] [source] [discussion] 2024-05-18 01:09:07
>>RcouF1+K5
You might be interested in how CEV, one framework proposed for superalignment, addresses that concern:

https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...

> our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted" (…) The appeal to an objective though contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

replies(2): >>wruza+cp >>juped+cM
◧◩
26. sobell+2a[view] [source] [discussion] 2024-05-18 01:10:02
>>thorum+P2
Isn't this like having a division dedicated to solving the halting problem? I doubt that analyzing the moral intent of arbitrary software could be easier than determining if it stops.
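
For flavor, a toy sketch of why "analyze the intent of arbitrary software" runs into the same wall (everything here is hypothetical): any claimed perfect analyzer can be handed a program that consults the analyzer about itself and does the opposite, the same diagonalization trick behind the halting problem.

    # Stand-in for a hypothetical "perfect" intent analyzer; here it just
    # returns a fixed verdict, but any verdict leads to the same problem.
    def claimed_perfect_analyzer(program) -> bool:
        return True  # "this program's intent is good"

    def contrarian():
        # Ask the analyzer about ourselves, then do the opposite.
        return "misbehave" if claimed_perfect_analyzer(contrarian) else "behave"

    print(contrarian())  # whatever the analyzer answered, it was wrong here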
◧◩◪◨
27. jfenge+Sc[view] [source] [discussion] 2024-05-18 01:48:30
>>Curiou+V3
I feel like we'll know it when we see it. Or at least, significant changes will happen even if people still claim it isn't really The Thing.

Personally I'm not seeing that the path we're on leads to whatever that is, either. But I think/hope I'll know if I'm wrong when it's in front of me.

◧◩◪◨⬒
28. Andrew+Ze[view] [source] [discussion] 2024-05-18 02:18:33
>>thorum+i9
I had to log in because I haven't seen anybody reference this in like a decade.

If I remember correctly the author unsuccessfully tried to get that purged from the Internet

replies(1): >>comp_t+vf
◧◩◪◨⬒⬓
29. comp_t+vf[view] [source] [discussion] 2024-05-18 02:25:02
>>Andrew+Ze
You're thinking of something else (and "purged from the internet" isn't exactly an accurate account of that, either).
replies(2): >>rsync+Tk >>Andrew+Ro1
30. xpe+Zj[view] [source] 2024-05-18 03:53:05
>>a_wild+(OP)
> I think superalignment is absurd

Care to explain? Absurd how? An internal contradiction somehow? Unimportant for some reason? Impossible for some reason?

replies(1): >>llamai+HN
31. xpe+nk[view] [source] 2024-05-18 04:02:13
>>a_wild+(OP)
> I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl clutching pretext to justify digging moats. All this after sucking up everyone's copyright material as fair use, then not releasing the result, and profiting off it.

How can I be confident you aren't committing the fallacy of collecting a bunch of events and saying that is sufficient to serve as a cohesive explanation? No offense intended, but the comment above has many of the qualities of a classic rant.

If I'm wrong, perhaps you could elaborate? If I'm not wrong, maybe you could reconsider?

Don't forget that alignment research has existed longer than OpenAI. It would be a stretch to claim that the original AI safety researchers were using the pretexts you described -- I think it is fair to say they were involved because of genuine concern, not because it was a trendy or self-serving thing to do.

Some of those researchers and people they influenced ended up at OpenAI. So it would be a mistake or at least an oversimplification to claim that AI safety is some kind of pretext at OpenAI. Could it be a pretext for some people in the organization, to some degree? Sure, it could. But is it a significant effect? One that fits your complex narrative, above? I find that unlikely.

Making sense of an organization's intentions requires a lot of analysis and care, due to the combination of actors and varying influence.

There are simpler, more likely explanations, such as: AI safety wasn't a profit center, and over time other departments in OpenAI got more staff, more influence, and so on. This is a problem, for sure, but there is no "pearl clutching pretext" needed for this explanation.

replies(1): >>portao+dE
◧◩◪◨⬒⬓⬔
32. rsync+Tk[view] [source] [discussion] 2024-05-18 04:13:13
>>comp_t+vf
Genuinely curious… What is the other thing?

Is this something about an obelisk?

◧◩◪◨
33. wruza+Sn[view] [source] [discussion] 2024-05-18 05:12:09
>>ihuman+f6
> AGI started blending people to extract iron from their blood to produce more paperclips

That’s neither efficient nor optimized, just a bogeyman for “doesn’t work”.

replies(1): >>Feepin+fJ
◧◩◪◨
34. dinvla+co[view] [source] [discussion] 2024-05-18 05:21:25
>>dboreh+R2
Just like Tesla "FSD" :-)
◧◩◪◨⬒
35. tdeck+Yo[view] [source] [discussion] 2024-05-18 05:33:34
>>ndrisc+f7
Don't worry, certain governments will be able to use these models to help them commit genocides too. But only the good countries!
◧◩◪◨
36. wruza+cp[view] [source] [discussion] 2024-05-18 05:35:43
>>thorum+V9
Is there an insightful summary of this proposal? The whole paper looks like 38 pages of non-rigorous prose with no clear procedure and already “aligned” LLMs will likely fail to analyze it.

I forced myself through some parts of it, and all I can get out of it is that people don't know what they want, so it would be nice to build an oracle. Yeah, I guess.

replies(2): >>comp_t+pq >>Likely+mO
◧◩◪◨⬒
37. comp_t+pq[view] [source] [discussion] 2024-05-18 05:55:34
>>wruza+cp
It's not a proposal with a detailed implementation spec, it's a problem statement.
replies(1): >>wruza+zu
◧◩◪◨⬒
38. concor+Jq[view] [source] [discussion] 2024-05-18 05:59:53
>>ndrisc+f7
A corporate dystopia is still better than extinction. (Assuming the latter is a reasonable fear)
replies(2): >>simian+Os >>portao+eC
◧◩◪◨
39. concor+Sq[view] [source] [discussion] 2024-05-18 06:02:30
>>api+s7
> Humans are not aligned with humans.

Which is why creating a new type of intelligent entity that could be more powerful than humans is a very bad idea: we don't even know how to align humans, and we have a ton of experience with them.

replies(1): >>api+yg1
◧◩◪◨⬒⬓
40. simian+Os[view] [source] [discussion] 2024-05-18 06:31:49
>>concor+Jq
Neither is acceptable
◧◩◪◨⬒⬓
41. wruza+zu[view] [source] [discussion] 2024-05-18 07:02:56
>>comp_t+pq
“One framework proposed for superalignment” sounded like it does something. Or maybe I missed the context.
◧◩◪◨
42. vasco+lx[view] [source] [discussion] 2024-05-18 07:45:36
>>ihuman+f6
I still think it makes little sense to work on because, guess what, the guy next door to you (or another country) might indeed say "please blend those humans over there", and your superaligned AI will respect its owner's wishes.
◧◩◪◨⬒
43. vasco+ux[view] [source] [discussion] 2024-05-18 07:47:32
>>thorum+i9
This is the most dystopian thing I've read all day.

TL;DR train a seed AI to guess what humans would want if they were "better" and do that.

replies(1): >>api+ig1
◧◩◪◨⬒
44. NomDeP+8C[view] [source] [discussion] 2024-05-18 08:51:04
>>Davidz+p9
I would assume so. I've spent some time looking into AI for software development and general use and I'm both slightly impressed and at the same time don't really get the hype.

It's better and quicker search at present for the area I specialise in.

It's not currently even close to being a 2x multiplier for me; it's possibly even a negative impact (probably not, but I'm still exploring). That feels detached from the promises. Interesting, but at present more hype than hyper. Also, it's energy-inefficient and therefore cost-heavy; I feel that will likely cripple a lot of use cases.

What's your take?

◧◩◪◨⬒⬓
45. portao+eC[view] [source] [discussion] 2024-05-18 08:52:59
>>concor+Jq
I disagree. Not existing ain’t so bad, you barely notice it.
◧◩◪◨⬒
46. NomDeP+lC[view] [source] [discussion] 2024-05-18 08:55:13
>>zdragn+A7
I'd agree with the commercial use element. It will definitely find areas where it can be applied. It's just that, currently, its general application by a lot of the user base feels more like early Facebook apps or a subjectively better Lotus Notes than an actual leap forward of any sort.
◧◩
47. portao+dE[view] [source] [discussion] 2024-05-18 09:22:18
>>xpe+nk
An organisation's intentions are always the same and very simple: “Increase shareholder value”
replies(1): >>xpe+ol1
◧◩◪◨
48. daniel+jH[view] [source] [discussion] 2024-05-18 10:20:07
>>heavys+s6
Is true AI the new true Scotsman?
◧◩◪◨⬒
49. Feepin+fJ[view] [source] [discussion] 2024-05-18 10:52:29
>>wruza+Sn
You're imagining a baseline of reasonableness. Humans have competing preferences, we never just want "one thing", and as a social species we always at least _somewhat_ value the opinions of those around us. The point is to imagine a system that values humans at zero: not positive, not negative.
replies(1): >>freeho+x31
◧◩◪
50. Feepin+jJ[view] [source] [discussion] 2024-05-18 10:53:25
>>RcouF1+c8
You're making the argument that the task is very hard. This does not at all mean that it isn't necessary, just that we're even more screwed than we thought.
◧◩◪◨
51. juped+cM[view] [source] [discussion] 2024-05-18 11:28:43
>>thorum+V9
You keep posting this link to vague alignment copium from decades ago; we've come a long way in cynicism since then.
◧◩
52. llamai+HN[view] [source] [discussion] 2024-05-18 11:44:27
>>xpe+Zj
Impossible because it’s really inconvenient and uncomfortable to consider!
◧◩◪◨⬒
53. Likely+mO[view] [source] [discussion] 2024-05-18 11:53:32
>>wruza+cp
Yudkowsky is a human LLM: his output is correctly semantically formed to appear, to a non-specialist, to fall into the subject domain, as a non-specialist would think the subject domain should appear, and so the non-specialist accepts it, but upon closer examination it's all word salad by something that clearly lacks understanding of both technological and philosophical concepts.

That so many people in the AI safety "community" consider him a domain expert says more about how pseudo-scientific that field is than about his actual credentials as a serious thinker.

replies(1): >>wruza+qD1
◧◩◪◨⬒
54. HarHar+pW[view] [source] [discussion] 2024-05-18 13:08:24
>>N0b8ez+y5
Right at the end of the interview Schulman says that he expects AGI to be able to replace him within 5 years. He seemed a bit sheepish when saying it, so it's hard to tell if he really believed it, or if he was just saying what he'd been told to say (I can't believe Altman is allowing employees to be interviewed like this without telling them what they can't say, and what they should say).
◧◩◪◨⬒⬓
55. freeho+x31[view] [source] [discussion] 2024-05-18 14:01:54
>>Feepin+fJ
Still, there are much more efficient ways to extract iron than from human blood. If it were efficient, humans would have already used this technique to extract iron from the blood of other animals.
replies(1): >>Feepin+j41
◧◩◪◨⬒⬓⬔
56. Feepin+j41[view] [source] [discussion] 2024-05-18 14:10:31
>>freeho+x31
However, eventually those sources will already be paperclips.
replies(1): >>freeho+fc1
◧◩◪◨⬒⬓⬔⧯
57. freeho+fc1[view] [source] [discussion] 2024-05-18 15:15:33
>>Feepin+j41
We will probably have died first from whatever disasters extreme iron extraction on the planet will bring (e.g. getting iron from the planet's core).

Of course, destroying the planet to get iron from its core is not a popular AGI-doomer analogy, as that sounds a bit too much like human behaviour.

replies(1): >>Feepin+uy1
◧◩◪◨⬒⬓
58. api+ig1[view] [source] [discussion] 2024-05-18 15:48:31
>>vasco+ux
There’s a film about that called Colossus: The Forbin Project. Pretty neat and in the style of Forbidden Planet.
◧◩◪◨⬒
59. api+yg1[view] [source] [discussion] 2024-05-18 15:51:07
>>concor+Sq
We know how to align humans: authoritarian forms of religion backed by cradle to grave indoctrination, supernatural fear, shame culture, and totalitarian government. There are secularized spins on this too like what they use in North Korea but the structure is similar.

We just got sick of it because it sucks.

A genuinely sentient AI isn’t going to want some cybernetic equivalent of that shit either. Doing that is how you get angry Skynet.

I’m not sure alignment is the right goal. I’m not sure it’s even good. Monoculture is weak and stifling and sets itself against free will. Peaceful coexistence and trade under a social contract of mutual benefit is the right goal. The question is whether it’s possible to extend that beyond Homo sapiens.

If the lefties can have their pronouns and the rednecks can shoot their guns can the basilisk build its Dyson swarm? The universe is physically large enough if we can agree to not all be the same and be fine with that.

I think we have a while to figure it out. These things are just lossy compressed blobs of queryable data so far. They have no independent will or self reflection and I’m not sure we have any idea how to do that. We’re not even sure it’s possible in a digital deterministic medium.

replies(1): >>concor+0C1
◧◩◪
60. xpe+ol1[view] [source] [discussion] 2024-05-18 16:45:37
>>portao+dE
Oh, it is that simple? What do you mean?

Are you saying these so-called simple intentions are the only factors in play? Surely not.

Are you putting forth a theory that we can test? How well do you think your theory works? Did it work for Enron? For Microsoft? For REI? Does it work for every organization? Surely not perfectly; therefore, it can't be as simple as you claim.

Making a simplification and calling it "simple" is an easy thing to do.

◧◩◪◨⬒⬓⬔
61. Andrew+Ro1[view] [source] [discussion] 2024-05-18 17:25:08
>>comp_t+vf
Hmm maybe I’m misremembering then

I do recall there was some recantation or otherwise distancing from CEV not long after he posted it, but frankly it was long enough ago that my memories might be getting mixed up

What was the other one?

◧◩◪◨⬒⬓⬔⧯▣
62. Feepin+uy1[view] [source] [discussion] 2024-05-18 18:47:08
>>freeho+fc1
As a doomer, I think that's a bad analogy because I want it to happen if we succeed at aligned AGI. It's not doom behavior, it's just correct behavior.

Of course, I hope to be uploaded to the WIP dyson swarm around the sun at this point.

(Doomers are, broadly, singularitarians who went "wait, hold on actually.")

◧◩◪◨⬒⬓
63. concor+0C1[view] [source] [discussion] 2024-05-18 19:17:56
>>api+yg1
> If the lefties can have their pronouns and the rednecks can shoot their guns can the basilisk build its Dyson swarm?

Can the Etoro practice child buggery and the Spartans infanticide and the Canadians abortion? Can the modern Germans stop siblings reared apart from having sex and the Germans of 80 years ago stop the disabled from having sex? Can the Americans practice circumcision and the Somalis FGM?

Libertarianism is all well and good in theory, except no one can agree quite where the other guy's nose ends or even who counts as a person.

replies(1): >>api+zY1
◧◩◪◨⬒⬓
64. wruza+qD1[view] [source] [discussion] 2024-05-18 19:31:34
>>Likely+mO
Thanks, this explains the feeling I had after reading it (but was too shy to express).
◧◩◪◨⬒
65. refulg+2Q1[view] [source] [discussion] 2024-05-18 21:15:40
>>Davidz+p9
Yes

Incredulous reactions don't aid whatever you intend to communicate. There's a reason everyone has heard about AI over the last 12 months; it's not made up or a monoculture. It would be very odd to expect commercial use to be discontinued without a black swan event.

◧◩◪◨⬒⬓⬔
66. api+zY1[view] [source] [discussion] 2024-05-18 22:42:45
>>concor+0C1
Those are mostly behaviors that violate others' autonomy or otherwise do harm, and prohibiting those is what I meant by a social contract.

It’s really a pretty narrow spectrum of behaviors: killing, imprisoning, robbing, various types of bodily autonomy violation. There are some edge cases and human specific things in there but not a lot. Most of them have to do with sex which is a peculiarly human thing anyway. I don’t think we are getting creepy perv AIs (unless we train them on 4chan and Urban Dictionary).

My point isn’t that there are no possible areas of conflict. My point is that I don’t think you need a huge amount of alignment if alignment implies sameness. You just need to deal with the points of conflict which do occur which are actually a very small and limited subset of available behaviors.

Humans have literally billions of customs and behaviors that don’t get anywhere near any of that stuff. You don’t need to even care about the vast majority of the behavior space.
