zlacker

[parent] [thread] 8 comments
1. ganzuu+(OP)[view] [source] 2024-05-15 14:40:29
Say I have intelligence x and a superintelligence has 10x: I get stuck at local minima that the 10x is able to get out of. To me, the local minimum looks "good", so if I see the 10x climb out of my "good", most likely I'm looking at something that appears "evil" to me, even if that's just my limited perspective.

It's one hell of a problem.
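
To make the picture concrete, here's a toy sketch (purely illustrative: the "badness" landscape and the simple hill-climbing search are just stand-ins I made up for "x" and "10x"):

    import random

    def badness(x):
        # Toy landscape: a shallow local minimum near x=1 and a deep global minimum near x=4.
        return min((x - 1) ** 2 + 1.0, (x - 4) ** 2)

    def hill_climb(start, step, iters=10_000, seed=0):
        # Greedy search: only ever accepts a move that immediately looks better.
        rng = random.Random(seed)
        x = start
        for _ in range(iters):
            candidate = x + rng.uniform(-step, step)
            if badness(candidate) < badness(x):
                x = candidate
        return x

    weak = hill_climb(start=0.0, step=0.5)    # "x": small steps, settles in the shallow basin near x=1
    strong = hill_climb(start=0.0, step=5.0)  # "10x": bigger reach, can jump the ridge and land near x=4

    print(f"'x'   searcher ends near x={weak:.2f}, badness={badness(weak):.2f}")
    print(f"'10x' searcher ends near x={strong:.2f}, badness={badness(strong):.2f}")

Every single step out of the shallow basin looks like a loss from inside it, so the small-step searcher never takes it; when something else does take that step, from where I'm standing it looks like deliberate harm.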

replies(3): >>buckle+Z2 >>esmeva+97 >>camgun+NS
2. buckle+Z2[view] [source] 2024-05-15 14:54:17
>>ganzuu+(OP)
Your words sound like something out of a joke. Human: How do we achieve peace for humanity? AI: Eliminate all humans.
replies(1): >>ganzuu+H5
3. ganzuu+H5[view] [source] [discussion] 2024-05-15 15:06:52
>>buckle+Z2
Well, I know I'm not good at explaining what I mean. Please do ask if there's anything I should clarify.
replies(1): >>buckle+Sn2
4. esmeva+97[view] [source] 2024-05-15 15:12:28
>>ganzuu+(OP)
Short response:

I agree it's a problem, but it isn't incumbent on the 'x' peers to solve it. That burden falls on any supposed '10x'.

Long version:

I agree with you, though I would add that a superintellect at '10x' that couldn't look at the 'x' baseline of those around it and navigate it effectively (in other words, couldn't organize its thoughts and present them in a safe- or good-seeming way) is just plain never going to function at a '10x' level sustainably in an ecosystem full of normal 'x' peers.

I think this is the whole point of Stranger in a Strange Land. The Martian is (generally) not only completely ascendant, he's also incredibly effective at leveraging his ascendancy. Repeatedly, characters who find him abhorrent at a distance chill out as they begin to grok him.

The reality is that this is an ecosystem of normal 'x' peers, and the '10x', as the abnormality, needs to have "functional and effective in an ecosystem of 'x' peers" as part of its core skill set, or else none of us (not even the '10x' itself) can ever recognize or utilize its supposed '10x' capacity.

replies(1): >>ganzuu+Qb
5. ganzuu+Qb[view] [source] [discussion] 2024-05-15 15:35:05
>>esmeva+97
That's what I meant, once you take the theory and look at what happens in practice. It was a response to a comment about ego and cults, so I tried to be as diplomatic as I could... which just isn't sufficient. My entire premise is that this subject is something familiar and controversial in a new guise, so there are going to be a lot of knee-jerk reactions as soon as you bring up something that looks like a pain point.

For reference, I think most of us are '10x' in some particular field, and that is our talent. A society in scarcity rewards talents unequally, so we get status and ego, resulting in a host of dark patterns. I think AI can ease scarcity, so I keep betting on this horse to solve the real problem, which is ego.

6. camgun+NS[view] [source] 2024-05-15 19:03:14
>>ganzuu+(OP)
To focus on something I don't think gets a lot of play:

> To me, the local minima looked "good"

AI's entire business [0] is generating high-quality digital content for free, but we've never ever ever needed help "generating content". For millennia we've sung songs and told stories, and we were happy with the media the entire time. If we'd never invented TiVo we'd be completely happy with linear TV. If we'd never invented TV we'd be completely happy with the radio. If we'd never invented the CD we'd be completely happy with tapes. At every local minimum of media, humanity has been super satisfied. Even if it were a problem, it's nowhere near the top of the list. We don't need more AI-generated news articles, music, movies, photos, illustrations, websites, instant summaries of research papers, (very very bad) singing. No one's looking around saying, "God, there's just not enough pictures of fake waves crashing against a fake cliff". We need help with stuff like diseases and climate change. We need to figure out fusion, and it would be pretty cool if we could build the replicator (I am absolutely serious about the replicator). I remember a quote from long ago, someone saying something like, "it's lamentable that the greatest minds of my generation are focused 100% on getting more eyeballs on more ads". Well, here we are again (still?).

So why do we get wave after wave of companies doing this? Advances in this area are insanely popular and create instant dissatisfaction with the status quo. Suddenly radio is what your parents listened to, fast-forwarding a cassette is super tedious, not having instant access to every episode of every show feels deeply limiting, etc. There are tremendous profits to be had here.

You might be thinking, "here we go again, another 'capitalism just exploits humanity's bugs' rant", which of course I always have at the ready, but I want to make a different point here. For a while now the rich world has been _OK_. We reached an equilibrium where our agonies are almost purely aesthetic: "what kind of company do I want to work for", "what's the best air quality monitor", "should I buy a Framework on a lark and support a company doing something I believe in or do the obvious thing and buy an MBP", "how can I justify buying the biggest lawnmower possible", etc. Barring some big dips we've been here since the 80s, and now our culture just gasps from one "this changes everything" cigarette to the next. Is it Atari? Is it Capcom? Is it IMAX? Is it the Unreal Engine? Is it Instagram? Is it AI? Is it the Internet? Is it smartphones? Is it Web 2.0? Is it self-driving cars? Is it crypto? Is it the Metaverse and AR/VR headsets? I think those of us in the know wince whenever people make the leap from crypto to AI and say it's just the latest Silicon Valley scam--it's definitely not the same. But the truth in that comparison is that it is just the next fix, we the dealers and American culture the junkies in a codependent catastrophe of trillions wasted when, like, HTML4 was absolutely fine. Flip phones, email, 1080p, all totally fine.

There is peace in realizing you have enough [1]. There is beauty and discovery in doing things that, sure, AI could do, but you can also do. There is joy in other humans. People listening to Hall & Oates on Walkmans while teaching kids Spanish were just as happy as you are (actually, probably a lot happier), and assuredly happier than you will be in a WALL-E future where 90% of your interactions are with an AI because no human wants to interact with any other human, and we've all decided we're too good to make food for each other or teach each other's kids algebra. It is miserable, the absolute definition of misery: in a mad craze to maximize our joy we have imprisoned ourselves in a joyless, desolate digital wasteland full of everything we can imagine, and nothing we actually want.

[0]: I'm sure there are infinite use cases people can come up with where AI isn't just generating a six-fingered girlfriend that tricks you into loving her and occasionally tells you how great you would look in adidas Sambas. These are all more cases where tech wants humanity to adapt to the thing it built (cf. self-driving cars) rather than build a thing useful to humanity now. A good example is language learning: we don't have enough language tutors, so we'll close the gap with AI. Except teaching is a beautiful, unique, enriching experience, and the only reason we don't have enough teachers is that we treat them like dirt. It would have been better to spend the billions we spent on AI on training more teachers and paying them more money. Etc. etc. etc.

[1]: https://www.themarginalian.org/2014/01/16/kurt-vonnegut-joe-...

replies(2): >>xarope+xV1 >>ganzuu+u62
7. xarope+xV1[view] [source] [discussion] 2024-05-16 03:37:54
>>camgun+NS
This is a great post.

I'd like to tack onto your mention of teaching. I have found that teaching really pushes me to understand the subject. It would be sad to lose the ability to have "real" teachers if everything goes to AI.

8. ganzuu+u62[view] [source] [discussion] 2024-05-16 06:08:08
>>camgun+NS
That is an interesting take on local minima.

Hopefully teachers will be empowered by AI to better adapt to the needs of their students.

9. buckle+Sn2[view] [source] [discussion] 2024-05-16 10:13:13
>>ganzuu+H5
I was just joking. I understand your point. A 10x smarter human or AI can find globally optimal solutions while still preserving the locally optimal ones. Of course, the answer found by the 10x smarter one should not harm the current interests of humanity.