zlacker

[parent] [thread] 26 comments
1. ganzuu+(OP)[view] [source] 2024-05-15 14:26:26
I bet superalignment is indistinguishable from religion (the spiritual, not the manipulative kind), so proponents get frequency-pulled into the well-established cult-leader pipeline. It's such a quagmire to navigate that we can't have discussions about what's going on that are both open and enlightening.
replies(4): >>uLogMi+p >>dontup+e4 >>marric+y6 >>adverb+pd
2. uLogMi+p[view] [source] 2024-05-15 14:28:22
>>ganzuu+(OP)
I thought the whole point of making a transparent organization to lead the charge on AI was so that we could prevent this sort of ego and the other risks that come with it.
replies(2): >>alephn+F2 >>ganzuu+R2
◧◩
3. alephn+F2[view] [source] [discussion] 2024-05-15 14:39:58
>>uLogMi+p
Nonprofits are not really that transparent, and they do bend to the will of donors, who themselves try to limit transparency.

That's why private foundations are more popular than public charities even though both are 501(c)(3) organizations: they don't need to provide transparency into their operations.

replies(1): >>uLogMi+n3
◧◩
4. ganzuu+R2[view] [source] [discussion] 2024-05-15 14:40:29
>>uLogMi+p
Say I have intelligence x and a superintelligence has 10x. I get stuck at local minima that the 10x can escape. To me, the local minimum looked "good", so when I see the 10x leave my "good", most likely I'm looking at something that appears "evil" to me, even if that's just my limited perspective.

It's one hell of a problem.
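
A toy sketch of what I mean, purely as illustration (my own numbers, not a claim about how any real intelligence works): a greedy searcher settles into the nearest basin, while a searcher that can look wider finds the deeper one.

  import random

  def f(x):
      # double-well: the left basin (x ~ -1.03) is deeper than the right (x ~ +0.96)
      return (x * x - 1) ** 2 + 0.3 * x

  def descend(x, lr=0.01, steps=5000):
      # plain gradient descent: f'(x) = 4x(x^2 - 1) + 0.3
      for _ in range(steps):
          x -= lr * (4 * x * (x * x - 1) + 0.3)
      return x

  greedy = descend(1.5)  # the 'x' searcher: stuck in the shallow basin
  wide = min((descend(random.uniform(-3, 3)) for _ in range(20)), key=f)
  print(f"greedy: x={greedy:+.2f}, f={f(greedy):+.3f}")
  print(f"wide:   x={wide:+.2f}, f={f(wide):+.3f}")

From inside the shallow basin, the wide searcher's move through "worse" territory looks like a mistake, which is the whole point.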

replies(3): >>buckle+Q5 >>esmeva+0a >>camgun+EV
◧◩◪
5. uLogMi+n3[view] [source] [discussion] 2024-05-15 14:43:04
>>alephn+F2
I was referencing the name and founding intentions more than the nonprofit status :)
6. dontup+e4[view] [source] 2024-05-15 14:46:53
>>ganzuu+(OP)
>Frequency-pulled

You mean like injection locking with oscillators? Or is this a new term from the tweetosphere?

replies(1): >>ganzuu+ha
◧◩◪
7. buckle+Q5[view] [source] [discussion] 2024-05-15 14:54:17
>>ganzuu+R2
Your words sound like the setup of a joke: Human: "How do we achieve peace for humans?" AI: "Eliminate all humans."
replies(1): >>ganzuu+y8
8. marric+y6[view] [source] 2024-05-15 14:58:11
>>ganzuu+(OP)
It's also about making sure AI is aligned with "our" intent, where "our" means a board made up of large corporations.

If AI did run away and do its own thing (which seems super unlikely), it's probably a crapshoot whether what it does is worse than the environmental apocalypse we live in, where the rich keep getting richer and the poor poorer.

replies(1): >>ben_w+Vc
◧◩◪◨
9. ganzuu+y8[view] [source] [discussion] 2024-05-15 15:06:52
>>buckle+Q5
Well I know I'm not good at explaining what I mean. Please do ask what I should clarify.
replies(1): >>buckle+Jq2
◧◩◪
10. esmeva+0a[view] [source] [discussion] 2024-05-15 15:12:28
>>ganzuu+R2
Short response:

I agree it's a problem but it isn't incumbent on the 'x' peers to solve it. The burden of that goes to any supposed '10x'.

Long version:

I agree with you, though I would add that a superintellect at '10x' that couldn't look at the 'x' baseline of those around it and navigate it effectively (in other words, couldn't organize its thoughts and present them in a safe or good-seeming way) is just plain never going to function at a '10x' level sustainably in an ecosystem full of normal 'x' peers.

I think this is the whole point of Stranger in a Strange Land. The Martian is (generally) not only completely ascendant, he's also incredibly effective at leveraging his ascendancy. Repeatedly, characters who find him abhorrent at a distance chill out as they begin to grok him.

The reality is that this is an ecosystem of normal 'x' peers, and the '10x', as the abnormality, needs to have "functional and effective in an ecosystem of 'x' peers" as part of its core skill set, or else none of us (not even the '10x' itself) can ever recognize or utilize its supposed '10x' capacity.

replies(1): >>ganzuu+He
◧◩
11. ganzuu+ha[view] [source] [discussion] 2024-05-15 15:13:51
>>dontup+e4
Injection locking, yes. This: https://www.youtube.com/watch?v=e-c6S6SdkPo

I mean that it hides nuance in conversation: weaker voices get pulled onto the dominant frequency.
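
For anyone curious about the physics behind the metaphor: frequency pulling falls out of Adler's equation. A minimal numerical sketch (parameter values are arbitrary, chosen just to show both regimes):

  import math

  # Adler's equation for the phase difference phi between a free-running
  # oscillator and an injected signal: dphi/dt = dw - K*sin(phi),
  # where dw is the detuning and K the locking range.
  def mean_freq_offset(dw, K, dt=1e-3, steps=200_000):
      phi = 0.0
      for _ in range(steps):
          phi += (dw - K * math.sin(phi)) * dt  # forward-Euler integration
      return phi / (steps * dt)  # average of dphi/dt over the run

  print(mean_freq_offset(dw=0.5, K=1.0))  # |dw| < K: locks, offset -> ~0
  print(mean_freq_offset(dw=2.0, K=1.0))  # |dw| > K: no lock, but pulled to ~1.73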

◧◩
12. ben_w+Vc[view] [source] [discussion] 2024-05-15 15:25:34
>>marric+y6
It can only be "super unlikely" for an AI to "run away and do its own thing" when we actually know how to align it.

Which we don't.

So we're not aligning it with corporate boards yet, though not for lack of trying.

(While LLMs are not directly agents, they are easy enough to turn into agents, and there are plenty of people willing to do that and disregard any concerns about the wisdom of it.)

So yes, the crapshoot is exactly what everyone in AI alignment is trying to prevent.

(There's also, confusingly, "AI safety", which includes alignment but also covers things like misuse, social responsibility, and so on.)

replies(1): >>root_a+hh
13. adverb+pd[view] [source] 2024-05-15 15:28:49
>>ganzuu+(OP)
I think you're on to something, but to me it has more to do with being part of the set of issues that intersect political policy and ethics. I see it as facing the same "discourse challenges" as:

abortion

animal cruelty laws/veganism/vegetarianism

affirmative action

climate change (denial)

These are legitimate issues, but it is also totally possible to just "ignore" them and pretend like they don't exist.

replies(1): >>ganzuu+7h
◧◩◪◨
14. ganzuu+He[view] [source] [discussion] 2024-05-15 15:35:05
>>esmeva+0a
That's what I meant, once you apply what happens in practice to the theory. It was a response to a comment about ego and cults, so I tried to be as diplomatic as I could... which just isn't sufficient. My entire premise is that this subject is something familiar and controversial in a new guise, so there are going to be a lot of knee-jerk reactions as soon as you bring up anything that looks like a pain point.

For reference, I think most of us are '10x' in some particular field, and that is our talent. Society-in-scarcity rewards talents unequally, so we get status and ego, resulting in a host of dark patterns. I think AI can ease scarcity, so I keep betting on this horse to solve the real problem, which is ego.

◧◩
15. ganzuu+7h[view] [source] [discussion] 2024-05-15 15:45:37
>>adverb+pd
This time we have a genie in a lamp that will not be ignored. That should mean a previously unknown variable is now set to "true", so discussion is more focused on reality.

However, the paranoid part of me says these crises and wars exist just so people can keep ignoring the truly difficult questions.

◧◩◪
16. root_a+hh[view] [source] [discussion] 2024-05-15 15:46:02
>>ben_w+Vc
"Run away" AI is total science fiction - i.e, not anything happening in the foreseeable future. That's simply not how these systems work. Any looming AI threat will be entirely the result of deliberate human actions.
replies(1): >>ben_w+sq
◧◩◪◨
17. ben_w+sq[view] [source] [discussion] 2024-05-15 16:25:21
>>root_a+hh
We've already had robots "run away" into a water feature in one case, and into a pedestrian pushing a bike in another; the phrase doesn't only mean getting paperclipped.

And for non-robotic AI there are also flash crashes on the stock market, and that case of Amazon book-pricing bots caught in a reactive cycle that drove up prices for a book they didn't have.
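
The Amazon case is worth spelling out, because the mechanism is so simple. A toy sketch of that feedback loop (multipliers loosely inspired by the reported "Making of a Fly" incident; illustrative only, not the actual sellers' algorithms):

  # Two listings repricing against each other once per cycle.
  a, b = 20.00, 25.00  # starting prices
  for day in range(30):
      a = round(b * 0.9983, 2)  # seller A undercuts seller B slightly
      b = round(a * 1.2706, 2)  # seller B prices above A, betting on reputation
      print(f"day {day:2d}: A=${a:>12,.2f}  B=${b:>12,.2f}")
  # 0.9983 * 1.2706 > 1, so each full cycle multiplies both prices by ~1.27:
  # exponential growth that neither seller intended.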

replies(1): >>root_a+2z
◧◩◪◨⬒
18. root_a+2z[view] [source] [discussion] 2024-05-15 17:04:49
>>ben_w+sq
> the phrase doesn't only mean getting paperclipped.

This is what most people mean when they say "run away", i.e., the machine behaves in a surreptitious way to do things it was never designed to do, not a catastrophic failure that causes harm because the AI did not perform reliably.

replies(1): >>ben_w+GI
◧◩◪◨⬒⬓
19. ben_w+GI[view] [source] [discussion] 2024-05-15 17:51:39
>>root_a+2z
Every error is surreptitious to those who cannot predict the behaviour of a few billion matrix operations, which is most of us.

When people are not paying attention, they're just as dead whether it's Therac-25, the Thule Air Force Base early-warning radar, or an actual paperclipper.

replies(1): >>root_a+kJ
◧◩◪◨⬒⬓⬔
20. root_a+kJ[view] [source] [discussion] 2024-05-15 17:54:41
>>ben_w+GI
No. Surreptitious means done with deliberate stealth to conceal your actions, not a miscalculation that results in a failure.
replies(1): >>ben_w+zM
◧◩◪◨⬒⬓⬔⧯
21. ben_w+zM[view] [source] [discussion] 2024-05-15 18:12:17
>>root_a+kJ
To those who are dead, that's a distinction without a difference. So far as I'm aware, none of the actual "killer robots gone wrong" sci-fi starts with someone deliberately aiming to wipe themselves out; it's always a misspecification or an unintended consequence.

The fact that we don't know how to determine whether there's a misspecification or an unintended consequence is the alignment problem.

replies(2): >>root_a+x41 >>8note+yb1
◧◩◪
22. camgun+EV[view] [source] [discussion] 2024-05-15 19:03:14
>>ganzuu+R2
To focus on something I don't think gets a lot of play:

> To me, the local minima looked "good"

AI's entire business [0] is generating high quality digital content for free, but we've never ever ever needed help "generating content". For millennia we've sung songs and told stories, and we were happy with the media the entire time. If we'd never invented TiVo we'd be completely happy with linear TV. If we'd never invented TV we'd be completely happy with the radio. If we'd never invented the CD we'd be completely happy with tapes. At every local minimum of media, humanity has been super satisfied. Even if it were a problem, it's nowhere near the top of the list. We don't need more AI-generated news articles, music, movies, photos, illustrations, websites, instant summaries of research papers, (very very bad) singing. No one's looking around saying, "God, there's just not enough pictures of fake waves crashing against a fake cliff". We need help with stuff like diseases and climate change. We need to figure out fusion, and it would be pretty cool if we could build the replicator (I am absolutely serious about the replicator). I remember a quote from long ago, someone saying something like, "it's lamentable that the greatest minds of my generation are focused 100% on getting more eyeballs on more ads". Well, here we are again (still?).

So why do we get wave after wave of companies doing this? Advances in this area are insanely popular and create instant dissatisfaction with the status quo. Suddenly radio is what your parents listened to, fast-forwarding a cassette is super tedious, not having instant access to every episode of every show feels deeply limiting, etc. There are tremendous profits to be had here.

You might be thinking, "here we go again, another 'capitalism just exploits humanity's bugs' rant", which of course I always have at the ready, but I want to make a different point here. For a while now the rich world has been _OK_. We reached an equilibrium where our agonies are almost purely aesthetic: "what kind of company do I want to work for", "what's the best air quality monitor", "should I buy a Framework on a lark and support a company doing something I believe in or do the obvious thing and buy an MBP", "how can I justify buying the biggest lawnmower possible", etc. Barring some big dips we've been here since the 80s, and now our culture just gasps from one "this changes everything" cigarette to the next. Is it Atari? Is it Capcom? Is it IMAX? Is it the Unreal Engine? Is it Instagram? Is it AI? Is it the Internet? Is it smartphones? Is it Web 2.0? Is it self-driving cars? Is it crypto? Is it the Metaverse and AR/VR headsets? I think those of us in the know wince whenever people make the leap from crypto to AI and say it's just the latest Silicon Valley scam--it's definitely not the same. But the truth in that comparison is that it is just the next fix: we the dealers and American culture the junkies, in a codependent catastrophe of trillions wasted when, like, HTML4 was absolutely fine. Flip phones, email, 1080p, all totally fine.

There is peace in realizing you have enough [1]. There is beauty and discovery in doing things that, sure, AI could do, but you can also do. There is joy in other humans. People listening to Hall & Oates on Walkmans while teaching kids Spanish were just as happy as you are (actually, probably a lot happier), and assuredly happier than you will be in a Wall-E future where 90% of your interactions are with an AI because no human wants to interact with any other human, and we've all decided we're too good to make food for each other or teach each other's kids algebra. It is miserable, the absolute definition of misery: in a mad craze to maximize our joy we have imprisoned ourselves in a joyless, desolate digital wasteland full of everything we can imagine, and nothing we actually want.

[0]: I'm sure there are infinite use cases people can come up with where AI isn't just generating a six-fingered girlfriend that tricks you into loving her and occasionally tells you how great you would look in adidas Sambas. These are all more cases where tech wants humanity to adapt to the thing it built (cf. self-driving cars) rather than building a thing useful to humanity now. A good example is language learning: we don't have enough language tutors, so we'll close the gap with AI. Except teaching is a beautiful, unique, enriching experience, and the only reason we don't have enough teachers is that we treat them like dirt. It would have been better to spend the billions we spent on AI on training more teachers and paying them more money. Etc. etc. etc.

[1]: https://www.themarginalian.org/2014/01/16/kurt-vonnegut-joe-...

replies(2): >>xarope+oY1 >>ganzuu+l92
◧◩◪◨⬒⬓⬔⧯▣
23. root_a+x41[view] [source] [discussion] 2024-05-15 19:49:38
>>ben_w+zM
"Unintended consequences" has nothing to do with AI specifically, it's a problem endemic to every human system. The "alignment" problem has no meaning in today's AI landscape beyond whether or not your LLM will emit slurs.
◧◩◪◨⬒⬓⬔⧯▣
24. 8note+yb1[view] [source] [discussion] 2024-05-15 20:26:08
>>ben_w+zM
To those who are dead, it doesn't matter whether there was a human behind the wheel or a matrix.
◧◩◪◨
25. xarope+oY1[view] [source] [discussion] 2024-05-16 03:37:54
>>camgun+EV
This is a great post.

I'd like to tack onto your mention of teaching: I have found that teaching really pushes me to understand the subject. It would be sad to lose access to "real" teachers if everything goes to AI.

◧◩◪◨
26. ganzuu+l92[view] [source] [discussion] 2024-05-16 06:08:08
>>camgun+EV
That is an interesting take on local minima.

Hopefully, AI will empower teachers to adapt better to the needs of their students.

◧◩◪◨⬒
27. buckle+Jq2[view] [source] [discussion] 2024-05-16 10:13:13
>>ganzuu+y8
I was just joking. I understand your point: a 10x smarter human or AI can find globally optimal solutions while the rest of us settle for locally optimal ones. Of course, the answer the 10x smarter party finds shouldn't harm the current interests of humanity.
[go to top]