zlacker

[parent] [thread] 12 comments
1. voldac+(OP)[view] [source] 2023-07-05 22:15:29
>How do we ensure AI systems much smarter than humans follow human intent?

What is human intent? My intent may be very different from that of most humans. It seems like ClosedAI wants their system to follow the desires of some people and not others, but without describing which ones or why.

replies(3): >>ben_w+S3 >>rfergi+z4 >>aidenn+Ul
2. ben_w+S3[view] [source] 2023-07-05 22:37:12
>>voldac+(OP)
You're seeing what you want to see.

They're repeatedly very specific about the whole "this can kill all of us if we do it wrong", so it's more than a little churlish to parrot the name "ClosedAI" when they're announcing the hiring of a researcher to figure out how to align with anyone, at all, even in principle.

replies(1): >>junon+Pg
3. rfergi+z4[view] [source] 2023-07-05 22:41:03
>>voldac+(OP)
> It seems like ClosedAI wants their system to follow the desires of some people and not others, but without describing which ones or why

If the problem is "unaligned AI will destroy humanity," then I'd take a system aligned with the desires of some people but not others over the unaligned alternative.

◧◩
4. junon+Pg[view] [source] [discussion] 2023-07-05 23:56:15
>>ben_w+S3
I'm still a bit hung up on "it can kill all of us". How?
replies(3): >>ctoth+JB >>ben_w+2b1 >>gre345+nj1
5. aidenn+Ul[view] [source] 2023-07-06 00:32:36
>>voldac+(OP)
Sure, there are some humans who want to destroy all of humanity, but that seems to be twisting the obvious meaning of "human intent" in this context.

They have been very clear on "why." The goal is to prevent enslavement and/or extinction of the human race by a super-intelligent AI.

◧◩◪
6. ctoth+JB[view] [source] [discussion] 2023-07-06 02:29:26
>>junon+Pg
Here's an article[0] and a good short story[1] explaining exactly this.

[0]: No Physical Substrate, No Problem https://slatestarcodex.com/2015/04/07/no-physical-substrate-...

[1]: It Looks Like You're Trying To Take Over The World https://gwern.net/fiction/clippy

replies(1): >>junon+bb1
◧◩◪
7. ben_w+2b1[view] [source] [discussion] 2023-07-06 07:25:15
>>junon+Pg
Did you ever play the old "Pandemic" flash game? https://tvtropes.org/pmwiki/pmwiki.php/VideoGame/Pandemic

That the origin of COVID is even a question implies we have the tech to do it artificially. An AI today treating real life as that game would be self-destructive, but that doesn't mean it won't happen (reference classes: insanity, cancer).

If the AI can invent a von Neumann probe it can upload itself to, and order the parts (the inventing is the hard part; ordering custom parts over the internet is already a thing), then it can block out (and start disassembling) the sun within a matter of decades at reasonable-looking reproduction rates (though obviously we're guessing what "reasonable" looks like, since organic von Neumann machines are the only reference we have for framing the question).

Or an AI taking over brain implants (potentially Neuralink, depending on how secure the software is) and turning their users against everyone without them, like a zombie war. That's also a plot device in the web fiction serial The Deathworlders; you may not be OK with futuristic sci-fi as a way to explore hypotheticals, but I think it's the only way we have until we get moon-sized telescopes to watch such things play out on other worlds without going there. (In that story, the same AI genocides multiple species over millions of years, as an excuse for why humans can even take part in the events of the story.)

replies(1): >>antifa+sD1
◧◩◪◨
8. junon+bb1[view] [source] [discussion] 2023-07-06 07:26:40
>>ctoth+JB
The clippy example already starts out with many assumptions that simply aren't true today.

LLMs are not going to destroy humanity. We need a paradigm shift and a new model for AI for that to happen. ClosedAI is irresponsibly trying to create hype and mystery around their product, which always sells.

replies(1): >>ben_w+Bg1
◧◩◪◨⬒
9. ben_w+Bg1[view] [source] [discussion] 2023-07-06 08:14:16
>>junon+bb1
Will you please stop calling them ClosedAI? That just comes across like a playground taunt, like "libtard" or "CONservative".
replies(1): >>junon+Zo5
◧◩◪
10. gre345+nj1[view] [source] [discussion] 2023-07-06 08:37:14
>>junon+Pg
It's like murder but scaled up.
◧◩◪◨
11. antifa+sD1[view] [source] [discussion] 2023-07-06 11:25:04
>>ben_w+2b1
To be realistic, I'm more worried about McDonald's or Musk doing all of that.
replies(1): >>ben_w+hl2
◧◩◪◨⬒
12. ben_w+hl2[view] [source] [discussion] 2023-07-06 14:54:54
>>antifa+sD1
They're an easy reference class, to be sure, though not the only one.

If I'm framing the problem for an anti-capitalist audience, I'd ask something like:

Imagine the billionaire you hate the most. The worst of them. Now give them a highly competent sycophant that doesn't need to sleep, and will do even their most insane requests without question or remorse…

What can go wrong in this scenario?

◧◩◪◨⬒⬓
13. junon+Zo5[view] [source] [discussion] 2023-07-07 08:09:41
>>ben_w+Bg1
I'll call them what they are. Closed and antithetical to their original goals.