zlacker

[parent] [thread] 9 comments
1. ben_w+(OP)[view] [source] 2023-07-05 22:37:12
You're seeing what you want to see.

They're repeatedly very specific about the whole "this can kill all of us if we do it wrong", so it's more than a little churlish to parrot the name "ClosedAI" when they're announcing hiring researchers to figure out how to align AI with anyone, at all, even in principle.

replies(1): >>junon+Xc
2. junon+Xc[view] [source] 2023-07-05 23:56:15
>>ben_w+(OP)
I'm still a bit hung up on "it can kill all of us". How?
replies(3): >>ctoth+Rx >>ben_w+a71 >>gre345+vf1
3. ctoth+Rx[view] [source] [discussion] 2023-07-06 02:29:26
>>junon+Xc
Here's an article[0] and a good short story[1] explaining exactly this.

[0]: No Physical Substrate, No Problem https://slatestarcodex.com/2015/04/07/no-physical-substrate-...

[1]: It Looks Like You're Trying To Take Over The World https://gwern.net/fiction/clippy

replies(1): >>junon+j71
4. ben_w+a71[view] [source] [discussion] 2023-07-06 07:25:15
>>junon+Xc
Did you ever play the old "Pandemic" flash game? https://tvtropes.org/pmwiki/pmwiki.php/VideoGame/Pandemic

That the origin of COVID is even a question implies we have the tech to do it artificially. An AI today treating real life as that game would be self-destructive, but that doesn't mean it won't happen (reference classes: insanity, cancer).

If the AI can invent and order a von Neumann probe (inventing it is the hard part; ordering custom parts over the internet is already a thing) that it can upload itself to, then it can block out, and start disassembling, the sun in a matter of decades at reasonable-looking reproduction rates (though obviously we're guessing what "reasonable" looks like, as we have only organic von Neumann machines to frame the question against).

Or an AI taking over brain implants and turning them against everyone without one, like a zombie war (potentially Neuralink, depending on how secure the software is). That's also a plot device in the web fiction serial The Deathworlders; it's futuristic sci-fi, and you may not be OK with sci-fi as a way to explore hypotheticals, but I think it's the only way until we get moon-sized telescopes to watch such things play out on other worlds without going there. In that story, the same AI genocides multiple species over millions of years, as an excuse for why humans can even take part in the story's events.

replies(1): >>antifa+Az1
5. junon+j71[view] [source] [discussion] 2023-07-06 07:26:40
>>ctoth+Rx
The clippy example already starts out with many assumptions that simply aren't true today.

LLMs are not going to destroy humanity. We need a paradigm shift and a new model for AI for that to happen. ClosedAI is irresponsibly trying to create hype and mystery around their product, which always sells.

replies(1): >>ben_w+Jc1
6. ben_w+Jc1[view] [source] [discussion] 2023-07-06 08:14:16
>>junon+j71
Will you please stop calling them ClosedAI? That just comes across like a playground taunt, like "libtard" or "CONservative".
replies(1): >>junon+7l5
7. gre345+vf1[view] [source] [discussion] 2023-07-06 08:37:14
>>junon+Xc
It's like murder but scaled up.
8. antifa+Az1[view] [source] [discussion] 2023-07-06 11:25:04
>>ben_w+a71
To be realistic, I'm more worried about McDonald's or Musk doing all of that.
replies(1): >>ben_w+ph2
9. ben_w+ph2[view] [source] [discussion] 2023-07-06 14:54:54
>>antifa+Az1
They're an easy reference class, to be sure, though not the only one.

If I were framing the problem for an anti-capitalist audience, I'd ask something like:

Imagine the billionaire you hate the most. The worst of them. Now give them a highly competent sycophant that doesn't need to sleep, and will do even their most insane requests without question or remorse…

what can go wrong in this scenario?

10. junon+7l5[view] [source] [discussion] 2023-07-07 08:09:41
>>ben_w+Jc1
I'll call them what they are. Closed and antithetical to their original goals.