1. root_a+(OP) 2024-05-15 17:04:49
> the phrase doesn't only mean getting paperclipped.

This is what most people mean when they say "run away", i.e. the machine behaving surreptitiously to do things it was never designed to do, not a catastrophic failure that causes harm because the AI did not perform reliably.

replies(1): >>ben_w+E9
2. ben_w+E9 2024-05-15 17:51:39
>>root_a+(OP)
Every error is surreptitious to those who cannot predict the behaviour of a few billion matrix operations, which is most of us.

When people are not paying attention, they're just as dead whether it's Therac-25, the Thule Air Force Base early-warning radar, or an actual paperclipper.

replies(1): >>root_a+ia
3. root_a+ia 2024-05-15 17:54:41
>>ben_w+E9
No. Surreptitious means done with deliberate stealth to conceal your actions, not a miscalculation that results in a failure.
replies(1): >>ben_w+xd
4. ben_w+xd 2024-05-15 18:12:17
>>root_a+ia
To those who are dead, that's a distinction without a difference. So far as I'm aware, none of the actual "killer robots gone wrong" sci-fi starts with someone deliberately aiming to wipe themselves out; it's always a misspecification or an unintended consequence.

The fact that we don't know how to determine if there's a misspecification or an unintended consequence is the alignment problem.

replies(2): >>root_a+vv >>8note+wC
5. root_a+vv 2024-05-15 19:49:38
>>ben_w+xd
"Unintended consequences" has nothing to do with AI specifically, it's a problem endemic to every human system. The "alignment" problem has no meaning in today's AI landscape beyond whether or not your LLM will emit slurs.
6. 8note+wC 2024-05-15 20:26:08
>>ben_w+xd
To those who are dead, it doesn't matter whether there was a human behind the wheel or a matrix.