zlacker

[parent] [thread] 4 comments
1. ben_w+(OP)[view] [source] 2024-05-15 17:51:39
Every error is surreptitious to those who cannot predict the behaviour of a few billion matrix operations, which is most of us.

When people are not paying attention, they're just as dead whether it's Therac-25, the Thule Air Base early-warning radar, or an actual paperclipper.

replies(1): >>root_a+E
2. root_a+E[view] [source] 2024-05-15 17:54:41
>>ben_w+(OP)
No. Surreptitious means done with deliberate stealth to conceal your actions, not a miscalculation that results in a failure.
replies(1): >>ben_w+T3
3. ben_w+T3[view] [source] [discussion] 2024-05-15 18:12:17
>>root_a+E
To those who are dead, that's a distinction without a difference. So far as I'm aware, none of the actual "killer robots gone wrong" sci-fi starts with someone deliberately aiming to wipe themselves out; it's always a misspecification or an unintended consequence.

The fact that we don't know how to determine if there's a misspecification or an unintended consequence is the alignment problem.

replies(2): >>root_a+Rl >>8note+Ss
4. root_a+Rl[view] [source] [discussion] 2024-05-15 19:49:38
>>ben_w+T3
"Unintended consequences" has nothing to do with AI specifically; it's a problem endemic to every human system. The "alignment" problem has no meaning in today's AI landscape beyond whether or not your LLM will emit slurs.
5. 8note+Ss[view] [source] [discussion] 2024-05-15 20:26:08
>>ben_w+T3
To those who are dead, it doesn't matter if there was a human behind the wheel, or a matrix.