
2 comments
1. ben_w+(OP) 2024-05-15 18:12:17
To those who are dead, that's a distinction without a difference. So far as I'm aware, no actual "killer robots gone wrong" sci-fi starts with someone deliberately aiming to wipe themselves out; it's always a misspecification or an unintended consequence.

The fact that we don't know how to determine if there's a misspecification or an unintended consequence is the alignment problem.

replies(2): >>root_a+Yh >>8note+Zo
2. root_a+Yh 2024-05-15 19:49:38
>>ben_w+(OP)
"Unintended consequences" has nothing to do with AI specifically, it's a problem endemic to every human system. The "alignment" problem has no meaning in today's AI landscape beyond whether or not your LLM will emit slurs.
3. 8note+Zo 2024-05-15 20:26:08
>>ben_w+(OP)
To those who are dead, it doesn't matter whether there was a human behind the wheel or a matrix.