zlacker

[parent] [thread] 6 comments
1. ben_w+(OP)[view] [source] 2024-05-15 16:25:21
We've already had robots "run away": one into a water feature, another into a pedestrian pushing a bike. The phrase doesn't only mean getting paperclipped.

And for non-robotic AI, there have been flash crashes on the stock market, and that incident where Amazon book-pricing bots got caught in a reactive feedback loop and drove up the price of a book they didn't even have.

replies(1): >>root_a+A8
2. root_a+A8[view] [source] 2024-05-15 17:04:49
>>ben_w+(OP)
> the phrase doesn't only mean getting paperclipped.

This is what most people mean when they say "run away": the machine surreptitiously does things it was never designed to do. It doesn't mean a catastrophic failure that causes harm because the AI performed unreliably.

replies(1): >>ben_w+ei
3. ben_w+ei[view] [source] [discussion] 2024-05-15 17:51:39
>>root_a+A8
Every error is surreptitious to those who cannot predict the behaviour of a few billion matrix operations, which is most of us.

When people are not paying attention, they're just as dead whether it's the Therac-25, the Thule Air Force Base early-warning radar, or an actual paperclipper.

replies(1): >>root_a+Si
4. root_a+Si[view] [source] [discussion] 2024-05-15 17:54:41
>>ben_w+ei
No. "Surreptitious" means done with deliberate stealth to conceal your actions, not a miscalculation that results in a failure.
replies(1): >>ben_w+7m
5. ben_w+7m[view] [source] [discussion] 2024-05-15 18:12:17
>>root_a+Si
To those who are dead, that's a distinction without a difference. So far as I'm aware, no "killer robots gone wrong" sci-fi actually starts with someone deliberately aiming to wipe themselves out; it's always a misspecification or an unintended consequence.

The fact that we don't know how to determine whether there's a misspecification or an unintended consequence is the alignment problem.

replies(2): >>root_a+5E >>8note+6L
6. root_a+5E[view] [source] [discussion] 2024-05-15 19:49:38
>>ben_w+7m
"Unintended consequences" has nothing to do with AI specifically, it's a problem endemic to every human system. The "alignment" problem has no meaning in today's AI landscape beyond whether or not your LLM will emit slurs.
7. 8note+6L[view] [source] [discussion] 2024-05-15 20:26:08
>>ben_w+7m
To those who are dead, it doesn't matter if there was a human behind the wheel or a matrix.