zlacker

[parent] [thread] 7 comments
1. root_a+(OP)[view] [source] 2024-05-15 15:46:02
"Run away" AI is total science fiction, i.e., not anything happening in the foreseeable future. That's simply not how these systems work. Any looming AI threat will be entirely the result of deliberate human actions.
replies(1): >>ben_w+b9
2. ben_w+b9[view] [source] 2024-05-15 16:25:21
>>root_a+(OP)
We've already had robots "run away" into a water feature in one case and a pedestrian pushing a bike in another; the phrase doesn't only mean getting paperclipped.

And for non-robotic AI, there have also been flash crashes on the stock market, and that incident with Amazon book-pricing bots caught in a reactive cycle that drove up the price of a book they didn't even have.
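That pricing-bot spiral is easy to reproduce in a toy simulation: two bots each set their price as a fixed multiple of the other's, and whenever the product of the two multipliers exceeds 1, prices diverge without bound. A minimal sketch, with illustrative numbers (the 0.9983/1.270589 ratios match widely reported accounts of the Amazon incident; everything else here is assumed):

```python
# Hypothetical sketch (not the actual sellers' code, which was never published):
# two pricing bots that each reprice as a fixed multiple of the other's price.
def run_repricing(price_a, price_b, ratio_a, ratio_b, rounds):
    """Simulate `rounds` of alternating repricing between two sellers.

    Seller A sets its price to ratio_a times B's price (a slight undercut);
    seller B sets its price to ratio_b times A's (a markup). Whenever
    ratio_a * ratio_b > 1, prices grow exponentially with no human in the
    loop to notice.
    """
    for _ in range(rounds):
        price_a = price_b * ratio_a  # A reacts to B
        price_b = price_a * ratio_b  # B reacts to A
    return price_a, price_b

# Starting prices are made up; only the ratios echo the reported incident.
a, b = run_repricing(20.0, 20.0, 0.9983, 1.270589, rounds=40)
```

After a few dozen rounds the simulated prices are already in the hundreds of thousands — the same runaway shape as the multi-million-dollar book listing reported at the time, with no stealth or intent anywhere in the system.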

replies(1): >>root_a+Lh
3. root_a+Lh[view] [source] [discussion] 2024-05-15 17:04:49
>>ben_w+b9
> the phrase doesn't only mean getting paperclipped.

This is what most people mean when they say "run away": the machine behaves in a surreptitious way to do things it was never designed to do. It does not mean a catastrophic failure that causes harm because the AI did not perform reliably.

replies(1): >>ben_w+pr
4. ben_w+pr[view] [source] [discussion] 2024-05-15 17:51:39
>>root_a+Lh
Every error is surreptitious to those who cannot predict the behaviour of a few billion matrix operations, which is most of us.

When people are not paying attention, they're just as dead whether it's Therac-25, the Thule Air Force Base early-warning radar, or an actual paperclipper.

replies(1): >>root_a+3s
5. root_a+3s[view] [source] [discussion] 2024-05-15 17:54:41
>>ben_w+pr
No. Surreptitious means done with deliberate stealth to conceal your actions, not a miscalculation that results in a failure.
replies(1): >>ben_w+iv
6. ben_w+iv[view] [source] [discussion] 2024-05-15 18:12:17
>>root_a+3s
To those who are dead, that's a distinction without a difference. So far as I'm aware, none of the actual "killer robots gone wrong" sci-fi starts with someone deliberately aiming to wipe themselves out; it's always a misspecification or an unintended consequence.

The fact that we don't know how to determine whether there's a misspecification or an unintended consequence is the alignment problem.

replies(2): >>root_a+gN >>8note+hU
7. root_a+gN[view] [source] [discussion] 2024-05-15 19:49:38
>>ben_w+iv
"Unintended consequences" has nothing to do with AI specifically; it's a problem endemic to every human system. The "alignment" problem has no meaning in today's AI landscape beyond whether or not your LLM will emit slurs.
8. 8note+hU[view] [source] [discussion] 2024-05-15 20:26:08
>>ben_w+iv
To those who are dead, it doesn't matter whether there was a human behind the wheel, or a matrix.