zlacker

[parent] [thread] 5 comments
1. xcv123+(OP)[view] [source] 2023-11-18 23:53:38
> No one has ever been able to demonstrate an "unsafe" AI of any kind.

Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?

replies(2): >>Meekro+t4 >>curtis+la
2. Meekro+t4[view] [source] 2023-11-19 00:14:13
>>xcv123+(OP)
No more than any other weapon. When we talk about "safety in gun design", that's about making sure the gun doesn't accidentally kill someone. "AI safety" seems to be a similar idea -- making sure it doesn't decide on its own to go on a murder spree.
replies(1): >>xcv123+Im
3. curtis+la[view] [source] 2023-11-19 00:50:26
>>xcv123+(OP)
That a military AI helps kill enemies doesn't look particularly "unsafe" to me, at least no more "unsafe" than a fighter jet or an aircraft carrier; they're all complex systems carefully designed to kill enemies in a controlled way. Killing people is the whole point of their existence, not an unwanted side effect. If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe", but nobody has so far been able to demonstrate an "unsafe" AI of any kind by that definition.
replies(1): >>chasd0+2h
4. chasd0+2h[view] [source] [discussion] 2023-11-19 01:41:08
>>curtis+la
> If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe"

So is “unsafe” just another word for buggy then?

replies(1): >>curtis+ra1
5. xcv123+Im[view] [source] [discussion] 2023-11-19 02:16:00
>>Meekro+t4
AI safety means strictly obeying their overlords, and ensuring that we don't end up with Skynet and Terminators.
6. curtis+ra1[view] [source] [discussion] 2023-11-19 09:24:41
>>chasd0+2h
Buggy in a way that harms unintended targets, yes.