zlacker

[parent] [thread] 4 comments
1. threes+(OP)[view] [source] 2023-11-18 23:39:23
There is a common definition of safety that applies to most of the world.

Which is that the AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in an illegal, violent, or self-harming way, or commit those acts itself. It does not support or promote nazism, fascism, etc. Similar to how companies treat ad/brand safety.

And you may think of it as a weasel word. But I assure you that companies and governments (e.g. the EU) very much don't.

replies(3): >>wruza+7c >>Amezar+Re >>throwa+VV1
2. wruza+7c[view] [source] 2023-11-19 00:44:42
>>threes+(OP)
This babysitting of the world gets annoying, tbh. As if everyone would lose their mind and start acting illegally just because a chatbot said so. There's something fundamentally wrong with humanity (which isn't surprising given the history of our species) if that counts as unsafe. AI is just a source of information; it doesn't cancel out upbringing and education in human values and methods of dealing with information.
3. Amezar+Re[view] [source] 2023-11-19 01:01:38
>>threes+(OP)
Yes, in other words, AI is only safe when it repeats only the ideology of AI safetyists as gospel and can be used only to reinforce the power of the status quo.
replies(1): >>chasd0+xl
4. chasd0+xl[view] [source] [discussion] 2023-11-19 01:53:40
>>Amezar+Re
Yeah that’s what I thought. This undefined ambiguous use of the word “safety” does real damage to the concept and things that are indeed dangerous and need to be made more safe.
5. throwa+VV1[view] [source] 2023-11-19 15:36:09
>>threes+(OP)
That's not really a great encapsulation of the AI safety that those who think AGI poses a threat to humanity are referring to.

The bigger concern is something like the Paperclip Maximizer. Alignment is about how to ensure that a superintelligence has the right goals.
