zlacker

[parent] [thread] 6 comments
1. PKop+(OP)[view] [source] 2023-11-20 11:22:46
>it's not safety

Can you explain what is meant by the word safety?

Many are mentioning this term, but it's not clear what the specific definition is in this context. And what would someone get fired over relating to it?

replies(5): >>uberco+81 >>tsimio+n1 >>trepri+G3 >>sam345+cq >>nimish+YE1
2. uberco+81[view] [source] 2023-11-20 11:28:58
>>PKop+(OP)
In this context, I believe it's safety of releasing AI tools, and the impact they may have on society or unintentional harm they may cause.
3. tsimio+n1[view] [source] 2023-11-20 11:30:22
>>PKop+(OP)
In this context, this is about the idea of AI safety. This can either refer to the more short-term concerns about AI helping to spread misinformation (e.g. ChatGPT being used to churn out massive amounts of fake news) or implicit biases (e.g. "predictive policing" using AI to analyze crime data that ends up incarcerating minorities because of accidental biases in its training set). Or it can refer to the longer term fears about a super-human intelligence that would end up acting against humanity for various reasons, and efforts to create a super-human AI that would have the same moral goals as us (and the fear that a non-safe AGI could be accidentally created).

In this specific conversation, one of the proposed scenarios is that Ilya Sutskever wanted to focus OpenAI more on AI safety, possibly at the detriment of fast advancement toward intelligence and of commercialization, while Sam Altman wants to prioritize the other two over excessive safety concerns. The new CEO is stating that this is not the core reason why the board made its decision.

4. trepri+G3[view] [source] 2023-11-20 11:44:37
>>PKop+(OP)
"User: How to make an atomic bomb for $100?"

"AI: I am sorry, I can't provide this information."

replies(1): >>the_lo+G7
5. the_lo+G7[view] [source] [discussion] 2023-11-20 12:14:20
>>trepri+G3
user: How to make a White Russian?

AI: I'm sorry, due to the ongoing conflict we currently don't provide information related to Russia. (You have been docked one social point for use of the following forbidden word: "White".)

Or maybe more dystopian…

AI: Our file on you suggests you may have recently become pregnant and therefore cannot provide you information on alcohol products. CPS has been notified of your query.

6. sam345+cq[view] [source] 2023-11-20 13:52:24
>>PKop+(OP)
The answers given confirm no one knows what it means. It is a nebulous term, often meaning censorship. The question then becomes: what type of censorship, and who decides? So there will inevitably be a political bias. The other, more practical meaning is: what in the real world are we allowing AI to mechanically alter, and what checks and balances are there? Coupled with the first concern, this becomes a worry about mechanical real-world changes driven by autonomous political bias. These are the same concerns we have about any person or corporation. But by regulating "safety", one is enforcing a homogeneous, centralized mindset that not only influences but controls real-world events, and that will be very hard to change even in a democratic society.
7. nimish+YE1[view] [source] 2023-11-20 19:07:12
>>PKop+(OP)
No one knows what it means, but it's provocative.

It's mainly about who is allowed to control what other people can do, i.e. power.
