zlacker

[parent] [thread] 3 comments
1. llm_tr+(OP)[view] [source] 2024-05-15 15:30:39
Yes, not OpenAI's.
replies(3): >>woopsn+Z5 >>dns_sn+G8 >>swat53+Cr
2. woopsn+Z5[view] [source] 2024-05-15 15:57:26
>>llm_tr+(OP)
"Our primary fiduciary duty is to humanity."
3. dns_sn+G8[view] [source] 2024-05-15 16:07:46
>>llm_tr+(OP)
It depends. I don't think OpenAI (or anyone else selling products to the general public) should be forced to make their products so safe that they can't possibly harm anyone under any circumstance. That would just make the product useless (as many LLMs currently are, depending on the topic). However, that's a very different standard than the original comment, which stated:

> I suspect Altman/Brockman/Murati intended for this thing to be dangerous for mentally unwell users, using the exact same logic as tobacco companies.

Tobacco companies knew about the dangers of their products, yet they purposefully downplayed them, manipulated research, and exploited the addictive properties of their products for profit, causing great harm to society.

Disclosing all known (or potential) dangers of your products and not purposefully exploiting society (psychologically, physiologically, financially, or otherwise) is a standard that every company should be forced to meet.

4. swat53+Cr[view] [source] 2024-05-15 17:35:23
>>llm_tr+(OP)
Corporations should benefit society and avoid harming it in any shape or form; this is why we have regulations around them.