1. FinchN+(OP)[view] [source] 2024-05-15 02:34:11
What? How is this not saying "Well, it might be in the best interests of humanity for OpenAI to do [hypothetical thing that seems pretty bad and that OpenAI has never suggested doing], and because they may consider doing said thing, we shouldn't trust them"?
replies(1): >>robbom+p1
2. robbom+p1[view] [source] 2024-05-15 02:49:44
>>FinchN+(OP)
I think OP is just pointing out that "acting in the best interests of humanity" is fairly ambiguous and leaves enough room for interpretation and spin to cover any number of sins.
replies(3): >>FabHK+D4 >>xyzzy1+J4 >>FinchN+Qj
3. FabHK+D4[view] [source] [discussion] 2024-05-15 03:28:06
>>robbom+p1
Like the effective altruists bought themselves a castle with SBF's money - in the best interests of humanity, obviously.
4. xyzzy1+J4[view] [source] [discussion] 2024-05-15 03:29:59
>>robbom+p1
If we can't even align OpenAI, the organisation full of humans, then I'm not sure how well AI alignment can possibly go...
5. FinchN+Qj[view] [source] [discussion] 2024-05-15 06:28:44
>>robbom+p1
Okay, this is reasonable, thanks for clarifying