zlacker

[parent] [thread] 4 comments
1. rdtsc+(OP)[view] [source] 2023-11-20 08:51:18
> OpenAI's ideas of humanity's best interests were like a Catholic mom's

How do you mean? Don’t see what OpenAI has in common with Catholicism or motherhood.

replies(1): >>ric2b+w2
2. ric2b+w2[view] [source] 2023-11-20 09:04:22
>>rdtsc+(OP)
They basically defined AI safety as "AI shouldn't say bad words or tell people how to do drugs" instead of actually making sure that a sufficiently intelligent AI doesn't go rogue against humanity's interests.
replies(1): >>SpicyL+x3
3. SpicyL+x3[view] [source] [discussion] 2023-11-20 09:11:57
>>ric2b+w2
I'm not sure where you're getting that definition from. They have a team working on exactly the problem you're describing. (https://openai.com/blog/introducing-superalignment)
replies(2): >>timeon+e7 >>ric2b+Eb
4. timeon+e7[view] [source] [discussion] 2023-11-20 09:33:12
>>SpicyL+x3
> getting that definition from

That was not about OpenAI's actual definition but about the definition implied by user Legend2440 here >>38344867

5. ric2b+Eb[view] [source] [discussion] 2023-11-20 10:00:11
>>SpicyL+x3
Sure, they might, but what you see in practice in GPT, and what Sam discusses in interviews, is mostly the "AI shouldn't say uncomfortable things" version of AI "safety".