zlacker

[parent] [thread] 3 comments
1. ric2b+(OP)[view] [source] 2023-11-20 09:04:22
They basically defined AI safety as "AI shouldn't say bad words or tell people how to do drugs" instead of actually making sure that a sufficiently intelligent AI doesn't go rogue against humanity's interests.
replies(1): >>SpicyL+11
2. SpicyL+11[view] [source] 2023-11-20 09:11:57
>>ric2b+(OP)
I'm not sure where you're getting that definition from. They have a team working on exactly the problem you're describing. (https://openai.com/blog/introducing-superalignment)
replies(2): >>timeon+I4 >>ric2b+89
3. timeon+I4[view] [source] [discussion] 2023-11-20 09:33:12
>>SpicyL+11
> getting that definition from

That was not about the actual definition from OpenAI but about the definition implied by user Legend2440 here >>38344867

4. ric2b+89[view] [source] [discussion] 2023-11-20 10:00:11
>>SpicyL+11
Sure, they might, but what you see in practice in GPT, and what Sam mostly discusses in interviews, is the "AI shouldn't say uncomfortable things" version of AI "safety".