zlacker

[parent] [thread] 2 comments
1. SpicyL+(OP)[view] [source] 2023-11-20 09:11:57
I'm not sure where you're getting that definition from. They have a team working on exactly the problem you're describing. (https://openai.com/blog/introducing-superalignment)
replies(2): >>timeon+H3 >>ric2b+78
2. timeon+H3[view] [source] 2023-11-20 09:33:12
>>SpicyL+(OP)
> getting that definition from

That was not about the actual definition from OpenAI but about the definition implied by user Legend2440 here >>38344867

3. ric2b+78[view] [source] 2023-11-20 10:00:11
>>SpicyL+(OP)
Sure, they might, but what you see in practice in GPT, and what Sam mostly discusses in interviews, is the "AI shouldn't say uncomfortable things" version of AI "safety".