zlacker

[parent] [thread] 3 comments
1. crazyg+(OP)[view] [source] 2024-05-20 23:13:43
Seriously. This is utterly baffling to me.

OpenAI is trying to demonstrate how it's so trustworthy, and is always talking about how important it is to be trustworthy when it comes to something as important and potentially dangerous as AI.

And then they do something like this...??

I literally don't understand how they could be this dumb. Do they not have a lawyer? Or do they not tell their corporate counsel what they're up to? Or just ignore the counsel when they do?

replies(2): >>steveB+m >>prawn+7e
2. steveB+m[view] [source] 2024-05-20 23:15:41
>>crazyg+(OP)
Also retired the entire safety team in the same week too, lol.
replies(1): >>bigiai+16
3. bigiai+16[view] [source] [discussion] 2024-05-20 23:51:11
>>steveB+m
I wonder how much their anti-disparagement clauses are about covering up how this went down internally?
4. prawn+7e[view] [source] 2024-05-21 00:42:07
>>crazyg+(OP)
Especially considering that the advantage gained by having an AI sound like this one individual is absolutely minimal. It's not as though any significant portion of a target market is going to throw a tantrum, saying "No, no, I refuse to accept this simulated companionship unless it sounds exactly like the voice in that one particular movie several years ago." Baffling that the company didn't recognise the risk here and retire that voice as soon as they were turned down the first time.