zlacker

Statement from Scarlett Johansson on the OpenAI "Sky" voice
1. anon37+t5 2024-05-20 22:58:41
>>mjcl+(OP)
Well, that statement lays out a damning timeline:

- OpenAI approached Scarlett last fall, and she refused.

- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)

- Receiving no response, OpenAI demoed the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.

- When Scarlett’s counsel asked for an explanation of how the “Sky” voice was created, OpenAI yanked the voice from their product line.

Perhaps Sam’s next tweet should read “red-handed”.

2. npunt+5a 2024-05-20 23:23:15
>>anon37+t5
When people cheat on (relatively) small things, it's usually an indication that they'll cheat on big things too.
3. slg+4g 2024-05-20 23:58:40
>>npunt+5a
Which is what makes me wonder whether this might grow into a galvanizing event for pro-creator protests against these AI models and companies. What happened here isn't unique to voices or even to Scarlett Johansson; it's just how these companies and their products operate in general.
4. bakuni+FW 2024-05-21 06:33:33
>>slg+4g
I think the only way these protests get really tangible results is if we hit a ceiling in LLM capabilities. The technology on its current trajectory is simply too valuable, in both economic and military applications, to pull back from, and "overregulation" can easily be swatted down by citing national security concerns about China. As far as I know, China has significantly stricter data and privacy regulations than the US when it comes to the private sector, but those probably count for little when it comes to the PLA.
5. andy_p+wl1 2024-05-21 10:08:34
>>bakuni+FW
We have almost run out of training data already, so I’m not convinced they will suddenly get massively more generalised. Give them reasoning tasks they haven’t seen before and LLMs absolutely fall apart, producing essentially gibberish. Right now they’re search engines that give you one extremely good result that you can refine up to a point; they are not thinking, even though they understand a bit more than the search engines of the past.