zlacker

[return to "Statement from Scarlett Johansson on the OpenAI "Sky" voice"]
1. anon37+t5[view] [source] 2024-05-20 22:58:41
>>mjcl+(OP)
Well, that statement lays out a damning timeline:

- OpenAI approached Scarlett last fall, and she refused.

- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)

- Having received no response, OpenAI demoed the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.

- When Scarlett’s counsel asked for an explanation of how the “Sky” voice was created, OpenAI yanked the voice from their product line.

Perhaps Sam’s next tweet should read “red-handed”.

2. nickth+R7[view] [source] 2024-05-20 23:10:38
>>anon37+t5
This statement from Scarlett really changed my perspective. I used and loved the Sky voice; I did feel it sounded a little like her, but more than that, it was the best of their voice offerings. I was mad when they removed it. But now I’m mad it was ever there to begin with. This timeline makes it clear that this wasn’t a coincidence, and maybe not even a case of hiring an impressionist (which is where things get a little more wishy-washy for me).
3. windex+qA[view] [source] 2024-05-21 02:43:47
>>nickth+R7
The thing about this situation is that Altman was willing to lie and steal a celebrity's voice for use in ChatGPT. What he did, the timeline, everything about it is sleazy if, in fact, that's the story.

The really concerning part here is that Altman is, and wants to be, a large part of shaping AI regulation [0]. Quite the public contradiction.

[0] https://www.businessinsider.com/sam-altman-openai-artificial...

4. startu+pB[view] [source] 2024-05-21 02:53:28
>>windex+qA
Most likely it was an unforced error; there's been a lot of chaos with cofounders and the board revolt, so it's easy to lose track of something seemingly minor.

Like some intern’s idea to train the voice on their favorite movie.

And then they decided this was an acceptable risk/reward and not a big liability, so it was worth it.

This could be a well-planned opening move in a regulation gambit, but that's unlikely.

5. windex+9C[view] [source] 2024-05-21 03:00:52
>>startu+pB
I don't think this makes any sense at all, quite honestly. Why would an "intern" be training one of ChatGPT's voices for a major release?

If, in fact, that was the case, then OpenAI is not aligned with the statement they just put out about their utmost focus on rigor and careful consideration, in particular this line: "We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities." [0]

[0] https://x.com/gdb/status/1791869138132218351
