zlacker

[return to "Statement from Scarlett Johansson on the OpenAI "Sky" voice"]
1. anon37+t5[view] [source] 2024-05-20 22:58:41
>>mjcl+(OP)
Well, that statement lays out a damning timeline:

- OpenAI approached Scarlett last fall, and she refused.

- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)

- Not receiving a response, OpenAI demos the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.

- When Scarlett’s counsel asked for an explanation of how the “Sky” voice was created, OpenAI yanked the voice from their product line.

Perhaps Sam’s next tweet should read “red-handed”.

◧◩
2. nickth+R7[view] [source] 2024-05-20 23:10:38
>>anon37+t5
This statement from Scarlett really changed my perspective. I used and loved the Sky voice, and I did feel it sounded a little like her, but moreover it was the best of their voice offerings. I was mad when they removed it. But now I'm mad it was ever there to begin with. This timeline makes it clear that this wasn't a coincidence, and maybe not even the hiring of an impressionist (which is where things get a little more wishy-washy for me).
◧◩◪
3. windex+qA[view] [source] 2024-05-21 02:43:47
>>nickth+R7
The thing about this situation is that Altman was willing to lie and steal a celebrity's voice for use in ChatGPT. What he did, the timeline, all of it is sleazy, if that's in fact the story.

The really concerning part here is that Altman is, and wants to be, a large part of AI regulation [0]. Quite the public contradiction.

[0] https://www.businessinsider.com/sam-altman-openai-artificial...

◧◩◪◨
4. startu+pB[view] [source] 2024-05-21 02:53:28
>>windex+qA
Most likely it was an unforced error; there's been a lot of chaos with the cofounders and the board revolt, so it's easy to lose track of something really minor.

Like some intern’s idea to train the voice on their favorite movie.

And then they decided this was an acceptable risk/reward and not a big liability, so worth it.

This could be a well-planned opening move of a regulation gambit. But unlikely.

◧◩◪◨⬒
5. mbrees+HD[view] [source] 2024-05-21 03:15:19
>>startu+pB
This is an unforced error, but it isn’t minor. It’s quite large and public.

The general public doesn’t understand the details and nuances of training an LLM, the various data sources required, and how to get them.

But the public does understand stealing someone’s voice. If you want to keep the public on your side, it’s best to not train a voice with a celebrity who hasn’t agreed to it.

◧◩◪◨⬒⬓
6. surfin+kX[view] [source] 2024-05-21 06:40:34
>>mbrees+HD
I had a conversation with someone responsible for introducing LLMs into a process that involves personal information. That person dismissed my concern about one person's data appearing in a report on another person; he told me it would be possible to train the AI to avoid that. The rest of the conversation convinced me that AI is seen as magic that can do anything. It seems to me we are seeing a split between those who don't understand it and fear it, and those who don't understand it but want to align themselves with it. The latter are the ones I fear the most.
◧◩◪◨⬒⬓⬔
7. komboo+T71[view] [source] 2024-05-21 08:26:30
>>surfin+kX
The "AI is magic and we should simply believe" mindset is even being actively promoted, because all these VC hucksters need it.

Any criticism of AI is met with "but if we all just hype AI harder, it will get so good that your criticisms won't matter," or is flat-out denied. You've got tech that's deeply flawed with no obvious path to becoming unflawed, and the current AI 'leaders' run companies with no clear way to turn a profit other than being relentlessly hyped on projected future growth.

It's becoming an extremely apparent bubble.

[go to top]