zlacker

[return to "OpenAI didn’t copy Scarlett Johansson’s voice for ChatGPT, records show"]
1. skille+CM[view] [source] 2024-05-23 06:13:55
>>richar+(OP)
The thing that worried me initially was that:

- the original report by Scarlett said she was approached months ago, and then two days prior to the launch of GPT-4o she was approached again

Because of the above, my immediate assumption was that OpenAI definitely did her dirty. But this report from WaPo debunks at least some of it, because the records they have seen show that the voice actor was contacted months before OpenAI contacted Scarlett for the first time. (It also goes to show just how many months in advance OpenAI works on projects.)

However, this does not dispel the fact that OpenAI did contact Scarlett, Sam Altman did post the tweet saying "her", and the voice has at least "some" resemblance to Scarlett's voice, at least enough to have people split, with one group saying it does and the other saying it does not.

◧◩
2. serial+KN[view] [source] 2024-05-23 06:21:44
>>skille+CM
I don't know; to me, it just sounds like they know how to cover all their bases.

To me, it sounds like they had the idea to make their AI sound like "her". For the initial version, they hired a voice actor who sounds like the one in the movie, as a proof of concept.

They still liked it, so it was time to contact the real star. In the end, it's not just the voice, it would have been the brand: just imagine the buzz they would have gotten if Scarlett J was the official voice of the company. She said no, and they were like, "too bad, we already decided what she will sound like; the only difference is whether it will be labelled as SJ or not".

Eventually, someone probably felt it was a bit too dodgy since the resemblance was uncanny, so they gave it another go, probably ready to offer more money. She still refused, but in the end it didn't change a thing.

◧◩◪
3. gnicho+cO[view] [source] 2024-05-23 06:26:42
>>serial+KN
Agreed — seems like they had a plan, and probably talked extensively with Legal about how to develop and execute the plan to give themselves plausible deniability. The tweet was inadvisable, and undoubtedly not part of the actual plan (unless it was to get PR).
◧◩◪◨
4. safety+1U[view] [source] 2024-05-23 07:16:49
>>gnicho+cO
> unless it was to get PR

I think this possibility doesn't receive enough attention. There is a class of people who've figured out that they can say the most scandalous things online and it's a net positive because it generates so much exposure. (As a lowly middle-class employee you can't do this - you just get fired and go broke - but at a certain level of wealth and power you're immune from that.) It's the old P.T. Barnum principle: "They can say whatever they want about me as long as they spell my name right." Guys like Trump and Musk know exactly what they're doing. Why wouldn't Sam?

Johansson's complaint is starting to look a little shaky, especially if you remove that "her" tweet from the equation. I wouldn't put this past Altman at all; he knows exactly what happened and what didn't inside OpenAI, so maybe he knew she didn't have a case and decided to play sociopathic 3D chess with her (and beat her in one round).

◧◩◪◨⬒
5. repeek+rX[view] [source] 2024-05-23 07:44:14
>>safety+1U
As a lowly middle-class employee, what you post could be interpreted externally as "representing the company," which is why you see disclaimers like "all views are my own" on some social media profiles. Sam is the company, so they can't get mad at him, and beyond that he's a private individual saying whatever he wants on social media without lying.

In order to sue, there need to be damages, and if they didn't copy the voice then the rest doesn't matter, which Sam and his team clearly knew and were quick to work with the press on. I agree that smart people take advantage of what they can get away with, but this controversy couldn't have turned out better for increasing brand awareness, good or bad (as you say, just like Trump and Musk know how to do).

[go to top]