zlacker

[return to "OpenAI didn’t copy Scarlett Johansson’s voice for ChatGPT, records show"]
1. jmull+P12[view] [source] 2024-05-23 15:22:46
>>richar+(OP)
Well, here are some things that aren't really being disputed:

* OpenAI wanted an AI voice that sounds like SJ

* SJ declined

* OpenAI got an AI voice that sounds like SJ anyway

I guess they want us to believe this happened without shenanigans, but it's a bit hard to.

The headline of the article is a little funny, because records can't really show they weren't looking for an SJ sound-alike. They can only show that those records didn't mention it. The key decision-makers could simply have agreed to keep that fact close to the vest -- they may well have understood that knocking off a high-profile actress was legally perilous.

Also, I think we can readily assume OpenAI understood that one of their potential voices sounded a lot like SJ. Since they were pursuing her they must have had a pretty good idea of what they were going after, especially considering the likely price tag. So even if an SJ voice wasn't the original goal, it clearly became an important goal to them. They surely listened to demos for many voice actors, auditioned a number of them, and may even have recorded many of them, but somehow they selected one for release who seemed to sound a lot like SJ.

◧◩
2. HarHar+r82[view] [source] 2024-05-23 15:52:05
>>jmull+P12
Clearly an SJ voice was the goal, given that Altman asked her to do it, asked her a second time just two days before the ChatGPT-4o release, and then tweeted "her" on the release day. The next day Karpathy, recently ex-OpenAI, then tweets "The killer app of LLMs is Scarlett Johansson".

Altman appears to be a habitual liar. Note his recent claim not to have been aware of the non-disparagement and claw-back terms he had departing employees agree to. Are we supposed to believe that the company lawyer or head of HR did this without consulting (or, more likely, being instructed by) the co-founder and CEO?!

◧◩◪
3. tptace+B82[view] [source] 2024-05-23 15:52:43
>>HarHar+r82
They hired the actor who did the voice months before they contacted SJ. The reaction on this site to the news that this story was false is kind of mind-bending.
◧◩◪◨
4. johnny+xe3[view] [source] 2024-05-23 22:10:05
>>tptace+B82
Tbf, Altman really screwed this up with that tweet and the very sudden contact. There probably wouldn't be much of a case otherwise.

If I had to guess at the best-faith order of events (more than OpenAI deserves):

- someone liked Her (clearly)

- they got a voice that sounded like Her, perhaps subconsciously (this is fine)

- someone high up hears it and thinks "wow this sounds like SJ!" (again, fine)

- they think "hey, we have money. Why not get THE SJ?!"

- they contact SJ, she refuses, and they realize money isn't enough (still fine, but there's definitely some schadenfreude here)

- marketing starts semi-independently, and they make references to Her, because famous AI voice (here's where the cracks start to form; sadly, the marketer may not have even realized what talks went on)

- someone at OpenAI makes one last Hail Mary before the release and contacts SJ again (this is where the trouble starts; MAYBE they didn't know about SJ refusing, but someone in the pipeline should have)

- Altman, who definitely should have been aware of these contacts, makes that tweet. Maybe they forgot, maybe they didn't realize the implications. But the lawyers' room is now on fire

So yeah, Hanlon's razor. This could be a good-faith mistake, but OpenAI did a good job of ruining their goodwill even before this PR disaster. Again, sweet schadenfreude, even if we assume none of this was intentional.

◧◩◪◨⬒
5. BeefWe+pw3[view] [source] 2024-05-24 00:30:50
>>johnny+xe3
Just how many "good faith mistakes" is a company / CEO permitted to make before a person stops believing the good-faith part?
◧◩◪◨⬒⬓
6. johnny+2x3[view] [source] 2024-05-24 00:37:56
>>BeefWe+pw3
I'm a pretty forgiving person; I don't really mind mistakes as long as 1) they are admitted to, 2) steps are taken to actively reverse course, and 3) guidelines are put in place to prevent the same mistakes from happening again.

But you more or less drain that good faith when you are caught with your pants down and decide instead to double down. So I was pretty much against OpenAI ever since the whole "paying for training data is expensive" response during the NYT lawsuit.

----

In general, the populace can be pretty unforgiving (sometimes justifiably, sometimes not). It really only takes one PR blunder to tank that good faith, and much longer to restore it.

[go to top]