Occam's Razor argues that Sam simply wanted ScarJo's voice, but couldn't get it, so they came up with a clone that is probably legal on a technicality but ethically murky.
Isn't that what OpenAI does all the time? Do ethically murky things, and when people react, move the goalposts by saying "Well, it's not illegal now, is it?".
You want Brad Pitt for your movie. He says no. You hire Benicio Del Toro because of the physical resemblance. Big deal.
Having seen "Her" and many other Scarlett Johansson movies, I didn't think for a second that GPT-4o sounded like her. On the contrary, I wondered why they had chosen the voice of a middle-aged woman, and whether that was about being woke. It wasn't until social media went hysterical that I realized the voices were sort of similar.
Taking the voice offline and then revealing it was a recording of someone else who coincidentally sounded exactly the same is definitely plan B or C, though.
I don't understand how you can trust OpenAI so much to think it was all an accident.
An ordinary person worries about ever having to deal with the legal system. A big company deals with it all the time.
They're squarely in knockoff-product territory, deliberately aping the branding of the real thing.
"Dr Peppy isn't a trick to piggyback on Dr Pepper, it's a legally distinct brand!" might give you enough of a fig leaf in court with a good lawyer, but it's very obvious what kind of company you're running to anybody paying attention.
Glover sued and won.
There are any number of human-sounding movie AIs, but apparently only one whose actor has specifically and repeatedly rejected this association.
Does he keep getting into ethical hot water because he's a reckless fool, or because he doesn't really care about ethics at all, despite all the theatre?
AIs and automated systems, real and fictional, traditionally use women's voices more than men's. Apparently there was some research finding that a female voice "stood out" to all-male bomber crews; the B-58 was issued with recordings of Joan Elms (https://archive.org/details/b58alertaudio) and this was widely copied.
(obvious media exception: HAL)
Similarly, other companies might be inadvertently playing the long game by building a better (legal/technical) foundation than OpenAI, and will quietly replace it when it slows down or burns out. I think OpenAI is in overdrive and showing signs of overheating now.
This was rocket fuel for activists trying to get a nationwide personality rights law on the books. That would almost certainly increase costs for OpenAI.
Do you think OpenAI did something similar here? In your case there was some expectation set by the first movie; OpenAI has nothing comparable. I'm all for people getting credit for their work/assets, and I would normally side with the individual against big tech, but I think the case OpenAI and SJ have at hand is already on a path to set a bad precedent, regardless of which of them wins, if either.
Of the films I've seen anyway.
Do you think nobody said anything like this in an email or Slack?
Can you explain and/or cite the legal basis here? What cases? What law?
(1) I've become tired of the "I honestly don't understand" prefix. Is the person saying it genuinely hoping to be shown better ways of understanding? Maybe, maybe not, but I'll err on the side of charitability.
(2) So, if the commenter above is reading this: please try to take all of this constructively. There are often opportunities to recalibrate one's thinking and/or write more precisely. This is not a veiled insult; I'm quite sincere. I'm also hoping the human ego won't be in the way, which is a risky gamble.
(3) Why is the commenter so sure the other person is delusional? Whatever one thinks about the underlying claim, one would be wise to admit one's own fallibility and thus uncertainty.
(4) If the commenter was genuinely curious why someone else thought something, it would be better not to presuppose they are "delusional". Doing that makes it very hard to be curious and impairs a sincere effort to understand (rather than dismiss).
(5) It is muddled thinking to lump the intentions of all of "OpenAI" into one claimed agent with clear intentions. This just isn't how organizations work.
(6) (continuing from (5)...) this isn't even how individuals work. Virtually all people harbor an inconsistent mess of intentions that vary over time. You might think this is hair-splitting, but if you want to _predict_ why people do specific irrational things, you'll find this level of detail is required. Assuming a perfect utility function run by a perfect optimizer is wishful thinking and doesn't match the experimental evidence.
And every one of its competitors. I think regulatory capture would be just as much, if not more, of a victory for OpenAI.
Really weird line of reasoning. Siri, Alexa, Google Home… etc.
Maybe that's just me, and it is a win for them on the whole. Hopefully not.
[1]:https://en.wikipedia.org/wiki/Personality_rights#United_Stat...
Typical sleazy OpenAI / Sam Altman behaviour, AFAICS.