Also, whether they *intended* to replicate the Her voice and whether they actually *did* in the end are two very different questions.
While Sky's voice shares some traits with SJ's, it sounds different enough that I was never confused about whether it was actually SJ.
To be honest, the new Sky is obnoxious and overly emotive. I’m not trying to flirt with my phone.
Leaving the IP issue aside, they could clearly have hired a voice actor who closely resembles Johansson, perhaps without any additional tweaks to the voice in post-processing. If they did do that, I'm not totally sure what position to take on the matter.
I think they might have mimicked the style. The voice, though, is not even close. If I heard both voices in a conversation, I would have thought 2 different people were talking.
I mean, a year or two ago voice cloning was basically science fiction; now we’re debating whether a voice being distinguishable is proof it wasn’t cloned, sourced from, or based on someone.
FWIW, I also thought for a long time that it was supposed to be the Her/SJ voice, until I heard them side by side. Not sure where I stand on the issue, so I’m glad I’m on the sidelines :)
And I wouldn't set the bar at 50/50, i.e. that the voices need to be indistinguishable. The reasonable standard is whether it sounds __like__ her, which could hold even if people identified the chatbot 100% of the time (e.g. what if I just had a roboticized version of a person's voice?). The truth is that I could send you clips of the same person[0], tell you they're different people, and a good portion of listeners would be certain they really are different people (maybe __you're different__™, but that doesn't matter).
So use that as the litmus test either way: not whether you think they are different, but "would a reasonable person think this is supposed to sound like ScarJo?" Not you, other people. Then ask yourself whether there is sufficient evidence that OpenAI either purposefully intended to clone her voice OR got so set in their ways (maybe after she declined, but they had already hyped themselves up) that they tricked themselves into only accepting a voice actor who ended up sounding similar. That last part is important because it shows how such a thing can happen without anyone ever explicitly stating such a requirement (and maybe without even recognizing it themselves). Remember that we humans do a lot of subconscious processing (I have a whole other rant about people building AGI -- a field I'm in, fwiw -- not spending enough time understanding their own minds or the minds of animals).
Edit:
[0] I should add that there's a robustness issue here, and it's going to be a distinguishing factor for people deciding whether the voices are different. Without a doubt, those voices are "different"; the question is in what way. The same way someone's voice might change day to day? The way someone sounds on the phone vs. in person? Certainly the audio quality is different, and if you're expecting a 1-to-1 match where we can overlay waveforms perfectly, then no, you would never be able to do this. But that's not a fair test (see the sketch below for what a fairer, embedding-based comparison might look like).
However, the fact that there is a debate at all shows that a more thorough investigation is warranted.
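To make the footnote concrete, here's a minimal sketch of what a fairer "does this sound like the same speaker" check might look like, assuming a pretrained speaker-embedding model (resemblyzer here; the clip file names and the 0.75 threshold are just illustrative, not calibrated values):

```python
# Rough, hypothetical sketch of a fairer comparison than overlaying waveforms:
# compare speaker embeddings, which are designed to be robust to mic quality,
# codecs, and day-to-day variation in the same person's voice.
# Uses the resemblyzer package (pip install resemblyzer).
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # loads a pretrained speaker-verification model

# preprocess_wav resamples the audio and trims long silences.
wav_a = preprocess_wav("sky_clip.wav")   # illustrative file name
wav_b = preprocess_wav("her_clip.wav")   # illustrative file name

# Each clip is reduced to a fixed-length vector summarizing the speaker's
# timbre rather than the exact audio samples.
emb_a = encoder.embed_utterance(wav_a)
emb_b = encoder.embed_utterance(wav_b)

# Cosine similarity between embeddings: closer to 1.0 means the model thinks
# the clips were spoken by the same person.
similarity = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
print(f"speaker similarity: {similarity:.3f}")
print("model says: likely same speaker" if similarity > 0.75
      else "model says: likely different speakers")
```

Even then, a high or low score only tells you what one model thinks of the timbre; it says nothing about intent, which is the part the litmus test above is really about.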