They clearly thought it was close enough that they asked for permission, twice. And got two no’s. Going forward with it at that point was super fucked up.
It’s very bad to not ask permission when you should. It’s far worse to ask for permission and then ignore the response.
Totally ethically bankrupt.
I think so but that could just be me.
Edit: to clarify, since it is not an exactly identical voice, and arguably not even that close, they can plausibly deny it, and we never knew what their intention was.
But in this case, they have clearly created the voice to represent Scarlett's voice, to demonstrate the capabilities of their product and gain marketing power.
I'm guessing if any of the Harry Potter actors threatened the hobbyist with legal action the video would likely come down, though I doubt they would bother even if they didn't care for the video.
You seem to be misunderstanding the situation here. They wanted ScarJo to voice their voice assistant, and she refused twice. They also independently created a voice assistant which sounds very similar to her. That doesn't mean they thought they had to ask permission for the similar voice assistant.
This reads like "we got caught red-handed" damage control: doing the bare minimum for it not to appear malicious and deliberate when the timeline is read out in court.
There is a major difference between parodying someone by imitating them while clearly and almost explicitly being an imitation; and deceptively imitating someone to suggest they are associated with your product in a serious manner.
I suspect a video avatar service that looked exactly like her would fall afoul of fair use as well. Though an image gen that used some images of her (and many others) to train and spit out generic "attractive blonde woman" is fair use in my opinion.
But it kind of looks like they released it knowing they couldn't defend it in court which must seem pretty bonkers to investors.
I mean, why not actually compare the voices before forming an opinion?
https://www.youtube.com/watch?v=SamGnUqaOfU
https://www.youtube.com/watch?v=vgYi3Wr7v_g
-----
Answer: because they knew they needed permission, after working so hard to associate with Her, and they hoped, in traditional tech fashion, that if they moved fast and broke things enough, everyone would have to reshape around OA's wants rather than around the preexisting rights of the humans involved.
They likely have a legal position which is defensible.
They're much more worried that they don't have a PR position which is defensible.
What's the point of winning the (legal) battle if you lose the war (of public opinion)?
Given the rest of their product is built on apathy to copyright, they're actively being sued by creators, and the general public is sympathetic to GenAI taking human jobs...
... this isn't a great moment for OpenAI to initiate a long legal battle, against a female movie actress / celebrity, in which they're arguing how her likeness isn't actually controlled by her.
Talk about optics!
(And I'd expect they quietly care much more about their continued ability to push creative output through their copyright launderer, than get into a battle over likeness)
Whether you think it sounds like her or not is a matter of opinion, I guess. I can see the resemblance, and I can also see a resemblance to Jennifer Lawrence and others.
What Johansson is alleging goes beyond this, though. She is alleging that Altman (or his team) reached out to her (or her team) asking her to lend her voice, she was not interested, and then she was asked again just two days before GPT-4o's announcement, and she rejected again. Now there's a voice that, in her opinion, sounds a lot like her.
Luckily, the legal system is far more nuanced than just listening to a few voices and mentally comparing them to other voices individuals have heard over the years. They'll be able to figure out, as part of discovery, what led to the Sky voice sounding the way it does (intentionally using Johansson's likeness? coincidence? directly trained off her interviews/movies?), whether OpenAI were willing to slap Johansson's name onto the existing Sky during the presentation, whether the "her" tweet and the combination of the Sky voice was supposed to draw the subtle connection... This allegation is just the beginning.
It’s shocking to me how people cannot see this.
The only surprise here is that they didn’t think she’d push back. That is what completes the multilayered cosmic and dramatic irony of this whole vignette. Honestly feels like Shakespeare or Arthur Miller might have written it.
Asking for her vocal likeness is completely in line with just wanting the association with "Her" and the big PR hit that would come along with that. They developed voice models on two different occasions and hoped twice that Johansson would allow them to make that connection. Neither time did she accept, and neither time did they release a model that sounded like her. The two-day run-up isn't suspicious either, because we're talking about a general audio2audio transformer here. They could likely fine-tune it (if even that is necessary) on her voice in hours.
I don't think we're going to see this go to court. OpenAI simply has nothing to gain by fighting it. It would likely sour their relations with a bunch of media big-wigs and cause them bad press for years to come. Why bother when they can simply disable Sky until the new voice mode releases, allowing them to generate a million variations of highly expressive female voices?
That claim could very well be true. The letter requested information on how the voice was trained - OpenAI may not want that can of worms opened lest other celebrities start paying closer attention to the other voices.
Their hubris will walk them right into federal prison for fraud if they’re not careful.
If Effective Altruists want to speed the adoption of AI with the general public, they'd do well to avoid talking about it, lest the general public make a connection between EA and AI.
I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
That actually seems like there may be a few people involved and one of them is a cowboy PM who said fuck it, ship it to make the demo. And then damage control came in later. Possibly the PM didn't even know about the asks for permission?
And promoted it using a tweet naming the movie that Johansson performed in, for the role that prompted them to ask her in the first place.
You have to be almost deliberately naive to not see that they were attempting to use her vocal likeness in this situation. There's a reason they immediately walked it back after the situation was revealed.
Neither a judge, nor a jury, would be so willingly naive.
Maybe (maybe!) it’s worth it for someone like Johansson to take on the cost of that to vindicate her rights—but it’s certainly not the case for most people.
If your rights can only be defended from massive corporations by bringing lawsuits that cost hundreds of thousands to millions of dollars, then only the wealthy will have those rights.
So maybe she wants new legislative frameworks around these kind of issues to allow people to realistically enforce these rights that nominally exist.
For an example of updating a legislative framework to allow more easily vindicating existing rights, look up "anti-SLAPP legislation", which many states have passed to make it easier for the defendant of a meritless lawsuit seeking to chill speech to have the lawsuit dismissed. Anti-SLAPP legislation does almost nothing to change the actual rights that a defendant has to speak, but it makes it much more practical for a defendant to actually exercise those rights.
So, the assumption that a call for updated legislation implies that no legal protection currently exists is just a bad assumption that does not apply in this situation.
Doesn't sound like they have that either.
Many things that are legal are of questionable ethics. Asking permission could easily just be an effort for them to get better samples of her voice. Pulling the voice after debuting it is 100% a PR response. If there's a law that was broken, pulling the voice doesn't unbreak it.
> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
Frankly, if you’re going to make an “ends justify the means” moral argument, you need to do a lot of work to address how those arguments have gone horrifically wrong in the past, and why the moral framework you’re using isn’t susceptible to those issues. I haven’t seen much of that from Effective Altruists.
I was responding to someone who was specifically saying an EA might argue why it’s acceptable to commit a moral wrong, because the ends justify it.
So, again, if someone is using EA to decide how to direct their charitable donations, volunteer their time, or otherwise decide between moral goods, I have no problem with it. That specifically wasn't the context I was responding to.
Her voice alone didn’t get her there — she did. That’s why celebrities are so protective about how their likeness is used: their personal brand is their asset.
There's established legal precedent on exactly this. Even if they didn't train on her likeness, if an unknowing observer could reasonably suspect that she personally lent her voice to this, she has a strong case. Even OpenAI knew this, or they would not have asked in the first place.
When studios approach actress A and she refuses, and another actress B takes the role, is that infringing on A's rights? Or should they just scrap the movie?
Maybe if they replicated a scene from A's movies, or there was a striking likeness between the voices... but not generally.
If a PM there didn’t say “fuck it ship it even without her permission” they’d probably be replaced with someone who would.
I expect the cost of any potential legal action/settlement was happily accepted in order to put on an impressive announcement.
I think the copyright industry wants to grab new powers to counter the infinite capacity of AI to create variations. But that move would kneecap the creative industry first; newcomers have no place in a fully copyrighted space.
It reminds me of how NIMBY blocks construction to keep up the prices. Will all copyright space become operated on NIMBY logic?
Buckle in, go to court, and double down on the fact that the public's opinion of actors is pretty damn fickle at the best of times - particularly if what you released was in fact based on someone you signed a valid contract with who just sounds similar.
Of course, this is all dependent on actually having a complete defense - you absolutely would not want to find Scarlett Johansson voice samples in file folders associated with the Sky model if it went to court.
Extremely reasonable position, and I'm glad that every time some idiot brings it up in the EA forum comments section they get overwhelmingly downvoted, because most EAs aren't idiots in that particular way.
I have no idea what the rest of your comment is talking about; EAs that have opinions about AI largely think that we should be slowing it down rather than speeding it up.
People who hate Hollywood? Most of that crowd hates tech even more.
* Because it would take the first news cycle to be branded as that
I did try to cabin my arguments to Effective Altruists that are making ends-justify-the-means arguments. I really don't have a problem with people that are attempting to use EA to decide between multiple good outcomes.
I'm definitely not engaged enough with the Effective Altruists to know where the plurality of thought lies, so I was trying to respond in the context of this argument being put forward on behalf of Effective Altruists.
The only part I’d say applies to all EA, is the brand taint that SBF has done in the public perception.
From elsewhere in the thread, likeness rights apparently do extend to intentionally using lookalikes / soundalikes to create the appearance of endorsement or association.
Given the timeline it sounds like the PM was told "just go ahead with it, I'll get the permission".
There's a fair amount of EA discussion of utilitarianism's problems. Here's EA founder Toby Ord on utilitarianism and why he ultimately doesn't endorse it:
https://forum.effectivealtruism.org/posts/YrXZ3pRvFuH8SJaay/...
>If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI
Very few in the EA community want to speed AI adoption. It's far more common to think that current AI companies are being reckless, and we need some sort of AI pause so we can do more research and ensure that AI systems are reliably beneficial.
>When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
The all-time most upvoted post on the EA Forum condemns SBF: https://forum.effectivealtruism.org/allPosts?sortedBy=top&ti...
I probably should have said _those_ Effective Altruists are shitty utilitarians. I was attempting—and since I’ve had to clarify a few times clearly failed—to take aim at the effective altruists that would make the utilitarian trade off that the commenter mentioned.
In fact, there’s a paragraph from the Toby Ord blog post that I wholeheartedly endorse and I think rebuts the exact claim that was put forward that I was responding to.
> Don’t act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.
So, my words were too broad. I don’t actually mean all effective altruists are shitty utilitarians. But the ones that would make the arguments I was responding to are.
I think Ord is a really smart guy, and has worked hard to put some awesome ideas out into the world. I think many others (and again, certainly not all) have interpreted and run with it as a framework for shitty utilitarianism.
Which is what this would be in the not-stupid version of events: they hired a voice actress for the rights to create the voice, she was paid, and then is basically told by the courts "actually you're unhireable because you sound too much like an already rich and famous person".
The issue of course is that OpenAI's reactions so far don't seem to indicate that they're actually confident they can prove this, or that this is the case. Coz if this is actually the case, they're going about handling this in the dumbest possible way.
The scenario would have been that they approach none.
You realise that there are multiple employees, including the CEO, publicly drawing direct comparisons to the movie Her after having tried and failed twice to hire the actress who starred in the movie? There is no non-idiotic reading of this.
That's what I'm discussing.
Edit: which is to say, I think Sam Altman may have been a god damn idiot about this, but it's also wild anyone thought that ScarJo or anyone in Hollywood would agree - AI is currently the hot button issue there and you'd find yourself the much more local target of their ire.
Who is the underdog in this situation? In your comment it seems like you're framing OpenAI as the underdog (or perceived underdog) which is just bonkers.
Hacker News isn't a hivemind and there are those of us who work in GenAI who are firmly on the side of the creatives and gasp even rights holders.
There are quite a few issues here: First, this is assuming they actually hired a sound-alike person, which is not confirmed. Second, they are not an underdog (the voice actress might be, but she's most likely pretty unaffected by this drama). Finally, they were clearly aiming to impersonate ScarJo (as confirmed by them asking for permission and sama's tweet), so this is quite a different issue than "accidentally" hiring someone that "just happens to" sound like ScarJo.
Did you?
> Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.
Jobs responds minutes later... "Fuck the lawyers."
One very easy explanation is that they trained Sky using another voice (this is the claim, and there's no reason to doubt it's true), wanting to replicate the style of the voice in "Her", but would have preferred to use SJ's real voice for the PR impact it could have had.
Yanking it could also easily be a pre-emptive response to avoid further PR drama.
You will obviously decide you don't believe those explanations, but to many of us they're quite plausible; in fact, I'd even suggest likely.
(And none of this precludes Sam Altman and OpenAI being dodgy anyway)
My view is, of course it is ok. SJ doesn't own the right to a particular style of voice.
I thought about your comment for a while, and I agree that there is a fine line between "realistic parody" and "intentional deception" that makes deepfake AI almost impossible to defend. In particular I agree with your distinction:
- In matters involving human actors, human-created animations, etc, there should be great deference to the human impersonators, particularly when it involves notable public figures. One major difference is that, since it's virtually impossible for humans to precisely impersonate or draw one another, there is an element of caricature and artistic choice with highly "realistic" impersonations.
- AI should be held to a higher standard because it involves almost no human expression, and it can easily create mathematically-perfect impersonations which are engineered to fool people. The point of my comment is that fair use is a thin sliver of what you can do with the tech, but it shouldn't be stamped out entirely.
I am really thinking of, say, the Joe Rogan / Donald Trump comedic deepfakes. It might be fine under American constitutional law to say that those things must be made so that AI Rogan / AI Trump always refer to each other in those ways, to make it very clear to listeners. It is a distinctly non-libertarian solution, but it could be "necessary and proper" because of the threat to our social and political knowledge. But as a general principle, those comedic deepfakes are works of human political expression, aided by a fairly simple computer program that any CS graduate can understand, assuming they earned their degree honestly and are willing to do some math. It is constitutionally icky (legal term) to go after those people too harshly.
I think you and I have the same concerns about balancing damage to the societal fabric against protecting honest speech.
The answer is without legislation you are far more subject to whether a judge feels like changing the law.
You can see another comment here, where I acknowledge I communicate badly, since I’ve had to clarify multiple times what I was intending: >>40424566
This is the paragraph that was intended to narrow what I was talking about:
> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
That said, I definitely should’ve said “those Effective Altruists” in the first paragraph to more clearly communicate my intent.
Public figures own their likeness and control its use. Not to mention that in this case OA is playing chicken with studios as well. Not a great time to do so, given their stated hopes of supplanting 99% of existing Hollywood creatives.
Maybe there's a way to do that right. I suppose like any other philosophy, it ends up reflecting the personalities and intentions of the individuals which are attracted to and end up adopting it. Are they actually motivated by identifying with and wanting to help other people most effectively? Or are they just incentivized to try to get rid of pesky deontological and virtue-based constraints like empathy and universal rights?