mainstream adoption hasn't been that great - now there's drama
> Though the most successful founders are usually good people, they tend to have a piratical gleam in their eye. They're not Goody Two-Shoes type good. Morally, they care about getting the big questions right, but not about observing proprieties. That's why I'd use the word naughty rather than evil. They delight in breaking rules, but not rules that matter. This quality may be redundant though; it may be implied by imagination.
> Sam Altman of Loopt is one of the most successful alumni, so we asked him what question we could put on the Y Combinator application that would help us discover more people like him. He said to ask about a time when they'd hacked something to their advantage—hacked in the sense of beating the system, not breaking into computers. It has become one of the questions we pay most attention to when judging applications.
"What We Look for in Founders", PG
https://paulgraham.com/founders.html
I think the more powerful you become, the less endearing this trait is.
But it must feel pretty fucking weird and violatory when you spend your entire life thinking about how you are going to deliver certain lines and that’s your creative body of work, and then for someone to just take that voice and apply it to any random text that can be generated?
I get why she wouldn’t want to let it go.
In a way it is similar to how a developer might feel about their code being absorbed, generalized, and then regurgitated almost verbatim as part of some AI responses.
But in the case of voice it’s even worse, as the personality impression is contained in the slightest utterance… whereas a style of coding or a piece of code might be less recognizable, and generally applicable to such a wide range of productions.
Voice is the original human technology. To try to take that from someone without their consent is a pretty all-encompassing grab.
[0]: https://www.mercurynews.com/2021/07/20/how-the-doodle-god-un...
[1] https://variety.com/2024/digital/news/openai-pulls-scarlett-...
Stuff from her comes via press agents, which is generally sent directly to reporters.
Open for business, Open to suggestions, and Open season for any lawyers that want a piece of the sizable damages.
The wink wink at creating an AI girlfriend is so bizarre
I guess we know who their target user base is
Not a bad call for someone already rich
To suggest that Johansson’s only appeal is to the opposite gender (and ‘lonely’ ones at that!) is, I think, myopic and reductive of her impact.
She has the resources to fight back and make an example of them, and they have the resources to make it worthwhile.
Of course, Twitter continues to bring people with big egos to their own downfall.
- OpenAI approached Scarlett last fall, and she refused.
- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)
- Not receiving a response, OpenAI demos the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.
- When Scarlett’s counsel asked for an explanation of how the “Sky” voice was created, OpenAI yanked the voice from their product line.
Perhaps Sam’s next tweet should read “red-handed”.
(Which wasn't even the taxi drivers, although they were plenty bad enough on their own.)
However Sam tweeted "her" which is literally the movie where she voices the AI girlfriend. And then made a synthetic replica of her voice the star of their new demo against her wishes.
It's pretty direct what he is pitching at.
[1] https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
[2] https://www.hackingbutlegal.com/p/statement-by-annie-altman-...
https://en.m.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.
Bette Midler successfully sued Ford for imitating her voice with a sound-alike in a commercial.
Then also:
https://casetext.com/case/waits-v-frito-lay-inc
Tom Waits successfully sued Frito Lay for using an imitator without approval in a radio commercial.
The key seems to be that if someone is famous and their voice is distinctly attributable to them, there is a case. In both of these cases, the artists in question were also solicited first and refused.
It is quite possible that OpenAI has synthesized the voice from SJ material.
However, if OpenAI can produce the woman who recorded the current voice, and her voice is nearly identical to that of SJ, would that mean OpenAI had done something wrong?
Does SJ, because she is a celebrity, hold a "patent"-like right to sounding like her?
The more likely scenario is that they have hired a person and told her to try and imitate how SJ sounds.
What is the law on something like that?
> In a novel case of voice theft, a Los Angeles federal court jury Tuesday awarded gravel-throated recording artist Tom Waits $2.475 million in damages from Frito-Lay Inc. and its advertising agency.
> The U.S. District Court jury found that the corn chip giant unlawfully appropriated Waits’ distinctive voice, tarring his reputation by employing an impersonator to record a radio ad for a new brand of spicy Doritos corn chips.
https://www.latimes.com/archives/la-xpm-1990-05-09-me-238-st...
but it doesn’t matter, because how he may have marketed it in a 140 character tweet does not encompass the entirety of how it could be used, of course
It's not just successful companies though. There is a bit of ego necessary in a founder that makes them think their idea or their implementation of a thing is better, so that it needs to be its own company. Sometimes, though, they even get caught up in their own reality distortion fields, pushing obviously bad ideas, or ideas implemented badly due to their own arrogance, that ultimately fail.
To them*
Which is the whole problem. These narcissistic egotists think they, alone, individually, are capable of deciding what's best not just for their companies but for humanity writ large.
A company can't take a photo from your Facebook and plaster it across an advertisement for their product without you giving them the rights to do that.
And if you're a known public figure, this includes lookalikes and soundalikes as well. You can't hire a ScarJo impersonator that people will think is ScarJo.
This is clearly a ScarJo soundalike. It doesn't matter whether it's an AI voice or clone or if they hired someone to sound just like her. Because she's a known public figure, that's illegal if she hasn't given them the rights.
(However, if you generate a synthetic voice that just happens to sound exactly like a random Joe Schmo, it's allowed because Joe Schmo isn't a public figure, so there's no value in the association.)
Taking taxis 15 years ago was an absolute scammy shitty experience and it’s only marginally better now thanks to an actual competitive marketplace
And in terms of collaboration potential... OpenAI is a big draw for businesses and a subset of tech enthusiasts, but I don't think artists in any industry are dying to collaborate with them.
> They delight in breaking rules, but not rules that matter.
The question becomes "what rules matter?". And the answer inevitably becomes "only the ones that work in my favor and/or that I agree with".
I think someone trying to defend this would go "oh come on, does it really matter if a rich actress gets slightly richer?" And no, honestly, it doesn't matter that much. Not to me, anyway. But it matters that it establishes (or rather, confirms and reinforces) a culture of disregard and makes it about what you think matters, and not about what someone else might think matters about the things in their life. Their life belongs to them, a fact that utopians have forgotten again and again everywhere and everywhen. And once all judgement is up to you, if you're a sufficiently ambitious and motivated reasoner (and the kind of person we're talking about here is), you can justify pretty much whatever you want without that pesky real-world check of a person going "um actually no I don't want you to do that".
Sometimes I think anti-tech takes get this wrong. They see the problem as breaking the rules at all, as disrupting the status quo at all, as taking any action that might reasonably be foreseen to cause harm. But you do really have to do that if you want to make something good sometimes. You can't foresee every consequence of your actions - I doubt, for example, that Airbnb's founders were thinking about issues with housing policy when they started their company. But what differentiates behavior like this from risk-taking is that the harm here is deliberate and considered. Mistakes happen, but this was not a mistake. It was a choice to say "this is mine now".
That isn't a high bar to clear. And I think we can demand that tech leaders clear it without stifling the innovation that is tech at its best.
I mention this specifically because I remember Marc Andreessen commenting on something similar on Lex Fridman's podcast, something along the lines of getting "those creative people" together to build on AI.
If you imitate Darth Vader, I don't think James Earl Jones has as much of a case for likeness as the Star Wars franchise does.
And not see how over the top it is... cmon.
Altman and OpenAI will walk over everyone here without any difficulty if they decide to take what's ours.
OpenAI pulls Johansson soundalike Sky’s voice from ChatGPT - >>40414249 - May 2024 (96 comments)
If so, I suspect they’ll be okay in a court of law — having a voice similar to a celebrity isn’t illegal.
It’ll likely cheese off actors and performers though.
[1] https://www.forbes.com/sites/roberthart/2024/05/20/openai-sa...
OpenAI is trying to demonstrate how it's so trustworthy, and is always talking about how important it is to be trustworthy when it comes to something as important and potentially dangerous as AI.
And then they do something like this...??
I literally don't understand how they could be this dumb. Do they not have a lawyer? Or do they not tell their corporate counsel what they're up to? Or just ignore the counsel when they do?
The degree of similarity doesn't matter. There was nothing fair-use about this voice, and that is exactly why OpenAI yanked it and indirectly admitted to cloning her voice.
[0] >>38154733
https://www.hollywoodreporter.com/business/business-news/bac...
If you just want ScarJo's (or James Earl Jones') voice, you need the rights from them. Period.
If you want to reuse the character of her AI bot from the movie (her name, overall personality, tone, rhythm, catchphrases, etc.), or the character of Darth Vader, you also need to license that from the producers.
And also from ScarJo/Jones if you want the same voice to accompany the character. (Unless they've sold all rights for future re-use to the producers, which won't usually be the case, because they want to be paid for sequels.)
They have similar voices, but SJ has more bass and rasp.
And if it's true that OpenAI hired a different actor, then this should basically be case closed.
The voice of Sky (assuming that's the same as the demo video), sounds like a run of the mill voice actor tbh. Great, but not that interesting or unique.
New voice mode is a speech predicting transformer. "Voice Cloning" could be as simple as appending a sample of the voice to the context and instructing it to imitate it.
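If the new voice mode really is an audio-token predictor, the "cloning is just context" idea can be sketched in a few lines. To be clear, this is a toy stand-in, not a real speech model: `tokenize_audio` and `speak` are made-up names, and "style" here is just a case pattern standing in for timbre. The point is only that no weights change; the voice sample lives in the prompt.

```python
# Toy sketch of voice-prompted generation: the "voice" is carried entirely by
# tokens prepended to the context, and the model continues in that style.

def tokenize_audio(sample: str) -> list[str]:
    """Stand-in for an audio codec: one token per character."""
    return list(sample)

def speak(context: list[str], text: str) -> list[str]:
    """'Continue' in the style carried by the context tokens.

    Here style is reduced to a single feature: whether the voice sample
    is mostly uppercase. A real model would pick up timbre, prosody, etc.
    """
    shouty = sum(c.isupper() for c in context) > len(context) / 2
    return [c.upper() if shouty else c.lower() for c in text]

# "Clone" a voice by prefixing the context with a sample of it:
voice_sample = tokenize_audio("HELLO THERE")      # an all-caps "voice"
cloned = "".join(speak(voice_sample, "any random text"))
```

No fine-tune, no training run: swap the sample in the prompt and the "voice" changes, which is why a two-day turnaround wouldn't by itself be surprising under this theory.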
Altman and Murati are world-class grifters but until now they were stealing from print media and digital artists. Now they’re clashing with some of the most litigious industries with the deepest pockets. They’re not going to win this one.
If I thought I had a great idea, I would want people to try to poke holes in it. Yet founders almost universally seem to be incredibly sensitive and insecure about their ideas.
Still live for me? Unless the Sky I’m getting is a different one?
I'll ask the devil's advocate / contrarian question: How big a slice of the human voice space does Scarlett lay a claim to?
The evidence would be in her favor in a civil court case. OTOH, a less famous woman's claim that any given synthesized voice sounds like hers would probably fail.
Contrast this with copyrighted fiction. That space is dimensionally much bigger. If you're not deliberately trying to copy some work, it's very unlikely that you'll get in trouble accidentally.
The closest comparison is the Marvin Gaye estate's case. Arguably, the estate laid claim to a large fraction of what is otherwise a dimensionally large space. https://en.wikipedia.org/wiki/Pharrell_Williams_v._Bridgepor...
They clearly thought it was close enough that they asked for permission, twice. And got two no’s. Going forward with it at that point was super fucked up.
It’s very bad to not ask permission when you should. It’s far worse to ask for permission and then ignore the response.
Totally ethically bankrupt.
If they accidentally hired someone who sounds identical, that's not illegal. But if they intended to, even if it is a pretty poor imitation, it would be illegal because the intent to do it was there.
A court of law would be looking for things like emails about what sort of actress they were looking for, how they described that requirement, how they evaluated the candidate and selected her, and of course, how the CEO announced it alongside a movie title Scarlett starred in.
I think so but that could just be me.
it takes more than money to fuel these types, and they would have far better minders and bumpers if the downside outweighed the upside. they aren’t stupid, just addicted.
musk was addict smart, owned up to his proclivities and bought the cartel.
Edit: to clarify, since it is not an exactly identical voice, or even that close, they can plausibly deny it, and we never knew what their intention was.
But in this case, they have clearly created the voice to represent Scarlett's voice to demonstrate the capabilities of their product in order to get marketing power.
"when voice is sufficient indicia of a celebrity's identity, the right of publicity protects against its imitation for commercial purposes without the celebrity's consent."
Them trying (and failing) to negotiate the rights, and then them vaguely attempting again 2 days before launch, and fucking Altman tweeting a quite obvious reference to a movie in which SJ is the voice of an AI girlfriend - leans very very strongly in the direction of "active and intentional imitation".
Anybody trying to claim some accidental or coincidental similarity here has a pretty serious credibility hole they need to start digging themselves out of.
Maybe only when the director's instructions are "I want you to sound like XYZ".
Don't hear any arguments on how this is fair-use. (It isn't)
Why? Because everyone (including OpenAI) knows it clearly isn't fair-use even after pulling the voice.
I'm guessing if any of the Harry Potter actors threatened the hobbyist with legal action the video would likely come down, though I doubt they would bother even if they didn't care for the video.
> Why?
Because it's a right of personality issue, not copyright, and there is no fair use exception (the definition of the tort is already limited to a subset of commercial use which makes the Constitutional limitation that fair use addresses in copyright not relevant, and there is no statutory fair use exception to liability under the right of personality.)
When is it infringing to make something that looks or sounds like somebody famous? I mean, there are only so many ways a human voice can sound or a face can look. At what point are entire concepts locked down just because somebody famous exists, or existed, who pattern-matches?
it doesn't seem like principles should matter. but then the bill of rights doesn't seem like it should matter either if you were to cold read the constitution (you might be like - hmm, kinda seems important maybe...).
it compounds culturally over time though. principles ^ time = culture.
"Audacious, Thoughtful, Unpretentious, Impact-driven, Collaborative, and Growth-oriented."
https://archive.is/wLOfC#selection-1095.112-1095.200
maybe "thoughtful" was the closest (and sam is apologetic and regretful and transparent - kudos to him for that). but it's not that clear without a core principle around responsibility. you need that imho to avoid losing trust.
Why would people not want laws? The answer is so they can do the things that the laws prevent.
This is POSIWID territory [0]. "The purpose of a system is what it does". Not what it repeatedly fails to live up to.
What was the primary investment purpose of Uber? Not any of the things it will forever fail to turn a profit at. It was to destroy regulations preventing companies like Uber doing what they do. That is what it succeeded at.
The purpose of OpenAI is to minimise and denigrate the idea of individual human contributions.
[0] https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...
That said, Krazam covered this topic well already https://www.youtube.com/watch?v=KiPQdVC5RHU
For example, a car company approached the band Sigur Rós about including some of their music in a car commercial. Sigur Rós declined. A few months later the commercial aired with a song that sounds like an unreleased Sigur Rós song, but really they just paid a composer to make something that sounds like Sigur Rós but isn't. So maybe OpenAI just had a random lady with a voice similar to Scarlett's do the recording.
Taking down the voice could just be concern for bad press, or trying to avoid lawsuits regardless of whether you think you are in the right or not. Per this* CNN article:
> Johansson said she hired legal counsel, and said OpenAI “reluctantly agreed” to take down the “Sky” voice after her counsel sent Altman two letters.
So, Johansson's lawyers probably said something like "I'll sue your pants off if you don't take it down". And then they took it down. You can't use that as evidence that they are guilty. It could just as easily be the case that they didn't want to go to court over this even if they thought they were legally above board.
* https://www.cnn.com/2024/05/20/tech/openai-pausing-flirty-ch...
You seem to be misunderstanding the situation here. They wanted ScarJo to voice their voice assistant, and she refused twice. They also independently created a voice assistant which sounds very similar to her. That doesn't mean they thought they had to ask permission for the similar voice assistant.
Saw some other posters expressing this view who deleted their posts after getting downvoted, lol.
One thing these trained voices make clear is that it's a tts engine generating ChatGPT-4o's speech, same as before. The whole omni-modal spin suggesting that the model is natively consuming and generating speech appears to be bunk.
Is there a distinction?
Are they trying to make it sound like Her, or SJ? Or just trying to go for a similar style? i.e. making artistic choices in designing their product
Note: I've never watched the movie.
Yeah. _That_ well-known, completely rational, and unquestionably incorruptible institution.
I would bet Altman has been to more teenage sex parties and paid for more holidays with SC judges than Scarlett has... :sigh:
> But OpenAI's chief technology officer, Mira Murati, has said that GPT-4o's voice modes were less inspired by Her than by studying the "really natural, rich, and interactive" aspects of human conversation, The Wall Street Journal reported.
People made fun of Murati when she froze after being asked what Sora was trained on. But behavior like that indicates understanding that you could get the company sued if you said something incriminating. Altman just tweets through it.
[1] https://arstechnica.com/tech-policy/2024/05/openai-pauses-ch...
OP seems to be on the "They secretly trained on her voice" train. The only reason "Two days!" would be damning is if a finetune were required to replicate ScarJo's voice. In that sense, it's much better.
>(C) it's extraordinarily reckless to go from zero to big-launch feature in less than two days.
Open AI have launched nothing. There's no date for new voice mode other than, "alpha testing in the coming weeks to plus users". No-one has access to it yet.
I mean, I know he has hundreds of blind followers, but good Lord, you would think that the man, with all his years of experience, had some sense to introspect about what he is trying to achieve vs. how he is going about it.
Money really does blind all our senses, doesn't it?
That said, the timeline she lays out is damning indeed.
That is what matters. OWNERSHIP over her contributions to the world.
The fact that they reached out to her multiple times and insinuated it was supposed to sound like her with Sam's "her" tweet makes a pretty clear connection to her. Without that they'd probably be fine.
Bette Midler sued Ford under very similar circumstances and won.
As for why it's bad, it's because they set down the precedent of wanting specifically Scarlett Johansson's voice, got declined, doubled down, got declined again, and then went ahead and did it anyway. They can say in their own defense that it's some other voice actress who sounds similar; ok, so produce that name, tell us who she is.
Absent that, it's Johansson's voice, clipped from movies and interviews and shows and whatever.
Just find someone who sounds like her, then hire them for the rights to their voice.
As are disclaimers that celebrity voices are impersonated when there is additional context which makes it likely that the voice would be considered something other than a mere soundalike, like direct reference to a work in which the impersonated celebrity was involved as part of the same publicity campaign.
And liability for commercial voice appropriation, even by impersonation, is established law in some jurisdictions, including California.
There's not a lot of precedent around voice impersonation, but there is for a very, very similar case against Ford
There's an ocean of difference between mimicking the style of someone's art in an original work, and literally cloning someone's likeness for marketing/business reasons.
You can hire someone to make art in the style of Taylor Swift, that's OK.
You can't start selling Taylor Swift figurines by the same principle.
What Sam Altman did, figuratively, was give out free T-shirts featuring a face that is recognized as Taylor Swift by anyone who knows her.
That is...not a pretty picture. We desperately need someone else at the helm of OpenAI.
An issue with voice actors having their voice stolen by AI models/voice cloning tech is that they have no legal standing because their performance is owned by their client, and therefore no ownership. ScarJo may not have standing, depending on the contract (I suspect hers is much different than typical VA). It might have to be Annapurna Pictures that sues OpenAI instead.
Forbes had a good story about performer rights of voices: https://www.forbes.com/sites/rashishrivastava/2023/10/09/kee...
IANAL of course.
This reads like “we got caught red handed” and doing the bare minimum for it to not appear malicious and deliberate when the timeline is read out in court.
Yes, that would be a copyright violation on top of everything else.
Great idea though!
I'm going to start selling Keanu Reeves T-Shirts using this little trick.
See, I'm not using Keanu's likeness if I don't label it as Keanu. I'm just going to write Neo in a Tweet, and then say I'm just cloning Neo's likeness.
Neo is not a real person, so Keanu can't sue me! Bwahahaha
There is no doubt that the hired actor was an impersonator; this was explicitly stated by scama himself.
There is a major difference between parodying someone by imitating them while clearly and almost explicitly being an imitation; and deceptively imitating someone to suggest they are associated with your product in a serious manner.
Nevertheless. This is still incredibly embarrassing for OpenAI. And totally hurts the company’s aspiration to be good for humanity.
We know nothing about their offer to her. Could have just been a bad deal
Your argument may be stronger if OpenAI said something like “the movie studio owns the rights to this character’s likeness, so we approached them,” but it’s not clear they attempted that.
I suspect a video avatar service that looked exactly like her would fall afoul of fair use as well. Though an image gen that used some images of her (and many others) to train and spit out generic "attractive blonde woman" is fair use in my opinion.
I have no idea if they really used her voice, or it is a voice that just sounds like her to some. I'm just saying openai's behavior isn't a smoking gun.
But it kind of looks like they released it knowing they couldn't defend it in court which must seem pretty bonkers to investors.
I mean, why not actually compare the voices before forming an opinion?
https://www.youtube.com/watch?v=SamGnUqaOfU
https://www.youtube.com/watch?v=vgYi3Wr7v_g
-----
Answer: because they knew they needed permission, after working so hard to associate with Her, and they hoped, in traditional tech fashion, that if they moved fast and broke things enough, everyone would have to reshape around OA's wants rather than around the preexisting rights of the humans involved.
If that box wasn’t on your bingo card, I’m sorry; it’s basically the center/free box at this point.
"Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice."
https://openai.com/index/how-the-voices-for-chatgpt-were-cho...
I'm not familiar with the specifics of how AI models work, but doesn't the ability shown in some of the demos rule out what you've said above? E.g. the speeding up and slowing down of speech and the sarcasm don't seem possible if TTS were a separate component.
She may have something only if it turns out that the training set for that voice is composed of some recordings of her (the person, not the movie), which I highly doubt and is, unfortunately, extremely hard to prove. Even that wouldn't be much, though, as it could be ruled a derivative work, or something akin to any celebrity impersonator. Those guys can even advertise themselves using the actual name of the celebrities involved and it's allowed.
Me personally, I hope she takes them to court anyway, as it will be an interesting trial to follow.
An interesting facet is that copyright law goes to the substance of the copyrighted work; in this case, because of the peculiarities of her character in "Her", she is pretty much only voice. I wonder if that makes things look different in the eyes of a judge.
Who would have thought we would be discussing voice theft someday.
https://nwn.blogs.com/nwn/2024/04/adam-schiff-ai-video-games...
Usually those people are considered sociopaths.
Maybe it's time to ask the employees of OpenAI who fought to get Altman back, How this behavior is compatible with their moral standards or whether money is the most important thing.
I don't know guys, the super hyped up company with next-gen technology might just be using crime, underhanded tactics, and overstating their capabilities to pull in the thing we all love... and it's not each other or your friend's mother!
It's money!
OpenAI did nothing wrong.
The movie industry does the same thing all the time. If an actor/actress says no, then you find someone else who can play the same role.
The movie industry does this all the time.
Johansson is probably suing them so they're forced to remove the Sky voice while the lawsuit is happening.
I'm not a fan of Sam Altman or OpenAI but they didn't do anything wrong here.
>"I don't know about the voice. I actually had to go and listen to Scarlett Johansson's voice," Murati said.
Seems like a big part of Mira’s job is not knowing things. How is no one questioning how she landed a VP job at OpenAI 2 years after being an L5 PM?
This mode works entirely differently from what Open AI demoed a few days ago (the new voice mode) but both seem to utilize the same base sky voice. All this uproar is from the demos of new sky which sounds like old sky but is a lot more emotive, laughs, a bit flirty etc.
Are companies better off not even trying to negotiate to begin with?
If they really hired someone who sounds just like her, it's fair game IMO. Johansson can't own the right to a similar voice, just like many people can have the same name. I think if there really was another actress and she just happens to sound like her, then it's really ok. And no, I'm not a fan of Altman (especially his Worldcoin, which I view as a privacy disaster).
I mean, imagine if I happened to have a similar voice to a famous actor, would that mean that I couldn't work as a voice actor without getting their OK just because they happen to be more famous? That would be ridiculous. Pretending to be them would be wrong, yes.
If they hired someone to change their voice to match hers, that'd be bad. Yeah. If they actually just AI-cloned her voice that's totally not OK. Also any references to the movies. Bad.
If they had just screened a bunch of voice actors and chosen the same one no one would care (legally or otherwise).
But if it isn't, then it is more like selling a figurine called Sally that happens to look a lot like Taylor Swift. Sally has a right to exist even if she happens to look like Taylor Swift.
Has there ever been an up and coming artist who was not allowed to sell their own songs, because they happened to sound a lot like an already famous artist? I doubt it.
As to whether she owns the rights of that performance or somebody else, we'd have to read the contract; most likely she doesn't, though.
Because they don't think about the consequences, and don't want to. Better to retreat into the emotional safety of techno-fantasy and feeling like you're on the cutting edge of something big and new (and might make some good money in the process). Same reason people got into NFTs.
And would they need to use a voice actor when there is a substantial body of movie dialogue and interviews? I'd be surprised if they'd bothered.
All the while many people believe them at every step.
There's no way "you" (the people that engage in these tactics) believe anyone is gullible enough not to see what's happening. You either believe yourselves to be exceedingly clever, or everyone else to have the intelligence of a toddler.
With the gumption some tech "leaders" display, maybe both.
If you have to say "technically it's not" 5x in a row to justify a position in a social context just short-circuit your brain and go do something else.
Where are the signs or symbols tying Scarlett to the OpenAI voice? I don't think a single-word, contextless message on a separate platform that 99% of OpenAI users will not see is significant enough to form that connection in users' heads.
They likely have a legal position which is defensible.
They're much more worried that they don't have a PR position which is defensible.
What's the point of winning the (legal) battle if you lose the war (of public opinion)?
Given the rest of their product is built on apathy to copyright, they're actively being sued by creators, and the general public is sympathetic to GenAI taking human jobs...
... this isn't a great moment for OpenAI to initiate a long legal battle, against a female movie actress / celebrity, in which they're arguing how her likeness isn't actually controlled by her.
Talk about optics!
(And I'd expect they quietly care much more about their continued ability to push creative output through their copyright launderer, than get into a battle over likeness)
You can cheer on "forces" like Uber all you like but I would prefer it if progress happened without criminal deception:
https://www.theguardian.com/news/2022/jul/10/uber-files-leak...
I don't see how anyone can read this and think the uber app is a net positive.
Imitating a movie AI was a cool idea and imitation was the only legal way to do it.
Do you pull your hair when companies advertise with Elvis impersonators?
Nobody was significantly harmed by this. I can guarantee the rich people that use Hacker News consume things produced under much less savory standards than imitating a celebrity.
Nestlé is strong but you pull the plug at THIS?
Pg has done worse and he owns this forum.
Have some perspective.
This doesn't make any sense. If it's a speech-to-speech transformer, then 'training' could just be a sample at the beginning of the context window. Or it could be one of several voices used for the instruct-tuning or RLHF process. Either way, it doesn't debunk anything.
OpenAI has gone the "it's easier to ask forgiveness than permission" route, and it seemed like they might get away with that, but if this results in a lot more stories like this they'll risk running afoul of public opinion and future legislation turning sharply against them.
If someone licenses an impersonator's voice and it gets very close to the real thing, that feels like an impossible situation for a court to settle and it should probably just be legal (if repugnant).
(and given the timeline ScarJo laid out in her Twitter feed, I'd be inclined to vote to convict at the present moment)
Whether you think it sounds like her or not is a matter of opinion, I guess. I can see the resemblance, and I can also see the resemblance to Jennifer Lawrence and others.
What Johansson is alleging goes beyond this, though. She is alleging that Altman (or his team) reached out to her (or her team) asking her to lend her voice; she was not interested; then she was asked again just two days before GPT-4o's announcement and declined again. Now there's a voice that, in her opinion, sounds a lot like her.
Luckily, the legal system is far more nuanced than just listening to a few voices and comparing them mentally to other voices individuals have heard over the years. They'll be able to figure out, as part of discovery, what led to the Sky voice sounding the way it does (intentionally using Johansson's likeness? coincidence? directly trained off her interviews/movies?), whether OpenAI were willing to slap Johansson's name onto the existing Sky during the presentation, whether the "her" tweet combined with the Sky voice was supposed to draw the subtle connection... This allegation is just the beginning.
i.e. intent matters.
In this case, since the other voice actor has a clearly different voice than SJ, it seems like their intent is to just copy the general 'style' of the voice, and not SJ's voice itself. Speculative though.
No one is harmed.
How can you say that when they literally approached SJ for her voice, and then asked the voice actor to reproduce SJ's voice?!
This is different from how voice-cloning models like ElevenLabs' work.
I wonder if they deliberately steered towards this for more marketing buzz?
This is a civil issue, and actors get broad rights to their likeness. Kim Kardashian sued Old Navy for using a look-alike actress in an ad; Old Navy chose to settle, which makes it appear like "the real actress wasn't involved in any way" may not be a perfect defense. The timeline makes it clear they wanted it to sound like Scarlett's voice; the actual mechanics of how they got the AI to sound like that is only part of the story.
Maybe Altman lands in jail or files for bankruptcy after all the dust settles.
It’s shocking to me how people cannot see this.
The only surprise here is that they didn’t think she’d push back. That is what completes the multilayered cosmic and dramatic irony of this whole vignette. Honestly feels like Shakespeare or Arthur Miller might have written it.
And hasn't OpenAI recently shown that they can pull off a commercial coup d'état, unscathed?
Why would they not simply also take the voice of some actress? That's small potatoes.
No one is going to push back against OpenAI meaningfully.
People are still going to use ChatGPT to cheat on their homework, to phone-in their jobs, and to try to ride OpenAI's coattails.
The current staff have already shown they're aligned with the coup.
Politicians and business leaders befriend money.
Maybe OpenAI will eventually settle with the actress, for a handful of coins they found in the cushions of their trillion-dollar sofa.
Seems pretty reckless to not have alternatives just in case Scarlett refused.
Asking for her vocal likeness is completely in line with just wanting the association with "Her" and the big PR hit that would come along with that. They developed voice models on two different occasions and hoped twice that Johansson would allow them to make that connection. Neither time did she accept, and neither time did they release a model that sounded like her. The two-day run-up isn't suspicious either, because we're talking about a general audio2audio transformer here. They could likely fine-tune it (if even that is necessary) on her voice in hours.
I don't think we're going to see this going to court. OpenAI simply has nothing to gain by fighting it. It would likely sour their relations with a bunch of media big-wigs and cause them bad press for years to come. Why bother when they can simply disable Sky until the new voice mode releases, allowing them to generate a million variations of highly expressive female voices?
you are just making that up afaict.
> How can you say that when they literally approached SJ for her voice
Almost by definition, SJ's voice will match the style of 'Her', at least for a while (*). So why not ask SJ first?
(*) voices change significantly over time.
I read that whole article. I didn't know about the intentional strategy to send Uber drivers into likely violent situations. That's fucked up.
Most of that article seemed to focus on Uber violating laws about operating taxi services, though. Sounds good to me? Like there's nothing intrinsically morally correct about taxi service operation laws. This sort of proves my point too. Some company was going to have to fight through all that red tape to get app-based taxis working, and maybe it's possible to do that without breaking the law, but if it's easier to just break the law and do it, then whatever. I can't emphasize enough how much I don't care about those particular laws being broken, and maybe if I knew more about them I'd even be specifically happy that those laws were broken.
It would be interesting to hear the details, but what OpenAI seem to have done is build a neural-net-based speech synthesizer which is similarly flexible because it is generating the audio itself (not stitching together samples), conditioned on the voice ("Sky", etc.) it is meant to be mimicking. Dialing the emotion up/down is basically affecting the prosody and intonation. The singing is mostly extending vowel sounds and adding vibrato. In the demo, Brockman refers to the "singing voice", so it's not clear if they can make any of the 5 (now 4!) voices sing.
In any case, it seems the audio is being generated by some such flexible TTS, not just decoded from audio tokens generated by the model (which anyway would imply there was something, basically a TTS, converting text tokens to audio tokens). They also used the same 5 voices in the previous ChatGPT, which wasn't claiming to be omnimodal, so maybe it's basically the same TTS being used.
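To make the parent's point concrete: "conditioned on the voice" means the synthesizer takes a speaker embedding as an extra input alongside the content. This is a deliberately toy sketch with entirely hypothetical names and math (real systems use learned neural decoders); it only illustrates the interface shape, not any actual OpenAI component:

```python
import numpy as np

def synthesize(text_tokens, voice_embedding, emotion=0.5):
    """Toy conditioned synthesizer: every output frame depends on BOTH
    the content (token) and the speaker identity (embedding), plus a
    scalar prosody knob. Purely illustrative, not a real model."""
    frames = []
    for tok in text_tokens:
        # broadcasting a scalar token against the embedding vector;
        # tanh just keeps the fake "audio" bounded like a waveform
        frame = np.tanh(tok * voice_embedding + emotion)
        frames.append(frame)
    return np.stack(frames)

# hypothetical fixed-size embedding standing in for a voice like "Sky"
sky = np.random.default_rng(0).normal(size=8)
audio = synthesize([0.1, 0.5, 0.9], sky)
print(audio.shape)  # (3, 8): one frame per token, conditioned on the voice
```

Swapping `sky` for a different embedding changes every frame while the text stays fixed, which is the sense in which one synthesizer can serve all 5 voices.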
That claim could very well be true. The letter requested information on how the voice was trained - OpenAI may not want that can of worms opened lest other celebrities start paying closer attention to the other voices.
Scarlett Johansson's character in the movie is not Scarlett Johansson although her voice is very similar. I wouldn't say it's identical.
They didn’t come to an agreement to use her voice, they used her voice. And they obviously knew it was a problem because they went back to her like the night before trying to get her approval again.
The correct thing to do was NOT use her voice.
You don’t get to steal something, get all the benefit from it (the press coverage), and then say “oops never mind it was just a few hours you can’t sue us”.
Why don’t we try selling tickets to watch a Disney movie “just one time” and see how well that goes. I don’t think Disney’s lawyers will look at it and say “oh well they decided not to do it again.“
Does that mean if cosplayers dress up like some other character, they can use that version of the character in their games/media? I think it should be equally simple to settle. It's different if it's their natural voice. Even then, it brings into question whether they can use "doppelgangers" legally.
It’s not like Tom Waits ever wanted to hawk chips
https://www.latimes.com/archives/la-xpm-1990-05-09-me-238-st...
The discovery process may help figuring the intent - especially any internal communication before and after the two(!) failed attempts to get her sign-off, as well as any notes shared with the people responsible for casting.
It'll be interesting to see what happens if she sues and refuses to settle.
If someone clones a random person's voice for commercial purposes, the public likely has no idea whose voice it is. Consequently, it's just an acoustic voice.
If someone clones a famous media celebrity's voice, the public has a much greater chance of recognizing the voice and associating it with a specific person.
Which then opens a different question of 'Is the commercial use of the voice appropriating the real person's fame for their own gain?'
Add in the facts that media celebrities' values are partially defined by how people see them, and that they are often paid for their endorsements, and it's a much clearer case that (a) the use potentially influenced the value of their public image & (b) the use was theft, because it was taking something which otherwise would have had value.
Neither consideration exists with 'random person's voice' (with deference to voice actors).
* Defined as 'someone for whom there is an expectation that the general public would recognize their voice or image'
Hell they may sue on their own or join as another damaged party.
But ultimately I'm also not sure. There are some differences that courts could find important. I hope she sues and refuses to settle, so we can find out!
Their hubris will walk them right into federal prison for fraud if they’re not careful.
If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI
I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
Someone from OpenAI hired the agency that hired the voice talent (or talents) for the voice data. They sent them a brief explaining what they were looking for, followed by a metric ton of correspondence over samples and contracts and such.
If anywhere during those written communications anyone wrote “we are looking for a ScarlettJ imitator”, or words to that effect, that is not good for OpenAI. Similarly if they were selecting between options and someone wrote that one sample is more Johansson than another. Or if anyone at any point asked if they should clear the rights to the voice with Johansson.
Those are the discovery findings which can sink such a defense.
In the not so distant future, when the world's top AI models can generate endless accents and voices at will, the probability of one of those sounding just like you (and thousands of other people) will be high. It will be VERY high.
All this dealing with Hollywood and music industry and all the crap i've been reading about OpenAI trying to wiggle their way into those industries is absolute damn nonsense. What is Sama thinking?! GO BACK TO BEING NERDS AND STFU.
If you really believe you are going to create a real AGI, none of this is relevant. No one is going to thank you for creating something that can replicate what they value in seconds. Do it anyway.
And remember, STFU.
There have been several legal cases where bands have sued advertisers for copying their distinct sound. Here are a few examples:
The Beatles vs. Nike (1987): The Beatles' company, Apple Corps, sued Nike and Capitol Records for using the song "Revolution" in a commercial without their permission. The case was settled out of court.
Tom Waits vs. Frito-Lay (1988): Tom Waits sued Frito-Lay for using a sound-alike in a commercial for their Doritos chips. Waits won the case, emphasizing the protection of his distinct voice and style.
Bette Midler vs. Ford Motor Company (1988): Although not a band, Bette Midler successfully sued Ford for using a sound-alike to imitate her voice in a commercial. The court ruled in her favor, recognizing the uniqueness of her voice.
The Black Keys vs. Pizza Hut and Home Depot (2012): The Black Keys sued both companies for using music in their advertisements that sounded remarkably similar to their songs. The cases were settled out of court.
Beastie Boys vs. Monster Energy (2014): The Beastie Boys sued Monster Energy for using their music in a promotional video without permission. The court awarded the band $1.7 million in damages.
That actually seems like there may be a few people involved and one of them is a cowboy PM who said fuck it, ship it to make the demo. And then damage control came in later. Possibly the PM didn't even know about the asks for permission?
But did OpenAI make any claims about whose voice this is? Just because a voice sounds similar or familiar doesn't mean it's fraudulent.
Bette Midler was able to sue Ford Motor Co. for damages after they hired a sound-alike voice: https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co. Ford had acquired the rights to the song (which Midler didn't write).
> - Not receiving a response, OpenAI demos the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.
Voices are really hard for people to distinguish as being a certain person without priming, so really she's doing for free the advertising they were hoping she'd do for pay
So it’s legal to hire someone who sounds like SJ. And likely legal to create a model that sounds like her. But there will likely need to be some disclaimer saying it’s not her voice.
I expect that OpenAI’s defense will be something like “We wanted SJ. She said no, so we made a voice that sounded like her but wasn’t her.” It will be interesting to see what happens.
AI models exist to make up bullshit that fills a gap. When you have a conversation with any LLM it's merely autocompleting the next few lines of what it thinks is a movie script.
They changed the voice to intone like Scarlett Johansson's character. It's like they changed the song the voice was singing to one that lots of people recognise.
Bon mots apart, he really appears to have an innate capacity for betrayal.
http://law2.umkc.edu/faculty/projects/ftrials/communications...
They have no moat, they can't fix hallucinations, and people are starting to realize it's nowhere near as useful or close to AGI as he's been saying. If they hate him too, this ship is sunk.
What a bloody arrogant idiot.
That’s why she was the voice actor for the AI voice in Her.
Sam should be ashamed to have ever thought of ripping off anyone's voice, let alone done it and rolled it out.
They are building some potentially world-changing technology, but cannot rise above being basically creepy rip-off artists. Einstein was right about requiring improved ethics to meet new challenges, and also that we are not meeting that requirement.
sad to see
1. https://apnews.com/article/hollywood-ai-strike-wga-artificia...
And promoted it using a tweet naming the movie that Johansson performed in, for the role that prompted them to ask her in the first place.
You have to be almost deliberately naive to not see that they were attempting to use her vocal likeness in this situation. There’s a reason they immediately walked it back after the situation was revealed.
Neither a judge, nor a jury, would be so willingly naive.
If the voice was only trained on the voice of the character she played in Her, would she have any standing in claiming some kind of infringement?
The really concerning part here is that Altman is, and wants to be, a large part of AI regulation [0]. Quite the public contradiction.
[0] https://www.businessinsider.com/sam-altman-openai-artificial...
It's a "I know it when I see it" situation so it's not clear cut.
Maybe (maybe!) it’s worth it for someone like Johansson to take on the cost of that to vindicate her rights—but it’s certainly not the case for most people.
If your rights can only be defended from massive corporations by bringing lawsuits that cost hundreds of thousands to millions of dollars, then only the wealthy will have those rights.
So maybe she wants new legislative frameworks around these kind of issues to allow people to realistically enforce these rights that nominally exist.
For an example of updating a legislative framework to allow more easily vindicating existing rights, look up “anti-SLAPP legislation”, which many states have passed to make it easier for a defendant of a meritless lawsuit seeking to chill speech to have the lawsuit dismissed. Anti-SLAPP legislation does almost nothing to change the actual rights that a defendant has to speak, but it makes it much more practical for a defendant to actually exercise those rights.
So, the assumption that a call for updated legislation implies that no legal protection currently exists is just a bad assumption that does not apply in this situation.
Doesn't sound like they have that either.
He’s absolutely been the bad guy all along.
Like some intern’s idea to train the voice on their favorite movie.
And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.
This could be a well-planned opening move of a regulation gambit. But unlikely.
It was quickly apparent that text alone is a poor medium for the variety and scope of signals that could be communicated by these multimodal networks.
Many things that are legal are of questionable ethics. Asking permission could easily just be an effort for them to get better samples of her voice. Pulling the voice after debuting it is 100% a PR response. If there's a law that was broken, pulling the voice doesn't unbreak it.
> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
Frankly, if you’re going to make an “ends justify the means” moral argument, you need to do a lot of work to address how those arguments have gone horrifically wrong in the past, and why the moral framework you’re using isn’t susceptible to those issues. I haven’t seen much of that from Effective Altruists.
I was responding to someone who was specifically saying an EA might argue why it’s acceptable to commit a moral wrong, because the ends justify it.
So, again, if someone is using EA to decide how to direct their charitable donations, volunteer their time, or otherwise decide between moral goods, I have no problem with it. That specifically wasn’t the context I was responding to.
What's likely different is that GPT-4o can output the tonality instructions for text to speech now.
It's probably the same voice, but different instructions for generation. One was without tonal indicators, one with.
If in fact, that was the case, then OpenAI is not aligned with the statement they just put out about having utmost focus on rigor and careful considerations, in particular this line: "We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities." [0]
The replies to Altman's message showed readers did connect it to the film. And people noticed the voice sounded like Scarlett Johansson and connected it to the film when OpenAI introduced it in September.[1]
How do you believe Altman intended people to interpret his message?
[1] https://www.reddit.com/r/ChatGPT/comments/177v8wz/i_have_a_r...
Very interesting to see this there. Does anyone know how that could be legislated?
Yes, because we all know the high profile launch for a major new product is entirely run by the interns. Stop being an apologist.
What I wouldn't do is use anything that remotely sounds famous. And I would definitely not use someone that said "no thanks" beforehand. And I would under no circumstances send emails or messages suggesting staff create a voice that sounds like someone famous. Then, and only then, would I feel safe in marketing a fake voice.
1) Tom Waits vs Frito-Lay: Frito-Lay not only used a soundalike to Tom Waits, but the song they created was extremely reminiscent of "Step Right Up" by Waits.
2) Bette Midler vs. Ford Motor Company: Same thing - this time Ford literally had a very Midler-esque singer sing an exact Midler song.
3) Beastie Boys vs. Monster Energy: Monster literally used the Beastie Boys' music, because someone said "Dope!" when watching the ad and someone at Monster took that to mean "Yes you can use our music in the ad".
Does Scarlett Johansson have a distinct enough voice that she is instantly recognizable? Maybe, but, well, not to me. I had no clue the voice was supposed to be Scarlett's, and I think a lot of people who heard it also didn't think so either.
In the case of acquiring a likeness, if it's done legally you acquire someone else's likeness that happens to be shared with your target.
The likeness is shared and non-unique.
If your objective is to take someone's life, there is no other pathway to the objective but their life. With likeness that isn't the case.
This is a dangerous way of thinking about people who disagree with you, because once you decide somebody is stupid, it frees you from ever having to seriously weigh their positions again, which is a kind of stupidity all its own.
The general public doesn’t understand the details and nuances of training an LLM, the various data sources required, and how to get them.
But the public does understand stealing someone’s voice. If you want to keep the public on your side, it’s best to not train a voice with a celebrity who hasn’t agreed to it.
Couldn't, perhaps, one of the more famous people on Earth be responsible for "meaningfully" taking OpenAI to task for this? Perhaps even being the impetus for legislative action?
There’s also a really good chance this is in some way a deepfake. Would be interesting to see this get examined by courts.
That falls under copyright, trademarks, ...
It absolutely is if you've seen /Her/. It even nails her character's borderline-flirty cadence and tone in the film.
Her voice alone didn’t get her there — she did. That’s why celebrities are so protective about how their likeness is used: their personal brand is their asset.
There’s established legal precedent on exactly this—even in the case they didn’t train on her likeness, if it can reasonably be suspected by an unknowing observer that she personally has lent her voice to this, she has a strong case. Even OpenAI knew this, or they would not have asked in the first place.
Consider the hypothetical: EvilAI, Inc. would secretly like to piggyback on the success of Her. They hire Nancy Schmo for their training samples. Nancy just happens to sound mostly like Scarlett.
No previous negotiations, no evidence of intentions. Just a "coincidental" voice doppelganger.
Does Scarlett own her own voice more than Nancy owns hers?
Put another way: if you happen to look like Elvis, you're not impersonating him unless you also wear a wig and jumpsuit. And the human look-space is arguably much bigger than the voice-space.
FWIW, I would find it very surprising if you could get the low latency expressiveness, singing, harmonizing, sarcasm and interpretation of incoming voice through SSML -- that would be a couple orders of magnitude better than any SSML product I've seen.
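For readers unfamiliar with SSML (the W3C Speech Synthesis Markup Language the parent mentions), conventional TTS control looks roughly like this hand-written fragment; the specific prosody values are illustrative, not taken from any real product:

```xml
<speak>
  <!-- rate/pitch/volume are coarse, static knobs on a phrase -->
  <prosody rate="95%" pitch="+2st" volume="medium">
    Hey, how's it going?
  </prosody>
  <break time="300ms"/>
  <!-- contour sketches a pitch curve across the utterance -->
  <prosody contour="(0%,+0st)(50%,+4st)(100%,-2st)">
    Really? That's amazing!
  </prosody>
</speak>
```

Note that markup like this is authored ahead of time per utterance and can't react to incoming audio, which is the parent's point: low-latency sarcasm, singing, or responses to the user's tone would be far beyond what static SSML annotations deliver.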
Conman plain and simple.
When studios approach an actress A and she refuses, then another actress B takes the role, is that infringing on A's rights? Or should they just scrap the movie?
Maybe if they replicated a scene from the A's movies or there was striking likeness between the voices... but not generally.
But I’ve defended them from unfair criticism on more than a few occasions and I feel that of all the things to land on them about this one is a fairly mundane screwup that could be a scrappy PM pushing their mandate that got corrected quickly.
The leadership for the most part scares the shit out of me, and clearly a house-cleaning is in order.
But of all the things to take them to task over? There’s legitimately damning shit this week, this feels like someone exceeded their mandate from the mid-level and legal walked it back.
If a PM there didn’t say “fuck it ship it even without her permission” they’d probably be replaced with someone who would.
I expect the cost of any potential legal action/settlement was happily accepted in order to put on an impressive announcement.
But if they concocted a fake voice to sound as much like her as possible, that’s not really better.
Altman’s tweet, combined with previous statements Her is his favorite movie, and trying to secure rights twice looks really really damning.
> so really she's doing for free the advertising they were hoping she'd do for pay
They didn’t want her to advertise for them. They wanted to use her voice. Do you not see a difference?
I think the copyright industry wants to grab new powers to counter the infinite capacity of AI to create variations. But that move would kneecap the creative industry first; newcomers have no place in a fully copyrighted space.
It reminds me of how NIMBY blocks construction to keep up the prices. Will all copyright space become operated on NIMBY logic?
When the offer was declined by scarjo, they could still train on her works of art and just hire a soundalike to make recordings regardless of whether they used it during training.
Then, at release time - either they get the buzz of artist-licensed "Her" or they get the buzz /outrage/Streisand of unlicensed "Her". Even if they take it down, OpenAI benefits.
I feel like the folks who fear the tech are wrong. But when the supposed stewards do such a moustache-twirling announcement, it seems like maybe we do need some restraint.
If a trade group can't put some kind of goodwill measures in place, we will inevitably end up with ham-fisted legislation.
Because then the actual case would be fairly bizarre: an entirely separate person, selling the rights to their own likeness as they are entitled to do, is being prohibited from doing that by the courts because they sound too much like an already famous person.
EDIT: Also up front I'm not sure you can entirely discuss timelines for changing out technology here. We have voice cloning systems that can do it with as little as 15 seconds of audio. So having a demo reel of what they wanted to do that they could've used on a few days notice isn't unrealistic - and training a model and not using it or releasing it also isn't illegal.
For me to believe this was genius I'd have to see some actual response from Sam. From the outside looking in, it appears that he was caught with his pants down when Johansson said no and went ahead even though he was rejected a second time and obviously knew it was the wrong choice. There's no Streisand effect at play here; OpenAI already owned the news cycle with their 4o announcement and could have kept it quiet. But Sam just had to have his One More Thing, and now he's getting his just deserts.
OpenAI didn't just use a voice like Scarlett Johansson's. They used it in an AI system they wanted people to associate with AI from movies and the movie where Johansson played an AI particularly.[1][2]
Buckle in, go to court, and double-down on the fact that the public's opinion of actors is pretty damn fickle at the best of times - particularly if what you released was in fact based on someone you signed a valid contract with who just sounds similar.
Of course, this is all dependent on actually having a complete defense: you absolutely would not want to find Scarlett Johansson voice samples in file folders associated with the Sky model if it went to court.
If they tell the story of OpenAI, in a way that reaches people, that would be a triumph of the real artists, over the dystopian robo-plagiarists.
I love it already.
The real problem, now, is that they don't have a nice working voice anymore.
It sounds like Altman was personally involved in recruiting her. She said no and they took what they wanted anyway.
I think a lot of people are wondering about a situation (which clearly doesn’t apply here) in which someone was falsely accused of impersonation based on an accidental similarity. I have more sympathy for that.
But that’s giving OpenAI far more than just the benefit of the doubt: there is no doubt in this case.
You need to be honest about what it actually does then. Cherry picking the thing you don't like and ignoring the rest will bring you no closer to true understanding
They literally hired an impersonator, and it cost them 2.5 million (~6 million today).
https://www.latimes.com/archives/la-xpm-1990-05-09-me-238-st...
I’m not writing the guy a pass, he’s already been fired for what amount to ethics breaches in the last 12 months alone. Bad actor any way you look at it.
But I spent enough time in BigCo land to know stuff like this happens without the CEO’s signature.
I’d say focus on the stuff with clear documentary evidence and or credible first-hand evidence, there’s no shortage of that.
I get the sense this is part of an ambient backlash now that the dam is clearly breaking.
Of all the people who stand to be harmed by that leadership team, I think Ms. Johansson (of whom I am a fan) is more than capable of seeing her rights and privileges defended without any help from this community.
He's using trade secrets, copyright, patents, NDAs liberally.
This is not a principled stand, just opportunism.
Extremely reasonable position, and I'm glad that every time some idiot brings it up in the EA forum comments section they get overwhelmingly downvoted, because most EAs aren't idiots in that particular way.
I have no idea what the rest of your comment is talking about; EAs that have opinions about AI largely think that we should be slowing it down rather than speeding it up.
People who hate Hollywood? Most of that crowd hates tech even more.
* Because it would take the first news cycle to be branded as that
"…two separate people from my label told me that they had personally talked to Coolio… and that he told them that he was okay with the whole parody idea…Halfway into production, my record label told me that Coolio’s management had a problem with the parody, even though Coolio personally was okay with it. My label told me… they would iron things out — so I proceeded with the recording and finished the album."
https://www.vulture.com/2011/12/gangstas-parodist-revisiting...
> The new board should act
You mean like the last board tried? Besides the board was picked to be on Altman’s side. The independent members were forced out.
The worst of it is not that this one person is being ripped off (that's bad enough and I hope she gets some kind of resolution). The worst of it is that it shows the company and the people behind it who are making the big decisions are dishonest and unethical.
All the alleged "safety" experts in corporations and in government policy and regulators? All bullshit. The right way to read any of these "safety" laws and policies and regulators is that they are about ensuring the safety of the ruling class.
I did try to cabin my arguments to Effective Altruists that are making ends-justify-the-means arguments. I really don’t have a problem with people that are attempting to use EA to decide between multiple good outcomes.
I’m definitely not engaged enough with the Effective Altrusits to know where the plurality of thought lies, so I was trying to respond in the context of this argument being put forward on behalf of Effective Altruists.
The only part I’d say applies to all EA, is the brand taint that SBF has done in the public perception.
And really, how much worse would the demo have been if they hadn't cloned Johansson's voice, and instead used another unknown voice? If it was similarly flirty, we'd have fallen for it anyways.
Sam doesn’t care.
After the board threw him out of his own company, why would he allow that to happen again? With that, he now trusts far fewer people.
> Money really does blind all our senses, doesn't it?
That is why the cultishness was full on display last year when he was fired by the board.
Hiring someone with a voice you want isn't illegal; hiring someone with a voice you want because it is similar to a voice that someone expressly denied you permission to use is illegal.
Actually, it's so foundational to the common law legal system that there's a specialized Latin term to represent the concept: mens rea (literally 'guilty mind').
Imo Sky's voice is distinct enough from Scarlett, and it wasn't implied to _be_ her.
Sam's "Her" tweet could be interpreted as such, but it's defensible as a reference to the concept of "Her" rather than to the voice itself.
From elsewhere in the thread, likeness rights apparently do extend to intentionally using lookalikes / soundalikes to create the appearance of endorsement or association.
Johansson did not give OpenAI permission to use her voice; they then hired a voice actor to closely copy her vocal likeness, and Altman tweeted a reference to the film ‘Her’, in which Johansson was the starring voice actor. That tells you OpenAI intended to clone and use her voice even without permission.
OpenAI HAD to pull the voice down to not risk yet another lawsuit.
The parent comment clearly has the weakest defense I have seen in this discussion.
Given the timeline it sounds like the PM was told "just go ahead with it, I'll get the permission".
Both sides of the story feel like we're slowly being brought to a boil: Sutskever's leaving feels like it was just a matter of time. His leaving causing a mess seems predictable. Perhaps I am numb to that story.
But stealing a large part of someone's identity after being explicitly told not to? This one act is not the end of the world, but feels like an acceleration down a path that I would rather avoid.
I’d wager that most senior+ engineers or product people also have equally compelling “visions”.
The difference is that they need to do actual work all day so they don’t get to sit around pontificating.
Having an NDA in exit terms you don’t get to see until you are leaving that claim ability to claw back your vested equity if you don’t agree seems more severely unethical, to be sure. But that doesn’t mean there’s more reason to blame it on Altman specifically. Or perhaps you take the stance that it reflects on OpenAI and their ethics whether or not Altman was personally involved, but then the same applies to the voice situation.
Look kind of similar, right? Lots of familiar styling cues? What would take it from "similar" to actual infringement? Well, if you slapped an Apple logo on there, that would do it. Did OpenAI make an actual claim? Did they actually use Scarlett Johansson's public image and voice as sampling for the system?
[1] https://images.prismic.io/frameworkmarketplace/25c9a15f-4374...
[2] https://i.dell.com/is/image/DellContent/content/dam/ss2/prod...
[3] https://cdn.arstechnica.net/wp-content/uploads/2023/06/IMG_1...
https://podcasts.apple.com/us/podcast/an-ex-cia-officer-expl...
There's a fair amount of EA discussion of utilitarianism's problems. Here's EA founder Toby Ord on utilitarianism and why he ultimately doesn't endorse it:
https://forum.effectivealtruism.org/posts/YrXZ3pRvFuH8SJaay/...
>If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI
Very few in the EA community want to speed AI adoption. It's far more common to think that current AI companies are being reckless, and we need some sort of AI pause so we can do more research and ensure that AI systems are reliably beneficial.
>When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
The all-time most upvoted post on the EA Forum condemns SBF: https://forum.effectivealtruism.org/allPosts?sortedBy=top&ti...
I don't think it's that unsettled, at least not legally. There seems to be precedent for this sort of thing (cf. cases involving Bette Midler or Tom Waits).
I think the hypothetical you create is more or less the same situation as what we have now. The difference is that there maybe isn't a paper trail for Johansson to use in a suit against EvilAI, whereas she'd have OpenAI dead to rights, given their communication history and Altman's moronic "Her" tweet.
> Does Scarlett own her own voice more than Nancy owns hers?
Legally, yes, I believe she does.
And here's some caselaw where another major corporation got smacked down for doing the exact same thing: https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.
But given how unscrupulous Sam Altman appears to be, I wouldn't be surprised if OpenAI hired an impersonator as some kind of half-assed legal cover, and went about using Johansson's voice anyway. Tech people do stupid shit sometimes because they assume they're so much cleverer than everyone else.
But I'm not a lawyer of any sort either, so... ::shrug::
Ah, the famous rogue engineer.
The thing is, even if it were the case, this intern would have been supervised by someone, who themselves would have been managed by someone, all the way to the top. The moment Altman makes a demo using it, he owns the problem. Such a public fuckup is embarrassing.
> And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.
You mean, they were reckless and tried to wing it? Yes, that’s exactly what’s wrong with them.
> This could be a well-planned opening move of a regulation gambit. But unlikely.
LOL. ROFL, even. This was a gambit all right. They just expected her to cave and not ask questions. Altman has a common thing with Musk: he does not play 3D chess.
Yes, it will be interesting in June 1988 when we will find out "where this lands": https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.
I don't see why he should be in jail.
E.g. flying Congress to Lake Como for an off-the-record “discussion” https://freebeacon.com/politics/how-the-aspen-institute-help...
The top comment in this thread is crazy too, they probably contacted her two days prior to launch on the off chance that they could use her as a marketing puppet.
Lost for words on this one.
I probably should have said _those_ Effective Altruists are shitty utilitarians. I was attempting—and since I’ve had to clarify a few times clearly failed—to take aim at the effective altruists that would make the utilitarian trade off that the commenter mentioned.
In fact, there’s a paragraph from the Toby Ord blog post that I wholeheartedly endorse and I think rebuts the exact claim that was put forward that I was responding to.
> Don’t act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.
So, my words were too broad. I don’t actually mean all effective altruists are shitty utilitarians. But the ones that would make the arguments I was responding to are.
I think Ord is a really smart guy, and has worked hard to put some awesome ideas out into the world. I think many others (and again, certainly not all) have interpreted and run with it as a framework for shitty utilitarianism.
https://www.opensecrets.org/federal-lobbying/clients/summary...
It’s interesting to see how it’s unfolding.
Maybe I liked it best because it felt familiar, even if I didn’t know why. I’m a bit disappointed now that she didn’t sign on officially, but my guess is that Altman just burned his bridge to half of Hollywood if he is looking for a plan B.
It will come down to what makes the complaining celebrity's voice iconic, which for Scarjo is the 'gravelly' bit. Which smooth Sky had none of.
[1] actress reading poem: https://www.youtube.com/watch?v=eWEEAjRFJKc
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus
https://x.com/AlexBlechman/status/1457842724128833538?lang=e...
Assuming Sam Altman is not stupid, this could be part of some elaborate plan and a calculated strategy. The end goals could range from immediate practical outcomes like increased publicity (see ChatGPT's mobile app revenue doubled overnight: https://finance.yahoo.com/news/chatgpts-mobile-app-revenue-s...) and market impact, to more complex objectives like influencing future legal frameworks and societal norms around AI.
Plus the tone of the voice is likely an unimportant detail to their success. So pushing up against the legal boundaries in this specific domain is at best strange and at worst a huge red flag for their ethics and how they operate.
Apparently they had no confidence in defending themselves, so why even release with the voice in the first place?
Explicit brand reference? Bad. Circumstantial insinuation? Let it go.
It's a Musk-error not an SBF-error. (Of course, I do realise many will say all three are the same, but I think it's worth separating the types of mistakes everyone makes, because everyone makes mistakes, and only two of these three also did useful things).
Otherwise, it'd be impossible to show damages if you weren't personally being denied business because of the association.
https://twitter.com/OpenAI/status/1790089521985466587
Giggly, flirty AI voice demos were already weird, but now it's even creepier knowing the backstory of how they try to get their voices.
To me, that reads like the same kind of snake oil he sold Elon when he proposed the joint founding of OpenAI.
I can just about imagine the books in his private library. The Prince. 48 Laws of Power. Win Friends and Inference People.
Sufficiently advanced incompetence is indistinguishable from malice.
OpenAI first demoed and launched the “Sky” voice in November last year. The new demo doesn’t appear to have a new voice.
I doubt it would take them long to prepare a new voice, and who’s to say they wouldn’t delay the announcements for a ScarJo voice?
A charitable interpretation of the “her” tweet would be a comparison to the conversational and AI capabilities of the product, not the voice specifically, but it’s certainly not a good look.
sama tweet after the demo + SJo's press release + OpenAI not even risking it and pulling out the voice from ChatGPT should raise enough doubts if anything.
Why does everything keep getting worse? Why do people keep making less? We need to figure out the answers to these questions. And no, nobody here knows them.
Worldcoin is centrally controlled making it a classic "scam coin". Decentralization is the _only_ unique thing about cryptocurrencies, when you abandon decentralization all that's left is general scamminess.
(Yes, there's nuance to decentralization too but that's not what's going on with Worldcoin.)
ChatGPT using Sky voice (not 4o - original release): https://youtu.be/JmxjluHaePw?t=129
Samantha from "Her" (voiced by ScarJo): https://youtu.be/GV01B5kVsC0?t=134
Rashida Jones Talking about herself https://youtu.be/iP-sK9uAKkM
I challenge anyone to leave prejudice at the door by describing each voice in totality first and seeing if your descriptions overlap entirely with others. They each have an obvious unique wispiness and huskiness to them.
Which is what this would be in the not-stupid version of events: they hired a voice actress for the rights to create the voice, she was paid, and then is basically told by the courts "actually you're unhireable because you sound too much like an already rich and famous person".
The issue of course is that OpenAI's reactions so far don't seem to indicate that they're actually confident they can prove this or that this is the case. Because if it is actually the case, they're going about handling it in the dumbest possible way.
Please point to a case where someone was successfully sued for sounding too much like a celebrity (while not using the celebrity's name or claiming to be them).
The public hardly heard from or saw the management of these firms in the media until shit hit the fan.
Today it feels like management is in the media every 3 hours trying to capture the attention of prospective customers, investors, employees, etc., or they lose out to whoever is out there capturing more attention.
So false and contradictory signalling is easy to see. Hopefully out of all this chaos we get a better class of leaders, not a better class of panderers.
The biggest problem on that front (assuming the former is not true) is Altman's tweets, but court-wise that's defensible (though I retract what I had here previously - probably not easily) as a reference to the general concept of the movie.
Because otherwise the situation you have is OpenAI seeking a particular style, hiring someone who can provide it, not trying to pass it off as that person (give or take the tweets), and the intended result effectively being: "random voice actress, you sound too much like an already rich and famous person. Good luck having no more work in your profession" - which would be the actual outcome.
The question entirely hinges on, did they include any data at all which includes ScarJo's voice samples in the training. And also whether it actually does sound similar enough - Frito-Lay went down because of intent and similarity. There's the hilarious outcome here that the act of trying to contact ScarJo is the actual problem they had.
EDIT 2: Of note also - to have a case, they actually have to show reputational harm. Of course on that front, the entire problem might also be Altman. Continuing the trend I suppose of billionaires not shutting up on Twitter being the main source of their legal issues.
She was used in Her because she has a dry/monotone/lifeless form of diction that at the time seemed like a decent stand-in for a non-human AI.
IMDB is riddled with complaints about her vocal style/diction/deadpan on every one of her movies. Ghost World, Ghost in the Shell, Lost in Translation, Comic-Book-Movie-1-100 -- take a line from one movie and dub it across the character of another and most people would be fooled; that's impressive given the breadth of quality/style/age across the movies.
When she was first on the scene I thought it was bad acting, but then it continued -- now I tend to think that it's an effort to cultivate a character personality similar to Steven Wright or Tom Waits; the fact that she's now litigating towards protection of her character and likeness reinforces that fact for me.
It's unique to her though, that's for sure.
If this isn't a smoking gun, I don't know what is.
I think people forget the last part of the definition, though. A smoking gun is about as close as you get without having objective, non-doctored footage of the act. There's a small chance the gun is a red herring, but it's still suspicious.
Frito-Lay copied a song by Waits (with different lyrics) and had an impersonator sing it. Witnesses testified they thought Waits had sung the song.
If OpenAI were to anonymously copy someone's voice by training AI on an imitation, you wouldn't have:
- a recognizable singing voice
- music identified with a singer
- market confusion about whose voice it is (since it's novel audio coming from a machine)
I don't think any of this is ethical and think voice-cloning should be entirely illegal, but I also don't think we have good precedents for most AI issues.
The scenario would have been that they approach none.
This may turn out to be something they can’t just buy their way out of with no other consequences.
"Each actor receives compensation above top-of-market rates, and this will continue for as long as their voices are used in our products."
https://openai.com/index/how-the-voices-for-chatgpt-were-cho...
Not sure if this is royalties, but it seems like there's some form of long term compensation. But it's a little vague so not sure.
This voice: https://x.com/OpenAI/status/1790072174117613963
I don't think the issue is that Vision doesn't matter. I think the issue is Sam doesn't have it. Like Gates and Jobs had clear, well defined visions for how the PC was going to change the world, then rallied engineering talent around them and turned those into reality, that's how their billions and those lasting empires were born. Maybe someone like Elon Musk is a contemporary example. Just don't see anything like that from SamA, we see him in the media, talking a lot about AI, rubbing shoulders with power brokers, being cutthroat, but where's the vision of a better future? And if he comes up with one does he really understand the engineering well enough to ground it in reality?
To me it's about as close to her voice as saying "It's a woman's voice". Not to say all women sound alike, but the sound I heard from that video above could maybe best be described as "generic peppy female American spokesperson voice".
Even listening to it now with the suggestion that it might sound like her I don't personally hear Scarlett Johansson's voice from the demo.
There may be some damning proof where they find they sampled her specifically, but saying they negotiated and didn't come to an agreement is not proof that it's supposed to be her voice. Again, to me it just sounds like a generic voice. I've used the version before GPT-4o and I never got the vibe it was Scarlett Johansson.
I did get the "Her" vibe, but only because I was talking to a computer with a female voice and it was easy to imagine that something like "Her" was in the near future. I also imagined or wished that it was Majel Barrett from ST:TNG, if only because the computer on ST:TNG gave short and useful answers, whereas ChatGPT always gives long-winded, repetitive, annoying answers.
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
You realise that there are multiple employees including the CEO publicly drawing direct comparisons to the movie Her after having tried and failed twice to hire the actress who starred in the movie? There is no non idiotic reading of this.
People cheering for this sort of copyright are completely lost imo. That's not a world anyone but the select few wants to live in.
Nevertheless that's not what happened here.
We have 8 billion people; the probability that any given voice and intonation are truly unique is extremely low. Imagine someone else owning your voice. Someone much richer and more powerful. No entertainment is worth putting fellow human beings through such discrimination and cruelty.
Take your point about LLMs though.
It sure was. But OpenAI decided to poke the bear and is being sued by NYT. And apparently as a side quest they thought it best to put their head in a lion's mouth. I wouldn't call the PR clout and finances of an A-list celebrity small potatoes.
They could have easily flown under the radar and have been praised as the next Google if they kept to petty thievery on the internet instead of going for the high profile content.
>People are still going to use ChatGPT to cheat on their homework, to phone-in their jobs, and to try to ride OpenAI's coattails.
Sure, and ChatGPT isn't going to make lots of money from these small-time users. They want to target corporate, and nothing scares off corporate more than pending litigation. So I think this will bite them sooner rather than later.
>Maybe OpenAI will eventually settle with the actress, for a handful of coins they found in the cushions of their trillion-dollar sofa.
I suppose we'll see. I'm sure she was offered a few pennies as is, and she rejected that. She may not be in it for the money. She very likely doesn't need to work another day in her life as is.
However, GP practices are essentially privatised - so you do have the right to register at another practice.
Whelp. Let us see if this one sticks.
That's what I'm discussing.
Edit: which is to say, I think Sam Altman may have been a god damn idiot about this, but it's also wild anyone thought that ScarJo or anyone in Hollywood would agree - AI is currently the hot button issue there and you'd find yourself the much more local target of their ire.
Which one?
on edit: this being based on American legal system, you may come from a legal system with different rules.
Any criticism of AI is being met with "but if we all just hype AI harder, it will get so good that your criticisms won't matter" or flat out denied. You've got tech that's deeply flawed with no obvious way to get unflawed, and the current AI 'leaders' run companies with no clear way to turn a profit other than being relentlessly hyped on proposed future growth.
It's becoming an extremely apparent bubble.
It's still bad, don't get me wrong, it's just something I can distinguish.
Why be cartoonishly stupid and a cartoonish arsehole and steal a celebrity’s voice? Did he think Scarlett wouldn’t find out? Or object?
I don’t understand these rich people. Is it their hobby to be a dick to as many people as they can, for no reason other than their amusement? Just plain weirdos
It's a thing you put on your phone
I don't have a phone
Well, we can't register you
You don't accept people who don't have phones? Could I have that in writing please, ..., oh, your signature on that please ...
Considering the movie's 11 years old, it's surprisingly on-point with depictions of AI/human interactions, relations, and societal acceptance. It does get a bit speculative and imaginative at the end though...
But I imagine that movie did/does spark the imagination of many people, and I guess Sam just couldn't let it go.
To correct that: the thing about this whole situation with OpenAI is that they are willing to steal everything for use in ChatGPT. They trained their model on copyrighted data, and for some reason they won't delete the millions of protected works they used to train the model.
Decentralisation allows trust-less assurance that money is sent, it's just that's not useful because the goods or services for which the money is transferred still need either trust or a centralised system that can undo the transaction because fraud happened.
That's where smart contracts come in, which I also think are a terrible idea, but do at least deserve a "you tried!" badge, because they're as dumb as saying "I will write bug-free code" rather than as dumb as "let's build a Dyson swarm to mine exactly the same amount of cryptocurrency as we would have if we did nothing".
...it wouldn't make any difference.
A Barack Obama figurine is a Barack Obama figurine, no matter how much you say that it's actually a figurine of Boback O'Rama, a random person who coincidentally looks identical to the former US President.
So he's here to help regulate it all with an "international agency" (see the reference[2] by windexh8er in this thread)! Don't forget that Altman is the same hack who came up with "Worldcoin" and the so-called "Orb" that'll scan your eyeballs for "proof of personhood".
Is this sleazy marketer the one to be trusted to lead an effort that has a lasting impact on humanity? Hell no.
[1] >>38312294
[2] >>40423483
Company identifies celebrity voice they want. (Frito=Waits, OpenAi=ScarJo)
Company comes up with a novel thing for the voice to say. (Frito=Song, OpenAI=ChatGpt)
Company decides they don’t need the celebrity they want (Frito=Waits, OpenAI=ScarJo) and instead hires an impersonator (Frito=singer, {OpenAI=impersonator or OpenAI=ScarJo-public-recordings}) to get what they want (Frito=a-facsimile-of-Tom-Waits’s-voice-in-a-commercial, OpenAI=a-facsimile-of-ScarJo’s-voice-in-their-chatbot)
When made public, people mistake the facsimile for the real thing.
I don’t see how you don’t see a parallel. It’s literally beat-for-beat the same, particularly the part about using an impersonator as cover.
>The Ninth Circuit reversed the District Court, finding that White had a cause of action based on the value of her image, and that Samsung had appropriated this image. Samsung's assertion that this was a parody was found to be unavailing, as the intent of the ad was not to make fun of White's characteristics, but to sell VCRs.
https://en.wikipedia.org/wiki/White_v._Samsung_Electronics_A....
Maybe it depends on which court will handle the case, but OpenAI's core intent isn't parody, but rather to use someone's likeness as a way to make money.
(I am not a lawyer)
The court determined that Midler should be compensated for the misappropriation of her voice, holding that, when "a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California."
I hope there's going to be no further hypotheticals after this.
-----
>They're doing something with a voice that some claim sounds like hers.
Yes, that's what a likeness is.
If you start using your own paintings of Taylor Swift in a product without her permission, you'll run afoul of the law, even though your painting is obviously not the actual Taylor Swift, and you painted it from memory.
>But if it isn't, then it is more like selling a figurine called Sally that happens to look a lot like Taylor Swift. Sally has a right to exist even if she happens to look like Taylor Swift.
Sally has a right to exist, not the right to be distributed, sold, and otherwise used for commercial gain without Taylor Swift's permission.
California Civil Code Section 3344(a) states:
Any person who knowingly uses another’s name, voice, signature, photograph, or likeness, in any manner, on or in products, merchandise, or goods, or for purposes of advertising or selling, or soliciting purchases of, products, merchandise, goods or services, without such person’s prior consent, or, in the case of a minor, the prior consent of his parent or legal guardian, shall be liable for any damages sustained by the person or persons injured as a result thereof.
Note the word "likeness".
Read more at [1] on Common Law protections of identity.
>Has there ever been an up and coming artist who was not allowed to sell their own songs, because they happened to sound a lot like an already famous artist? I doubt it.
Wrong question.
Can you give me an example of an artist who was allowed to do a close-enough impersonation without explicit approval?
No? Well, now you know a good reason for that.
Tribute bands are legally in the grey area[2], for that matter.
[1] https://www.dmlp.org/legal-guide/california-right-publicity-...
[2] https://lawyerdrummer.com/2020/01/are-tribute-acts-actually-...
[3] https://repository.law.miami.edu/cgi/viewcontent.cgi?article...
https://www.quimbee.com/cases/waits-v-frito-lay-inc
https://en.wikipedia.org/wiki/White_v._Samsung_Electronics_A....
This is a bit unfair. Some people left OpenAI on the ground of ethics, because they were unsatisfied with how this supposed nonprofit operates. The ethics was there, but OpenAI got rid of it.
Hurray, OpenAI has found a new lucrative market. Horny incels.
[1] https://forums.theregister.com/forum/all/2024/05/21/scarlett...
> Cool story bro.
> Except I could never have predicted the part where you resigned on the spot :)
> Other than that, child's play for me.
>Thanks for the help. I mean, thanks for your service as CEO.
It's an excellent book, and so so many of the issues raised in it are playing out blow-by-blow.
It would be harder to find a case if they simply just hired someone that sounds similar, but if they did that with the intention to sound like the original that's still impersonation, only it's harder to prove.
If they just happened to hire someone that sounded like the original, then that's fair game IMO.
IANAL
Who is the underdog in this situation? In your comment it seems like you're framing OpenAI as the underdog (or perceived underdog) which is just bonkers.
Hacker News isn't a hivemind and there are those of us who work in GenAI who are firmly on the side of the creatives and gasp even rights holders.
Honestly though, if you actually listen to him and read his words he seems to be even more devoid of basic empathetic human traits than even Zuckerberg who gets widely lampooned as a robot or a lizard.
He is a grifter through-and-through.
Seems like they abandoned it pretty early - if it was real in the first place.
You have just made up an argument. There is no stated or implied stupidity.
You can't dismiss critique of carelessness like that.
Are you suggesting they should have engineered the voice actress' voice to be more distinct from another actress they were considering for the part? Or just not gone near it with a 10ft pole? Because if the latter, the studios can just release new Her and Him movies with different voices in different geo regions and prevent anyone from having any kind of familiar, engaging voice bot.
How much do you think Disney or Universal Music or Google or NYT would give to peek inside OpenAI's training mixture to identify all the infringing content?
But it can be learned and mimicked almost to perfection, either by endless trial & error or by highly intelligent, motivated people. It usually breaks apart when a completely new, intense/stressful situation happens. Sociopaths belong here very firmly and form the majority.
If you know what to look for, you will see it in most if not all politicians, 'captains of industry' or otherwise people who got to serious power by their own deeds.
Think about it a bit - what sort of nasty battles they had to keep winning against similar folks to get where they are; this ain't the place for decent human beings, you/me would be outmatched quickly. Jordan Peterson once claimed sociopaths are circa 1/20 of the general population - say 15 million just in the US? Not every one is highly intelligent and capable of getting far, but many do. Jobs, Gates, Zuckerberg, Bezos, Musk, Altman and so on and on. The world is owned and run by them, I'd say without exception.
This isn't actually complicated at all. OpenAI robbed her likeness against her express will.
There are quite a few issues here: First, this is assuming they actually hired a voice-alike person, which is not confirmed. Second, they are not an underdog (the voice actress might be, but she's most likely pretty unaffected by this drama). Finally, they were clearly aiming to impersonate ScarJo (as confirmed by them asking for permission and samas tweet), so this is quite a different issue than "accidentally" hiring someone that "just happens to" sound like ScarJo.
The CTO was on stage presenting the thing and the CEO was tweeting about it.
Please explain for us which part of this is happening without the CEO's signature.
Of everyone who has been harmed and had their work stolen or copyright infringed by Sam's team, Scarlett Johansson is the one person (so far) who can actually force the issue and a change, and so the community is right to rally behind her because if they're so brazen about this, it paints a very clear picture of the disdain they hold the rest of us in.
It's funny that just seven days ago I was speculating that they deliberately picked someone whose voice is very close to Scarlett's and was told right here on HN, by someone who works in AI, that the Sky voice doesn't sound anything like Scarlett and it is just a generic female voice:
https://news.ycombinator.com/item?id=40343950#40345807
Apparently... not.
That is indeed something it does.
But it also gives you the assurance that a single entity can't print unlimited money out of thin air, which is the case with a centrally controlled currency like Worldcoin.
They can just shrug their shoulders and claim that all that money is for the poor and gullible Africans that had their eyeballs scanned.
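To make the "can't print unlimited money" point concrete: Bitcoin's supply cap isn't a policy promise, it's arithmetic baked into the issuance schedule. A minimal Python sketch of that schedule (block subsidy starts at 50 BTC and is halved, via integer division in satoshis, every 210,000 blocks), showing the total converges just under 21 million coins:

```python
# Sketch of Bitcoin's capped issuance schedule.
# The block subsidy starts at 50 BTC (in satoshis) and is integer
# right-shifted (halved) every 210,000 blocks, so total issuance
# converges to a hard cap just under 21 million BTC.
HALVING_INTERVAL = 210_000
SATS_PER_BTC = 100_000_000

def total_supply_sats() -> int:
    subsidy = 50 * SATS_PER_BTC  # initial block reward, in satoshis
    total = 0
    while subsidy > 0:
        total += HALVING_INTERVAL * subsidy
        subsidy >>= 1  # halving: integer shift, so it eventually hits zero
    return total

print(total_supply_sats() / SATS_PER_BTC)  # just under 21,000,000
```

No central issuer can change this without a consensus rule change accepted by the whole network, which is the contrast being drawn with a centrally controlled token like Worldcoin.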
They seem to love "testing" how much they can bully someone.
I remember a few experiences where someone responded by being an even bigger dick, and they disappeared fast.
Do you have a source for this?
I think you are right in general here in this comment but I am not sure if you are right on this bit.
Peterson might be slightly overstating the number of sociopaths (others put it at more like one in thirty).
Those people have to fake it (if they can be bothered; it doesn't seem to hold people back from the highest office if they don't)
The vast majority of people with noticeably low empathy, though, simply haven't ever been taught how to nurture that small seed of empathy, how to use it to see the world, how to feel the reciprocal benefits of others doing the same. How to breathe it in and out, basically. It's there, like a latent ability to sing or draw or be a parent, it's just that we're not good at nurturing it specifically.
Schools teach "teamwork" instead, which is a lite form of empathy (particularly when there is an opposing team to "other")
I was never a team player, but I have learned to grow my own empathy over the years from a rather shaky sense of it as a child.
> It sure was.
Can you cite something that elaborates on this point? Do people who read books and then learn from it also disregard copyright? How is what OpenAI does meaningfully different from what people do?
Yes, I've listened to Altman. The most recent one is him waffling with a straight face about "Platonic ideals"[1], while sitting on a royal chair in Cambridge. As I noted here[2] six months ago, if he had truly read and digested Plato's works, he simply would not be the ruthless conman he is. Plato would be turning in his grave.
[1] https://www.youtube.com/watch?v=NjpNG0CJRMM&t=3632s
[2] >>38312875
move fast break people
hurting people is just a risk Sam Altman is willing to incorporate into his equation

For example, I would love to see all of the Bourne books adapted into live-action films, but I know that will be impossible. In the future, I believe it would be great to see some AI actors who are not related to any famous actors/actresses perform the same screenplay: of course, only if the book is licensed for that AI movie.
Sure, but the inability to do that when needed is also a bad thing.
Also, single world currencies are (currently) a bad thing, because when your bit of the world needs to devalue its currency is generally different to when mine needs to do that.
But this is why economics is its own specialty, and not something software nerds should jump into as if our toy example with numbers counts for much :D
(or they rewrite the role)
You cannot really judge people by their public appearance, it will in most cases be a fake persona. So the diagnosis of Jobs or Zuckerberg isn't really grounded in reality if you do not know them personally.
Problem solved.
"Midler was asked to sing a famous song of hers for the commercial and refused. Subsequently, the company hired a voice-impersonator of Midler and carried on with using the song for the commercial, since it had been approved by the copyright-holder. Midler's image and likeness were not used in the commercial but many claimed the voice used sounded impeccably like Midler's."
As a mostly casual observer of AI, even I was aware of this precedent.
If you actually wanted a voice assistant AI, having a giggly, chatty computer acting like it has a huge crush on you is not remotely useful in day to day real world use. Unless that's exactly what you want.
Ok? What materials would you suspect discovery can uncover from Scarlett or her team?
> was recast to someone more SoCal in post-production
Was recast to Scarlett Johansson. Hardly a good argument if you want to argue that her voice is not unique.
They're basically owned by Microsoft, they're bleeding tech/ethical talent and credibility, and most importantly Microsoft Research itself is no slouch (especially post-DeepMind poaching) - things like Phi are breaking ground on planets that OpenAI hasn't even touched.
At this point I'm thinking they're destined to become nothing but a premium marketing brand for Microsoft's technology.
Also, suddenly your voice is being used to say things that you would never say. What could go wrong?
Suddenly, instead of ScarJo, the famous movie star, you are that crazy voice from OpenAI.
He lies and steals much more than that. He’s the scammer behind Worldcoin.
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
> Altman is, and wants to be, a large part of AI regulation. Quite the public contradiction.
That’s as much of a contradiction as a thief wanting to be a large part of lock regulation. What better way to ensure your sleazy plans benefit you, and preferably only you but not the competition, than being an active participant in the inevitable regulation while it’s being written?
How else am I supposed to interpret this?
It’s both.
This isn’t even close to the most unethical thing he has done. This is peanuts compared to the Worldcoin scam.
Based on what I see in the videos from The Lockpicking Lawyer, that would be a massive improvement.
Now, the NSA and crypto standards, that would have worked as a metaphor for your point.
(I don't think it's correct, but that's an independent claim, and I am not only willing to discover that I'm wrong about their sincerity, I think everyone writing that legislation should actively assume the worst while they do so).
Of course it’s not. All of ChatGPT is a ripoff. That’s what training data (which they did not license) is.
[1] Modern quack science might be worse than modern astrology/etc, since the stance of "I know it's not real, but..." means astrology folks can freely disregard horoscopes that are socially or ethically objectionable. If you're claiming your BS is actually Science, I think for a lot of people there's a sort of vicious feedback loop. Especially with social stuff, where these claims are essentially unfalsifiable. ("Body language reading is Science and cannot fail, it can only be failed by my incompetence.")
https://nytco-assets.nytimes.com/2023/12/Lawsuit-Document-dk...
The Lockpicking Lawyer is not a thief, so I don’t get your desire to incorrectly nitpick. Especially when you clearly understood the point.
> Based on what I see in the videos from The Lockpicking Lawyer, that would be a massive improvement.
A thief is not a lock picker and they don't have the same incentive. A thief in a position to dictate lock regulation would try to have a legal backdoor on every lock in the world. One that only he has the master key for. Something something NSA & cryptography :)
She doesn't have to own anything to claim this right, if the value of her voice is recognizable.
People who want to use an actor's likeness can't get around likeness rights by saying they impersonated a specific performance actually.
"A is demonstrating a proof of B" does not require "A is a clause in B".
A being TLPL, B being that the entire lock industry is bad, so bad that anyone with experience would be a massive improvement, for example a thief.
I assume that they didn't get Scarlett's voice directly but they hired someone who sounds similar to use it for the system.
Is it illegal to hire a similar-sounding voice actor?
If I, for example, sound very similar to Stephen Fry, would I not be allowed to record audio books because he _owns_ his voice and any similar voice?
Sora is doing the same with YouTube videos. It blocks queries like "in the style of Wes Anderson" but still uses that material as training data for generating content.
If you've watched his videos then surely you should know that lockpicking isn't even on the radar for thieves as there are much easier and faster methods such as breaking the door or breaking a window.
Other people have commented to further explain the point in other words. I recommend you read those, perhaps it’ll make you understand.
As you state, Mrs. Johansson would benefit reputationally from the lawsuit and could act as a front for other powerful institutions who would greatly benefit from the lawsuit.
Moreover, what if some actor has a similar voice to her and records and sells an audiobook? What if this actor signs a contract with OpenAI?
Their efforts to copy the Her character's mannerisms are still annoying. They were aiming for the Her-like personality. But it's a stretch to say it's a copy of Johansson's voice. The haze in her real voice isn't there in GPT. Maybe they decided to pull the voice because it's not good publicity to have Scarlett Johansson pissed off with you.
Nevertheless these tech companies should hire professional film writers and artists to help with difficult concepts such as original ideas and not copying other's work.
When and why would BTC or ETH need to print unlimited money and devalue themselves?
And the answer to that is all the reasons governments do just that, except for the times where the government is being particularly stupid and doing hyperinflation.
The other aspect is that OpenAI's infringement is open-ended, in that it is not equivalent to a single use within a single movie; it is substantially more. People who want Mrs. Johansson to voice their projects could now instead use OpenAI to obtain the same likeness - so the damages are for the deprivation of all the future earnings Mrs. Johansson could have made.
Then there is reputational damage, if someone using OpenAI generated and disseminated a voice message before committing a heinous crime then the public would associate the voice with the crime and Mrs. Johansson would be intrinsically linked to the same crime. People hearing her voice would be reminded of the crime. This would again prevent Mrs. Johansson from making money from her likeness but would also negatively impact all aspects of her life and all future earnings, business dealings, and personal life.
Usually the initial remedy is an injunction to prevent further infringement, but OpenAI yanked it immediately, so no injunction was needed. One way to repair reputational damage is to pay for news media to widely and publicly correct the reputationally damaging falsehood. As this has been a substantial scandal, that media coverage has already been given for free, and it is unlikely that there is anyone left who still believes that OpenAI is using Mrs. Johansson's likeness with either tacit or explicit permission.
As stated by your peer comment, there are still reasons for Mrs. Johansson to bring a lawsuit for damages even where there are no significant damages - the lawsuit itself would be damaging to OpenAI, so it would be in OpenAI's interest to pay to avoid it. At the same time it would be in other large companies' interests for it to continue, so there may be a bidding war on whether or not Mrs. Johansson continues with the lawsuit. In addition, it would benefit Mrs. Johansson personally and professionally to be seen as a champion of the rights of artists - especially at a time when AI companies like OpenAI are trampling all over those rights.
> Something something NSA & cryptography :)
Indeed, as I said :)
Did you?
> Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.
Are you arguing that Her performance is not the resembling factor here but simply the natural voice of ScarJo at rest, disregarding Her?
Jobs responds minutes later... "Fuck the lawyers."
One very easy explanation is that they trained Sky using another voice (this is the claim, and there's no reason to doubt it's true), wanting to replicate the style of the voice in "Her", but would have preferred to use SJ's real voice for the PR impact that could have had.
Yanking it could also easily be a pre-emptive response to avoid further PR drama.
You will obviously decide you don't believe those explanations, but to many of us they're quite plausible; in fact I'd even suggest likely.
(And none of this precludes Sam Altman and OpenAI being dodgy anyway)
My view is, of course it is ok. SJ doesn't own the right to a particular style of voice.
I thought about your comment for a while, and I agree that there is a fine line between "realistic parody" and "intentional deception" that makes deepfake AI almost impossible to defend. In particular I agree with your distinction:
- In matters involving human actors, human-created animations, etc, there should be great deference to the human impersonators, particularly when it involves notable public figures. One major difference is that, since it's virtually impossible for humans to precisely impersonate or draw one another, there is an element of caricature and artistic choice with highly "realistic" impersonations.
- AI should be held to a higher standard because it involves almost no human expression, and it can easily create mathematically-perfect impersonations which are engineered to fool people. The point of my comment is that fair use is a thin sliver of what you can do with the tech, but it shouldn't be stamped out entirely.
I am really thinking of, say, the Joe Rogan / Donald Trump comedic deepfakes. It might be fine under American constitutional law to say that those things must be made so that AI Rogan / AI Trump always refer to each other in those ways, to make it very clear to listeners. It is a distinctly non-libertarian solution, but it could be "necessary and proper" because of the threat to our social and political knowledge. But as a general principle, those comedic deepfakes are works of human political expression, aided by a fairly simple computer program that any CS graduate can understand, assuming they earned their degree honestly and are willing to do some math. It is constitutionally icky (legal term) to go after those people too harshly.
I think you and I have the same concerns about balancing damage to the societal fabric against protecting honest speech.
The answer is without legislation you are far more subject to whether a judge feels like changing the law.
What does being outed even mean anymore? It's just free advertising from all the outlets that feel they can derive revenue off your name being in their headlines. Nothing happens to them. SBF and Holmes being the notable exceptions, but that's because they stole from rich people.
What you recall also doesn't sound correct given right of publicity laws.
Celebrities need to get used to the fact that they will soon be no more important to a corporation than any other rank and file employee. AI is already able to conjure up whatever voices and personas we need at the ready and make the concept of actors all but a thing of the past.
Link and quote to the parts that back your claim and if it is not clear how it backs your claim say what your interpretation is.
[1] Just to head off people saying that such a use is not a copyright violation -- I'm not saying it is. I'm just saying that it's extremely sketchy and, in my view, ethically unsupportable.
There's a song about that - Jarvis Cocker - "Running the World"
Doesn't get played on radio because of the lyrics
You can see another comment here, where I acknowledge I communicate badly, since I’ve had to clarify multiple times what I was intending: >>40424566
This is the paragraph that was intended to narrow what I was talking about:
> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
That said, I definitely should’ve said “those Effective Altruists” in the first paragraph to more clearly communicate my intent.
In tech we’re used to IP law. In entertainment, there is unsurprisingly a whole area of case law on image and likeness.
Tech will need to understand this—and the areas of domain specific case law in many, many other fields—if AI is really to be adopted by the entire world.
OpenAI cannot hurt her standing in the industry. In fact, "ScarJo takes on Big Tech and wins", in an era after the Hollywood unions called a strike and won protections from studios using generative AI for exactly this scenario, is ironically probably one of the best things she can do for her image right now.
She is also one of the most litigious actresses in the industry, taking on Disney and winning what’s estimated to be 8 figures.
Good luck OpenAI!
Public figures own their likeness and control its use. Not to mention that in this case OA is playing chicken with studios as well. Not a great time to do so, given their stated hopes of supplanting 99% of existing Hollywood creatives.
If there is a middle ground I'd like to think it involves not smashing through regulations that protect individual businesspeople (like cabbies) who have learned a difficult job.
I don't think the cookies thing is a good example. That's passive incompetence, to avoid the work of changing their business models. Altman actively does more work to erode people's rights.
> It's still bad, don't get be wrong, it's just something I can distinguish.
Can you? Plausible deniability is one of the first things in any malicious actor's playbook. "I meant well…" If there's no way to know, then you can only assess the pattern of behavior.
But realistically, nobody sapient accidentally spends multiple years building elaborate systems for laundering other people's IP, privacy, and likeness, and accidentally continues when they are made aware of the harms and explicitly asked multiple times to stop…
Maybe there's a way to do that right. I suppose like any other philosophy, it ends up reflecting the personalities and intentions of the individuals which are attracted to and end up adopting it. Are they actually motivated by identifying with and wanting to help other people most effectively? Or are they just incentivized to try to get rid of pesky deontological and virtue-based constraints like empathy and universal rights?
Sam Pullara @sampullara
If you have the ChatGPT app, set it to the Sky voice and talk to it. It is definitely a clone of ScarJo from Her.
6:21 PM · Dec 13, 2023
https://x.com/sampullara/status/1735122897663094853

So scammers see other scammers, and they just think there's nothing wrong with it.
While normal people who act in good faith see scammers, and instinctively think that there must be a good reason for it, even (or especially!) if it looks sketchy.
I think this happens a lot. Not just with Altman, though that is a prominent currently ongoing example.
Protecting yourself from dark triad type personalities means you need to be able to understand a worldview and system of values and axioms that is completely different from yours, which is… difficult. …There's always that impulse to assume good faith and rationalize the behavior based on your own values.
Like many people who try to oppose psychopaths though, they don't seem to be around much anymore.
But I experience lots of ads with impersonators so maybe they just aren’t sued enough.
Also, Jack Nicholson never stopped Christian Slater from acting, so there must be some room for impersonation.
California Civil Code Section 3344(a) states:
Any person who knowingly uses another’s name, voice, signature, photograph, or likeness, in any manner, on or in products, merchandise, or goods, or for purposes of advertising or selling, or soliciting purchases of, products, merchandise, goods or services, without such person’s prior consent, or, in the case of a minor, the prior consent of his parent or legal guardian, shall be liable for any damages sustained by the person or persons injured as a result thereof.
In my mind these are close to being equally shitty, but not asking is a shittier because the victim won't necessarily know they've been exploited, which limits the actions they will be able to take to rectify matters.
I will slightly rewrite the quoted bit to reflect reality as is currently known by the public: "otherwise we assert without evidence they used copyrighted audio". I'm fully of the opinion that everyone on both sides is wrong here, and I'm only right because I hold the most minimal opinion; that everyone needs to go outside and touch grass because this whole thing is bizarre and pointless.
Writing this comment mostly to say - damn, I didn't think about it this way, but I guess "either believe yourselves to be exceedingly clever or everyone else has the intelligence of toddler" is indeed the mindset.
The only other alternative I can think of is "we all know it's BS, but do they have more money than us to spend on lawyers to call it out?" - which isn't much better TBH.
It’s so refreshing to hear someone else actually understand this sentiment for what it is— snake oil sales. The normies out there eat this up, and there is no convincing them otherwise because of how powerful the AI trope in entertainment media is.
[0]: https://www.reddit.com/r/ChatGPT/comments/1cwy6wz/vocal_comp...
I thought this when he didn't launch Worldcoin in the US but Africa, and consistently upped the ante to the point where he was offering people in the poorer parts of the continent amounts that equalled two months wages or more to scan their retinas.
Why was that necessary? It wasn't to share the VC windfall.
…Unless, of course, all this scandal isn't also part of a marketing campaign.
There is value in OpenAI, there are (a steadily shrinking number of) ethical pros there guilty of nothing worse than wanting to make a good living. Groups including but not limited to the voice group have done excellent, socially positive work.
But the leadership team is a menace (as I’ve been saying for years) and it’s just time for a clean sweep at the senior leadership level.
I’ve been such a vocal critic for so long that I’m always looking for an opportunity to present a balanced view whenever there’s a reasonable doubt.
Wrong again, me.
https://www.theatlantic.com/technology/archive/2024/05/opena...
Additionally I think you're understating the similarity to the Midler v Ford case. It has a similar pattern where they first contacted the "original", then hired an impersonator. The song wasn't at issue, they had a license to that part.
I think this part of some of the court documents[0] is especially relevant. Hedwig was the sound alike hired.
> Hedwig was told by Young & Rubicam that "they wanted someone who could sound like Bette Midler's recording of [Do You Want To Dance]." She was asked to make a "demo" tape of the song if she was interested. She made an a capella demo and got the job.
> At the direction of Young & Rubicam, Hedwig then made a record for the commercial. The Midler record of "Do You Want To Dance" was first played to her. She was told to "sound as much as possible like the Bette Midler record," leaving out only a few "aahs" unsuitable for the commercial. Hedwig imitated Midler to the best of her ability.
> After the commercial was aired Midler was told by "a number of people" that it "sounded exactly" like her record of "Do You Want To Dance." Hedwig was told by "many personal friends" that they thought it was Midler singing the commercial. Ken Fritz, a personal manager in the entertainment business not associated with Midler, declares by affidavit that he heard the commercial on more than one occasion and thought Midler was doing the singing.
> Neither the name nor the picture of Midler was used in the commercial; Young & Rubicam had a license from the copyright holder to use the song. At issue in this case is only the protection of Midler's voice.
So the fact that us random internet commentors did not recognize her voice doesn't seem to matter in cases like these. It's enough that the sound alike had been told to mimic the original voice, and that people familiar with the voice be fooled.
[0] https://law.justia.com/cases/federal/appellate-courts/F2/849...
Not make a living posing for pictures without consent of the said celebrity?
Re: "get real" - the law is pretty real.
Things like parody are protected under fair use, explicitly.
Given that connection, I think it's plausible. They have a similar voice and given Bari's experience in podcasting could be a sensible choice if OpenAI wanted Scarlett but couldn't make it happen.
Yeah not moving from my position at all. Just a very generic featureless female voice. I suppose I hear some similarities in timbre, but it’s such an unremarkable voice and diction that it’s hard to put your finger on anything past “generic low affect American alto”.
It’s a great computer voice. Taking it down is for sure the right call PR-wise, regardless of what they may have done.
Bernie Madoff is another funny name we should throw in there.
Absolutely zero respect for his after-breakdown-and-addiction period; he is simply a different person, not interesting to me anymore, with a political agenda and weird obsessions.
So what? Reducing a situation to its essentials is an entirely valid argument / debating technique.
To suggest that Johansson’s only appeal is to the opposite gender (and ‘lonely’ ones at that!) I think is myopic and reductive of her impact
hahaha! :)