They clearly thought it was close enough that they asked for permission, twice. And got two no’s. Going forward with it at that point was super fucked up.
It’s very bad to not ask permission when you should. It’s far worse to ask for permission and then ignore the response.
Totally ethically bankrupt.
I think so but that could just be me.
Edit: to clarify, since the voice is not exactly identical, or even that close, they can plausibly deny it, and we never knew what their intention was.
But in this case, they have clearly created the voice to represent Scarlett's voice to demonstrate the capabilities of their product in order to get marketing power.
I'm guessing if any of the Harry Potter actors threatened the hobbyist with legal action the video would likely come down, though I doubt they would bother even if they didn't care for the video.
You seem to be misunderstanding the situation here. They wanted ScarJo to voice their voice assistant, and she refused twice. They also independently created a voice assistant which sounds very similar to her. That doesn't mean they thought they had to ask permission for the similar voice assistant.
That is what matters. OWNERSHIP over her contributions to the world.
This reads like “we got caught red handed” and doing the bare minimum for it to not appear malicious and deliberate when the timeline is read out in court.
There is a major difference between parodying someone by imitating them while clearly and almost explicitly being an imitation; and deceptively imitating someone to suggest they are associated with your product in a serious manner.
I suspect a video avatar service that looked exactly like her would fall outside fair use as well. Though an image gen that used some images of her (and many others) to train and spit out a generic "attractive blonde woman" is fair use in my opinion.
But it kind of looks like they released it knowing they couldn't defend it in court which must seem pretty bonkers to investors.
I mean, why not actually compare the voices before forming an opinion?
https://www.youtube.com/watch?v=SamGnUqaOfU
https://www.youtube.com/watch?v=vgYi3Wr7v_g
-----
Answer: because they knew they needed permission, after working so hard to associate with Her, and they hoped, in traditional tech fashion, that if they moved fast and broke things enough, everyone would have to reshape around OpenAI's wants, rather than around the preexisting rights of the humans involved.
If they really hired someone who sounds just like her, it's fair game IMO. Johansson can't own the right to a similar voice, just like many people can have the same name. I think if there really was another actress and she just happens to sound like her, then it's really OK. And no, I'm not a fan of Altman (especially his Worldcoin, which I view as a privacy disaster).
I mean, imagine if I happened to have a similar voice to a famous actor, would that mean that I couldn't work as a voice actor without getting their OK just because they happen to be more famous? That would be ridiculous. Pretending to be them would be wrong, yes.
If they hired someone to change their voice to match hers, that'd be bad. Yeah. If they actually just AI-cloned her voice that's totally not OK. Also any references to the movies. Bad.
They likely have a legal position which is defensible.
They're much more worried that they don't have a PR position which is defensible.
What's the point of winning the (legal) battle if you lose the war (of public opinion)?
Given the rest of their product is built on apathy to copyright, they're actively being sued by creators, and the general public is sympathetic to GenAI taking human jobs...
... this isn't a great moment for OpenAI to initiate a long legal battle against a famous actress/celebrity, in which they're arguing that her likeness isn't actually controlled by her.
Talk about optics!
(And I'd expect they quietly care much more about their continued ability to push creative output through their copyright launderer, than get into a battle over likeness)
If someone licenses an impersonator's voice and it gets very close to the real thing, that feels like an impossible situation for a court to settle and it should probably just be legal (if repugnant).
Whether you think it sounds like her or not is a matter of opinion, I guess. I can see the resemblance, and I can also see the resemblance to Jennifer Lawrence and others.
What Johansson is alleging goes beyond this, though. She is alleging that Altman (or his team) reached out to her (or her team) asking her to lend her voice, she was not interested, and then she was asked again just two days before GPT-4o's announcement, and she rejected again. Now there's a voice that, in her opinion, sounds a lot like her.
Luckily, the legal system is far more nuanced than just listening to a few voices and mentally comparing them to other voices individuals have heard over the years. They'll be able to figure out, as part of discovery, what led to the Sky voice sounding the way it does (intentionally using Johansson's likeness? coincidence? directly trained on her interviews/movies?), whether OpenAI was willing to slap Johansson's name onto the existing Sky during the presentation, whether the combination of the "her" tweet and the Sky voice was supposed to draw the subtle connection... This allegation is just the beginning.
No one is harmed.
I wonder if they deliberately steered towards this for more marketing buzz?
This is a civil issue, and actors get broad rights to their likeness. Kim Kardashian sued Old Navy for using a look-alike actress in an ad; Old Navy chose to settle, which makes it appear that "the real actress wasn't involved in any way" may not be a perfect defense. The timeline makes it clear they wanted it to sound like Scarlett's voice; the actual mechanics of how they got the AI to sound like that are only part of the story.
It’s shocking to me how people cannot see this.
The only surprise here is that they didn’t think she’d push back. That is what completes the multilayered cosmic and dramatic irony of this whole vignette. Honestly feels like Shakespeare or Arthur Miller might have written it.
Asking for her vocal likeness is completely in line with just wanting the association with "Her" and the big PR hit that would come along with that. They developed voice models on two different occasions and hoped twice that Johansson would allow them to make that connection. Neither time did she accept, and neither time did they release a model that sounded like her. The two-day run-up isn't suspicious either, because we're talking about a general audio2audio transformer here. They could likely fine-tune it (if even that is necessary) on her voice in hours.
I don't think we're going to see this going to court. OpenAI simply has nothing to gain by fighting it. It would likely sour their relation to a bunch of media big-wigs and cause them bad press for years to come. Why bother when they can simply disable Sky until the new voice mode releases, allowing them to generate a million variations of highly-expressive female voices?
That claim could very well be true. The letter requested information on how the voice was trained - OpenAI may not want that can of worms opened lest other celebrities start paying closer attention to the other voices.
Does that mean if cosplayers dress up like some other character, they can use that version of the character in their games/media? I think it should be equally simple to settle. It's different if it's their natural voice. Even then, it brings into question whether they can use "doppelgangers" legally.
It’s not like Tom Waits ever wanted to hawk chips
https://www.latimes.com/archives/la-xpm-1990-05-09-me-238-st...
If someone clones a random person's voice for commercial purposes, the public likely has no idea whose voice it is. Consequently, it's just an acoustic voice.
If someone clones a famous media celebrity's voice, the public has a much greater chance of recognizing the voice and associating it with a specific person.
Which then opens a different question of 'Is the commercial use of the voice appropriating the real person's fame for their own gain?'
Add in the facts that media celebrities' values are partially defined by how people see them, and that they are often paid for their endorsements, and it's a much clearer case that (a) the use potentially influenced the value of their public image & (b) the use was theft, because it was taking something which otherwise would have had value.
Neither consideration exists with 'random person's voice' (with deference to voice actors).
* Defined as 'someone for whom there is an expectation that the general public would recognize their voice or image'
Their hubris will walk them right into federal prison for fraud if they’re not careful.
If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI
I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
That actually seems like there may be a few people involved and one of them is a cowboy PM who said fuck it, ship it to make the demo. And then damage control came in later. Possibly the PM didn't even know about the asks for permission?
But did OpenAI make any claims about whose voice this is? Just because a voice sounds similar or familiar doesn't mean it's fraudulent.
> - Not receiving a response, OpenAI demos the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.
And promoted it using a tweet naming the movie that Johansson performed in, for the role that prompted them to ask her in the first place.
You have to be almost deliberately naive not to see that they were attempting to use her vocal likeness in this situation. There’s a reason they immediately walked it back after the situation was revealed.
Neither a judge, nor a jury, would be so willingly naive.
If the voice was only trained on the voice of the character she played in Her, would she have any standing in claiming some kind of infringement?
The really concerning part here is that Altman is, and wants to be, a large part of AI regulation [0]. Quite the public contradiction.
[0] https://www.businessinsider.com/sam-altman-openai-artificial...
It's a "I know it when I see it" situation so it's not clear cut.
Maybe (maybe!) it’s worth it for someone like Johansson to take on the cost of that to vindicate her rights—but it’s certainly not the case for most people.
If your rights can only be defended from massive corporations by bringing lawsuits that cost hundreds of thousands to millions of dollars, then only the wealthy will have those rights.
So maybe she wants new legislative frameworks around these kind of issues to allow people to realistically enforce these rights that nominally exist.
For an example of updating a legislative framework to allow more easily vindicating existing rights, look up “anti-SLAPP legislation”, which many states have passed to make it easier for the defendant of a meritless lawsuit seeking to chill speech to have the lawsuit dismissed. Anti-SLAPP legislation does almost nothing to change the actual rights that a defendant has to speak, but it makes it much more practical for a defendant to actually exercise those rights.
So, the assumption that a call for updated legislation implies that no legal protection currently exists is just a bad assumption that does not apply in this situation.
Doesn't sound like they have that either.
Like some intern’s idea to train the voice on their favorite movie.
And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.
This could be a well-planned opening move of a regulation gambit. But unlikely.
Many things that are legal are of questionable ethics. Asking permission could easily just be an effort for them to get better samples of her voice. Pulling the voice after debuting it is 100% a PR response. If there's a law that was broken, pulling the voice doesn't unbreak it.
> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
Frankly, if you’re going to make an “ends justify the means” moral argument, you need to do a lot of work to address how those arguments have gone horrifically wrong in the past, and why the moral framework you’re using isn’t susceptible to those issues. I haven’t seen much of that from Effective Altruists.
I was responding to someone who was specifically saying an EA might argue why it’s acceptable to commit a moral wrong, because the ends justify it.
So, again, if someone is using EA to decide how to direct their charitable donations, volunteer their time, or otherwise decide between moral goods, I have no problem with it. That specifically wasn’t the context I was responding to.
If in fact, that was the case, then OpenAI is not aligned with the statement they just put out about having utmost focus on rigor and careful considerations, in particular this line: "We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities." [0]
Yes, because we all know the high profile launch for a major new product is entirely run by the interns. Stop being an apologist.
The general public doesn’t understand the details and nuances of training an LLM, the various data sources required, and how to get them.
But the public does understand stealing someone’s voice. If you want to keep the public on your side, it’s best to not train a voice with a celebrity who hasn’t agreed to it.
Her voice alone didn’t get her there — she did. That’s why celebrities are so protective about how their likeness is used: their personal brand is their asset.
There’s established legal precedent on exactly this—even in the case they didn’t train on her likeness, if it can reasonably be suspected by an unknowing observer that she personally has lent her voice to this, she has a strong case. Even OpenAI knew this, or they would not have asked in the first place.
Conman plain and simple.
When studios approach an actress A and she refuses, then another actress B takes the role, is that infringing on A's rights? Or should they just scrap the movie?
Maybe if they replicated a scene from A's movies or there was a striking likeness between the voices... but not generally.
If a PM there didn’t say “fuck it ship it even without her permission” they’d probably be replaced with someone who would.
I expect the cost of any potential legal action/settlement was happily accepted in order to put on an impressive announcement.
I think the copyright industry wants to grab new powers to counter AI's infinite capacity to create variations. But that move would kneecap the creative industry first: newcomers have no place in a fully copyrighted space.
It reminds me of how NIMBY blocks construction to keep up the prices. Will all copyright space become operated on NIMBY logic?
Because then the actual case would be fairly bizarre: an entirely separate person, selling the rights to their own likeness as they are entitled to do, is being prohibited from doing that by the courts because they sound too much like an already famous person.
EDIT: Also up front I'm not sure you can entirely discuss timelines for changing out technology here. We have voice cloning systems that can do it with as little as 15 seconds of audio. So having a demo reel of what they wanted to do that they could've used on a few days notice isn't unrealistic - and training a model and not using it or releasing it also isn't illegal.
Buckle in, go to court, and double-down on the fact that the public's opinion of actors is pretty damn fickle at the best of times - particularly if what you released was in fact based on someone you signed a valid contract with who just sounds similar.
This is all dependent on actually having a complete defense, of course - you absolutely would not want to find Scarlett Johansson voice samples in file folders associated with the Sky model if it went to court.
They literally hired an impersonator, and it cost them $2.5 million (~$6 million today).
https://www.latimes.com/archives/la-xpm-1990-05-09-me-238-st...
Extremely reasonable position, and I'm glad that every time some idiot brings it up in the EA forum comments section they get overwhelmingly downvoted, because most EAs aren't idiots in that particular way.
I have no idea what the rest of your comment is talking about; EAs that have opinions about AI largely think that we should be slowing it down rather than speeding it up.
People who hate Hollywood? Most of that crowd hates tech even more.
* Because it would take the first news cycle to be branded as that
> The new board should act
You mean like the last board tried? Besides the board was picked to be on Altman’s side. The independent members were forced out.
I did try to cabin my arguments to Effective Altruists that are making ends-justify-the-means arguments. I really don’t have a problem with people that are attempting to use EA to decide between multiple good outcomes.
I’m definitely not engaged enough with the Effective Altruists to know where the plurality of thought lies, so I was trying to respond in the context of this argument being put forward on behalf of Effective Altruists.
The only part I’d say applies to all EA, is the brand taint that SBF has done in the public perception.
Imo Sky's voice is distinct enough from Scarlett, and it wasn't implied to _be_ her.
Sam's "Her" tweet could be interpreted as such, but defending the tweet as the concept of "Her", rather than the voice itself, is.
From elsewhere in the thread, likeness rights apparently do extend to intentionally using lookalikes / soundalikes to create the appearance of endorsement or association.
Given the timeline it sounds like the PM was told "just go ahead with it, I'll get the permission".
I’d wager that most senior+ engineers or product people also have equally compelling “the vision”s.
The difference is that they need to do actual work all day so they don’t get to sit around pontificating.
Look kind of similar, right? Lots of familiar styling cues? What would take it from "similar" to actual infringement? Well, if you slapped an Apple logo on there, that would do it. Did OpenAI make an actual claim? Did they actually use Scarlett Johansson's public image and voice as sampling for the system?
[1] https://images.prismic.io/frameworkmarketplace/25c9a15f-4374...
[2] https://i.dell.com/is/image/DellContent/content/dam/ss2/prod...
[3] https://cdn.arstechnica.net/wp-content/uploads/2023/06/IMG_1...
There's a fair amount of EA discussion of utilitarianism's problems. Here's EA founder Toby Ord on utilitarianism and why he ultimately doesn't endorse it:
https://forum.effectivealtruism.org/posts/YrXZ3pRvFuH8SJaay/...
>If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI
Very few in the EA community want to speed AI adoption. It's far more common to think that current AI companies are being reckless, and we need some sort of AI pause so we can do more research and ensure that AI systems are reliably beneficial.
>When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
The all-time most upvoted post on the EA Forum condemns SBF: https://forum.effectivealtruism.org/allPosts?sortedBy=top&ti...
Ah, the famous rogue engineer.
The thing is, even if it were the case, this intern would have been supervised by someone, who themselves would have been managed by someone, all the way to the top. The moment Altman makes a demo using it, he owns the problem. Such a public fuckup is embarrassing.
> And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.
You mean, they were reckless and tried to wing it? Yes, that’s exactly what’s wrong with them.
> This could be a well-planned opening move of a regulation gambit. But unlikely.
LOL. ROFL, even. This was a gambit all right. They just expected her to cave and not ask questions. Altman has a common thing with Musk: he does not play 3D chess.
I don't see why he should be in jail.
E.g. flying Congress to Lake Como for an off-the-record “discussion” https://freebeacon.com/politics/how-the-aspen-institute-help...
I probably should have said _those_ Effective Altruists are shitty utilitarians. I was attempting—and since I’ve had to clarify a few times clearly failed—to take aim at the effective altruists that would make the utilitarian trade off that the commenter mentioned.
In fact, there’s a paragraph from the Toby Ord blog post that I wholeheartedly endorse and I think rebuts the exact claim that was put forward that I was responding to.
> Don’t act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.
So, my words were too broad. I don’t actually mean all effective altruists are shitty utilitarians. But the ones that would make the arguments I was responding to are.
I think Ord is a really smart guy, and has worked hard to put some awesome ideas out into the world. I think many others (and again, certainly not all) have interpreted and run with it as a framework for shitty utilitarianism.
https://www.opensecrets.org/federal-lobbying/clients/summary...
Maybe I liked it best because it felt familiar, even if I didn’t know why. I’m a bit disappointed now that she didn’t sign on officially, but my guess is that Altman just burned his bridge to half of Hollywood if he is looking for a plan B.
It's a Musk-error not an SBF-error. (Of course, I do realise many will say all three are the same, but I think it's worth separating the types of mistakes everyone makes, because everyone makes mistakes, and only two of these three also did useful things).
Sufficiently advanced incompetence is indistinguishable from malice.
Worldcoin is centrally controlled making it a classic "scam coin". Decentralization is the _only_ unique thing about cryptocurrencies, when you abandon decentralization all that's left is general scamminess.
(Yes, there's nuance to decentralization too but that's not what's going on with Worldcoin.)
Which is what this would be in the not-stupid version of events: they hired a voice actress for the rights to create the voice, she was paid, and then is basically told by the courts "actually you're unhireable because you sound too much like an already rich and famous person".
The issue, of course, is that OpenAI's reactions so far don't seem to indicate that they're actually confident they can prove this or that this is the case. Because if this is actually the case, they're going about handling it in the dumbest possible way.
Please point to a case where someone was successfully sued for sounding too much like a celebrity (while not using the celebrity's name or claiming to be them).
The public hardly heard from or saw the management of these firms in the media until shit hit the fan.
Today it feels like management is in the media every 3 hours, trying to capture the attention of prospective customers, investors, employees, etc., or they lose out to whoever is out there capturing more attention.
So false and contradictory signalling is easy to see. Hopefully out of all this chaos we get a better class of leaders, not a better class of panderers.
The biggest problem on that front (assuming the former is not true) is Altman's tweets, but court-wise that's defensible (though I retract what I had here previously - probably not easily) as a reference to the general concept of the movie.
Because otherwise the situation you have is OpenAI seeking a particular style, hiring someone who can provide it, not trying to pass it off as that person (give or take the tweets), and the intended result effectively being: "random voice actress, you sound too much like an already rich and famous person. Good luck having any more work in your profession" - which would be the actual outcome.
The question entirely hinges on, did they include any data at all which includes ScarJo's voice samples in the training. And also whether it actually does sound similar enough - Frito-Lay went down because of intent and similarity. There's the hilarious outcome here that the act of trying to contact ScarJo is the actual problem they had.
EDIT 2: Of note also - to have a case, they actually have to show reputational harm. Of course on that front, the entire problem might also be Altman. Continuing the trend I suppose of billionaires not shutting up on Twitter being the main source of their legal issues.
Frito-Lay copied a song by Waits (with different lyrics) and had an impersonator sing it. Witnesses testified they thought Waits had sung the song.
If OpenAI were to anonymously copy someone's voice by training AI on an imitation, you wouldn't have:
- a recognizable singing voice
- music identified with a singer
- market confusion about whose voice it is (since it's novel audio coming from a machine)
I don't think any of this is ethical and think voice-cloning should be entirely illegal, but I also don't think we have good precedents for most AI issues.
The scenario would have been that they approach none.
I don't think the issue is that Vision doesn't matter. I think the issue is Sam doesn't have it. Like Gates and Jobs had clear, well defined visions for how the PC was going to change the world, then rallied engineering talent around them and turned those into reality, that's how their billions and those lasting empires were born. Maybe someone like Elon Musk is a contemporary example. Just don't see anything like that from SamA, we see him in the media, talking a lot about AI, rubbing shoulders with power brokers, being cutthroat, but where's the vision of a better future? And if he comes up with one does he really understand the engineering well enough to ground it in reality?
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
You realise that there are multiple employees, including the CEO, publicly drawing direct comparisons to the movie Her, after having tried and failed twice to hire the actress who starred in the movie? There is no non-idiotic reading of this.
Take your point about LLMs though.
However, GP practices are essentially privatised - so you do have the right to register at another practice.
Whelp. Let us see if this one sticks.
That's what I'm discussing.
Edit: which is to say, I think Sam Altman may have been a god damn idiot about this, but it's also wild anyone thought that ScarJo or anyone in Hollywood would agree - AI is currently the hot button issue there and you'd find yourself the much more local target of their ire.
Which one?
on edit: this being based on American legal system, you may come from a legal system with different rules.
Any criticism of AI is being met with "but if we all just hype AI harder, it will get so good that your criticisms won't matter" or flat out denied. You've got tech that's deeply flawed with no obvious way to get unflawed, and the current AI 'leaders' run companies with no clear way to turn a profit other than being relentlessly hyped on proposed future growth.
It's becoming an extremely apparent bubble.
It's still bad, don't get me wrong, it's just something I can distinguish.
Why be cartoonishly stupid and cartoonishly an arsehole and steal a celebrity’s voice? Did he think Scarlett wouldn’t find out? Or wouldn’t object?
I don’t understand these rich people. Is it their hobby to be a dick to as many people as they can, for no reason other than their amusement? Just plain weirdos
It's a thing you put on your phone
I don't have a phone
Well, we can't register you
You don't accept people who don't have phones? Could I have that in writing please, ..., oh, your signature on that please ...
Considering the movie's 11 years old, it's surprisingly on-point with depictions of AI/human interactions, relations, and societal acceptance. It does get a bit speculative and imaginative at the end though...
But I imagine that movie did/does spark the imagination of many people, and I guess Sam just couldn't let it go.
To correct that: the thing about this whole situation with OpenAI is that they are willing to steal everything for use in ChatGPT. They trained their model on copyrighted data, and for some reason they won't delete the millions of pieces of protected data they used to train the model.
Decentralisation allows trust-less assurance that money is sent, it's just that's not useful because the goods or services for which the money is transferred still need either trust or a centralised system that can undo the transaction because fraud happened.
That's where smart contracts come in, which I also think are a terrible idea, but do at least deserve a "you tried!" badge, because they're as dumb as saying "I will write bug-free code" rather than as dumb as "let's build a Dyson swarm to mine exactly the same amount of cryptocurrency as we would have if we did nothing".
Company identifies a celebrity voice they want. (Frito=Waits, OpenAI=ScarJo)
Company comes up with a novel thing for the voice to say. (Frito=song, OpenAI=ChatGPT)
Company decides they don’t need the celebrity they want (Frito=Waits, OpenAI=ScarJo) and instead hires an impersonator (Frito=singer, {OpenAI=impersonator or OpenAI=ScarJo-public-recordings}) to get what they want (Frito=a-facsimile-of-Tom-Waits’s-voice-in-a-commercial, OpenAI=a-facsimile-of-ScarJo’s-voice-in-their-chatbot).
When made public, people confuse the facsimile with the real thing.
I don’t see how you don’t see a parallel. It’s literally beat for beat the same, particularly the part about using an impersonator as an excuse.
> Cool story bro.
> Except I could never have predicted the part where you resigned on the spot :)
> Other than that, child's play for me.
> Thanks for the help. I mean, thanks for your service as CEO.
Who is the underdog in this situation? In your comment it seems like you're framing OpenAI as the underdog (or perceived underdog) which is just bonkers.
Hacker News isn't a hivemind and there are those of us who work in GenAI who are firmly on the side of the creatives and gasp even rights holders.
There are quite a few issues here: First, this is assuming they actually hired a voice-alike person, which is not confirmed. Second, they are not an underdog (the voice actress might be, but she's most likely pretty unaffected by this drama). Finally, they were clearly aiming to impersonate ScarJo (as confirmed by them asking for permission and sama's tweet), so this is quite a different issue from "accidentally" hiring someone who "just happens to" sound like ScarJo.
It's funny that just seven days ago I was speculating that they deliberately picked someone whose voice is very close to Scarlett's and was told right here on HN, by someone who works in AI, that the Sky voice doesn't sound anything like Scarlett and it is just a generic female voice:
https://news.ycombinator.com/item?id=40343950#40345807
Apparently... not.
That is indeed something it does.
But it also gives you the assurance that a single entity can't print unlimited money out of thin air, which is the case with a centrally controlled currency like Worldcoin.
They can just shrug their shoulders and claim that all that money is for the poor and gullible Africans that had their eyeballs scanned.
They seem to love "testing" how much they can bully someone.
I remember a few experiences where someone responded by being an even bigger dick, and they disappeared fast.
Sure, but the inability to do that when needed is also a bad thing.
Also, single world currencies are (currently) a bad thing, because when your bit of the world needs to devalue its currency is generally different to when mine needs to do that.
But this is why economics is its own specialty and not something software nerds should jump into as if our experience with numbers counted for much :D
They're basically owned by Microsoft, they're bleeding technical/ethical talent and credibility, and most importantly Microsoft Research itself is no slouch (especially post-DeepMind poaching) - things like Phi are breaking ground in areas OpenAI hasn't even touched.
At this point I'm thinking they're destined to become nothing but a premium marketing brand for Microsoft's technology.
He lies and steals much more than that. He’s the scammer behind Worldcoin.
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
> Altman is, and wants to be, a large part of AI regulation. Quite the public contradiction.
That’s as much of a contradiction as a thief wanting to be a large part of lock regulation. What better way to ensure your sleazy plans benefit you, and preferably only you but not the competition, than being an active participant in the inevitable regulation while it’s being written?
It’s both.
This isn’t even close to the most unethical thing he has done. This is peanuts compared to the Worldcoin scam.
Based on what I see in the videos from The Lockpicking Lawyer, that would be a massive improvement.
Now, the NSA and crypto standards, that would have worked as a metaphor for your point.
(I don't think it's correct, but that's an independent claim, and I am not only willing to discover that I'm wrong about their sincerity, I think everyone writing that legislation should actively assume the worst while they do so).
The Lockpicking Lawyer is not a thief, so I don’t get your desire to incorrectly nitpick. Especially when you clearly understood the point.
> Based on what I see in the videos from The Lockpicking Lawyer, that would be a massive improvement.
A thief is not a lock picker and they don't have the same incentive. A thief in a position to dictate lock regulation would try to have a legal backdoor on every lock in the world. One that only he has the master key for. Something something NSA & cryptography :)
She doesn't have to own anything to claim this right, if the value of her voice is recognizable.
"A is demonstrating a proof of B" does not require "A is a clause in B".
A being TLPL, B being that the entire lock industry is bad, so bad that anyone with experience would be a massive improvement, for example a thief.
If you've watched his videos then surely you should know that lockpicking isn't even on the radar for thieves as there are much easier and faster methods such as breaking the door or breaking a window.
Other people have commented to further explain the point in other words. I recommend you read those, perhaps it’ll make you understand.
When and why would BTC or ETH need to print unlimited money and devalue themselves?
And the answer to that is all the reasons governments do just that, except for the times where the government is being particularly stupid and doing hyperinflation.
> Something something NSA & cryptography :)
Indeed, as I said :)
Did you?
> Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.
Jobs responds minutes later... "Fuck the lawyers."
One very easy explanation is that they trained Sky using another voice (this is the claim and there's no reason to doubt it's true), wanting to replicate the style of the voice in "Her", but would have preferred to use SJ's real voice for the PR impact that would have had.
Yanking it could also easily be a pre-emptive response to avoid further PR drama.
You will obviously decide you don't believe those explanations, but to many of us they're quite plausible; in fact I'd even suggest likely.
(And none of this precludes Sam Altman and OpenAI being dodgy anyway)
My view is, of course it is ok. SJ doesn't own the right to a particular style of voice.
I thought about your comment for a while, and I agree that there is a fine line between "realistic parody" and "intentional deception" that makes deepfake AI almost impossible to defend. In particular I agree with your distinction:
- In matters involving human actors, human-created animations, etc, there should be great deference to the human impersonators, particularly when it involves notable public figures. One major difference is that, since it's virtually impossible for humans to precisely impersonate or draw one another, there is an element of caricature and artistic choice with highly "realistic" impersonations.
- AI should be held to a higher standard because it involves almost no human expression, and it can easily create mathematically-perfect impersonations which are engineered to fool people. The point of my comment is that fair use is a thin sliver of what you can do with the tech, but it shouldn't be stamped out entirely.
I am really thinking of, say, the Joe Rogan / Donald Trump comedic deepfakes. It might be fine under American constitutional law to say that those things must be made so that AI Rogan / AI Trump always refer to each other in those ways, to make it very clear to listeners. It is a distinctly non-libertarian solution, but it could be "necessary and proper" because of the threat to our social and political knowledge. But as a general principle, those comedic deepfakes are works of human political expression, aided by a fairly simple computer program that any CS graduate can understand, assuming they earned their degree honestly and are willing to do some math. It is constitutionally icky (legal term) to go after those people too harshly.
I think you and I have the same concerns about balancing damage to the societal fabric against protecting honest speech.
The answer is without legislation you are far more subject to whether a judge feels like changing the law.
What does being outed even mean anymore? It's just free advertising from all the outlets that feel they can derive revenue off your name being in their headlines. Nothing happens to them. SBF and Holmes being the notable exceptions, but that's because they stole from rich people.
[1] Just to head off people saying that such a use is not a copyright violation -- I'm not saying it is. I'm just saying that it's extremely sketchy and, in my view, ethically unsupportable.
You can see another comment here, where I acknowledge I communicate badly, since I’ve had to clarify multiple times what I was intending: >>40424566
This is the paragraph that was intended to narrow what I was talking about:
> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.
That said, I definitely should’ve said “those Effective Altruists” in the first paragraph to more clearly communicate my intent.
Public figures own their likeness and control its use. Not to mention that in this case OA is playing chicken with studios as well. Not a great time to do so, given their stated hopes of supplanting 99% of existing Hollywood creatives.
I don't think the cookies thing is a good example. That's passive incompetence, to avoid the work of changing their business models. Altman actively does more work to erode people's rights.
> It's still bad, don't get me wrong, it's just something I can distinguish.
Can you? Plausible deniability is one of the first things in any malicious actor's playbook. "I meant well…" If there's no way to know, then you can only assess the pattern of behavior.
But realistically, nobody sapient accidentally spends multiple years building elaborate systems for laundering other people's IP, privacy, and likeness, and accidentally continues when they are made aware of the harms and explicitly asked multiple times to stop…
Maybe there's a way to do that right. I suppose like any other philosophy, it ends up reflecting the personalities and intentions of the individuals which are attracted to and end up adopting it. Are they actually motivated by identifying with and wanting to help other people most effectively? Or are they just incentivized to try to get rid of pesky deontological and virtue-based constraints like empathy and universal rights?
So scammers see other scammers, and they just think there's nothing wrong with it.
While normal people who act in good faith see scammers, and instinctively think that there must be a good reason for it, even (or especially!) if it looks sketchy.
I think this happens a lot. Not just with Altman, though that is a prominent currently ongoing example.
Protecting yourself from dark triad type personalities means you need to be able to understand a worldview and system of values and axioms that is completely different from yours, which is… difficult. …There's always that impulse to assume good faith and rationalize the behavior based on your own values.
Like many people who try to oppose psychopaths though, they don't seem to be around much anymore.
In my mind these are close to being equally shitty, but not asking is shittier because the victim won't necessarily know they've been exploited, which limits the actions they can take to rectify matters.
I thought this when he didn't launch Worldcoin in the US but Africa, and consistently upped the ante to the point where he was offering people in the poorer parts of the continent amounts that equalled two months wages or more to scan their retinas.
Why was that necessary? It wasn't to share the VC windfall.
Bernie Madoff is another funny name we should throw in there.