zlacker

Statement from Scarlett Johansson on the OpenAI "Sky" voice
1. anon37+t5 2024-05-20 22:58:41
>>mjcl+(OP)
Well, that statement lays out a damning timeline:

- OpenAI approached Scarlett last fall, and she refused.

- Two days before the GPT-4o launch, they contacted her agent and asked that she reconsider. (Two days! This means they already had everything they needed to ship the product with Scarlett’s cloned voice.)

- Having received no response, OpenAI demoed the product anyway, with Sam tweeting “her” in reference to Scarlett’s film.

- When Scarlett’s counsel asked for an explanation of how the “Sky” voice was created, OpenAI yanked the voice from their product line.

Perhaps Sam’s next tweet should read “red-handed”.

2. nickth+R7 2024-05-20 23:10:38
>>anon37+t5
This statement from Scarlett really changed my perspective. I used and loved the Sky voice, and I did feel it sounded a little like her, but more than that, it was the best of their voice offerings. I was mad when they removed it. But now I’m mad it was ever there to begin with. This timeline makes it clear that this wasn’t a coincidence, and maybe not even the hiring of an impressionist (which is where things get a little more wishy-washy for me).
3. crimso+y9 2024-05-20 23:19:08
>>nickth+R7
But it's clearly not her voice, right? The version that's been on the app for a year just isn't. Like, it's clearly intended to be slightly reminiscent of her, but it's also very clearly not. Are we seriously saying we can't make voices that are similar to celebrities, when not using their actual voice?
4. ncalla+jc 2024-05-20 23:38:23
>>crimso+y9
> Are we seriously saying we can't make voices that are similar to celebrities, when not using their actual voice?

They clearly thought it was close enough that they asked for permission, twice. And got two no’s. Going forward with it at that point was super fucked up.

It’s very bad to not ask permission when you should. It’s far worse to ask for permission and then ignore the response.

Totally ethically bankrupt.

5. menset+Ul 2024-05-21 00:36:30
>>ncalla+jc
Effective altruism would posit that one voice theft is a worthwhile price for speeding life-saving AI technology into the hands of everyone.
6. ncalla+Mv 2024-05-21 01:53:21
>>menset+Ul
Effective Altruists are just shitty utilitarians who never take into account the myriad horrific failure modes of unmoderated utilitarianism.

Their hubris will walk them right into federal prison for fraud if they’re not careful.

If Effective Altruists want to speed the adoption of AI by the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI.

I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

7. parine+jB 2024-05-21 02:52:24
>>ncalla+Mv
This is like attributing the crimes of a few fundamentalists to an entire religion.
8. ncalla+1C 2024-05-21 02:59:14
>>parine+jB
I don’t think so. I’ve narrowed my comments specifically to Effective Altruists who are making utilitarian trade-offs to justify known moral wrongs.

> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

Frankly, if you’re going to make an “ends justify the means” moral argument, you need to do a lot of work to address how those arguments have gone horrifically wrong in the past, and why the moral framework you’re using isn’t susceptible to those issues. I haven’t seen much of that from Effective Altruists.

I was responding to someone who was specifically saying an EA might argue that it’s acceptable to commit a moral wrong because the ends justify it.

So, again, if someone is using EA to decide how to direct their charitable donations, volunteer their time, or otherwise decide between moral goods, I have no problem with it. That specifically wasn’t the context I was responding to.
