zlacker

[parent] [thread] 60 comments
1. ncalla+(OP)[view] [source] 2024-05-20 23:38:23
> Are we seriously saying we can't make voices that are similar to celebrities, when not using their actual voice?

They clearly thought it was close enough that they asked for permission, twice. And got two no’s. Going forward with it at that point was super fucked up.

It’s very bad to not ask permission when you should. It’s far worse to ask for permission and then ignore the response.

Totally ethically bankrupt.

replies(5): >>nicce+j1 >>avarun+Q3 >>ants_e+P7 >>menset+B9 >>gibbit+Ee
2. nicce+j1[view] [source] 2024-05-20 23:45:43
>>ncalla+(OP)
And they could have totally gotten away with it by never mentioning Scarlett's name. But of course, that is not what they wanted.

Edit: to clarify, since it is not an exactly identical voice, or even that close, they can plausibly deny it, and we would never know what their intention was.

But in this case, they have clearly created the voice to represent Scarlett's voice to demonstrate the capabilities of their product in order to get marketing power.

replies(1): >>visarg+wt
3. avarun+Q3[view] [source] 2024-05-20 23:59:32
>>ncalla+(OP)
> They clearly thought it was close enough that they asked for permission, twice.

You seem to be misunderstanding the situation here. They wanted ScarJo to voice their voice assistant, and she refused twice. They also independently created a voice assistant which sounds very similar to her. That doesn't mean they thought they had to ask permission for the similar voice assistant.

replies(4): >>tomrod+k7 >>chroma+08 >>voltai+38 >>ncalla+6n
◧◩
4. tomrod+k7[view] [source] [discussion] 2024-05-21 00:23:31
>>avarun+Q3
And... No. That is what OpenAI will assert, and good discovery by ScarJo's reps may prove or disprove it.
5. ants_e+P7[view] [source] 2024-05-21 00:26:14
>>ncalla+(OP)
Yes, totally ethically bankrupt. But what bewilders me is that they yanked it as soon as they heard from their lawyers. I would have thought that if they made the decision to go ahead despite getting two "no"s, that they at least had a legal position they thought was defensible and worth defending.

But it kind of looks like they released it knowing they couldn't defend it in court which must seem pretty bonkers to investors.

replies(3): >>ethbr1+tc >>foobar+9k >>emsign+dA
◧◩
6. chroma+08[view] [source] [discussion] 2024-05-21 00:27:05
>>avarun+Q3
So, what would they have done if she accepted? Claimed that the existing training of the Sky voice was voiced by her?
replies(4): >>famous+Fa >>sangno+Fh >>blacko+os >>avarun+Ws3
◧◩
7. voltai+38[view] [source] [discussion] 2024-05-21 00:27:20
>>avarun+Q3
You seem to be misunderstanding the legalities at work here: reaching out to her multiple times beforehand, along with tweets intended to underline the similarity to her work on Her, demonstrates intention. If they didn’t think they needed permission, why ask for permission multiple times and then yank it when she noticed?

Answer: because they knew they needed permission, after working so hard to associate with Her, and they hoped that in traditional tech fashion, if they moved fast and broke things enough, everyone would have to reshape around OpenAI's wants, rather than around the preexisting rights of the humans involved.

replies(3): >>KHRZ+Td >>parine+wp >>munksb+pV1
8. menset+B9[view] [source] 2024-05-21 00:36:30
>>ncalla+(OP)
Effective altruism would posit that it is worth one voice theft to help speed the rate of life saving ai technology in the hands of everyone.
replies(2): >>ehnto+Xh >>ncalla+tj
◧◩◪
9. famous+Fa[view] [source] [discussion] 2024-05-21 00:43:40
>>chroma+08
Voice cloning could be as simple as a few seconds of audio in the context window, since GPT-4o is a speech-to-speech transformer. They wouldn't need to claim anything, just switch samples. They haven't launched the new voice mode yet, just demos.
◧◩
10. ethbr1+tc[view] [source] [discussion] 2024-05-21 00:54:16
>>ants_e+P7
> I would have thought that if they made the decision to go ahead despite getting two "no"s, that they at least had a legal position they thought was defensible and worth defending.

They likely have a legal position which is defensible.

They're much more worried that they don't have a PR position which is defensible.

What's the point of winning the (legal) battle if you lose the war (of public opinion)?

Given the rest of their product is built on apathy to copyright, they're actively being sued by creators, and the general public is sympathetic to GenAI taking human jobs...

... this isn't a great moment for OpenAI to initiate a long legal battle, against a female movie actress / celebrity, in which they're arguing how her likeness isn't actually controlled by her.

Talk about optics!

(And I'd expect they quietly care much more about their continued ability to push creative output through their copyright launderer, than get into a battle over likeness)

replies(2): >>justin+to >>XorNot+aw
◧◩◪
11. KHRZ+Td[view] [source] [discussion] 2024-05-21 01:05:12
>>voltai+38
You could also ask: If Scarlett has a legal case already, why does she want legislation passed?
replies(4): >>minima+ue >>ncalla+lo >>bradch+qs >>static+sa2
◧◩◪◨
12. minima+ue[view] [source] [discussion] 2024-05-21 01:09:50
>>KHRZ+Td
To prevent it from happening again, with more legal authority than a legal precedent.
13. gibbit+Ee[view] [source] 2024-05-21 01:10:54
>>ncalla+(OP)
Are we surprised by this bankruptcy? As neat as AI is, it is only a thing because the corporate class sees it as a way to reduce costs by replacing people with it. The whole concept is bankrupt.
replies(4): >>ecjhdn+Ug >>ncalla+Dh >>emsign+SA >>Nasrud+BO1
◧◩
14. ecjhdn+Ug[view] [source] [discussion] 2024-05-21 01:31:55
>>gibbit+Ee
100% this.

It’s shocking to me how people cannot see this.

The only surprise here is that they didn’t think she’d push back. That is what completes the multilayered cosmic and dramatic irony of this whole vignette. Honestly feels like Shakespeare or Arthur Miller might have written it.

◧◩
15. ncalla+Dh[view] [source] [discussion] 2024-05-21 01:37:50
>>gibbit+Ee
I don’t think anyone said anything about being surprised by it?
◧◩◪
16. sangno+Fh[view] [source] [discussion] 2024-05-21 01:38:20
>>chroma+08
> Claimed that the existing training of the Sky voice was voiced by her?

That claim could very well be true. The letter requested information on how the voice was trained - OpenAI may not want that can of worms opened lest other celebrities start paying closer attention to the other voices.

◧◩
17. ehnto+Xh[view] [source] [discussion] 2024-05-21 01:40:38
>>menset+B9
It didn't require voice theft, they could have easily found a volunteer or paid for someone else.
◧◩
18. ncalla+tj[view] [source] [discussion] 2024-05-21 01:53:21
>>menset+B9
Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.

Their hubris will walk them right into federal prison for fraud if they’re not careful.

If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI.

I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

replies(5): >>parine+0p >>comp_t+ty >>0xDEAF+JC >>Intral+Tv2 >>Intral+1y2
◧◩
19. foobar+9k[view] [source] [discussion] 2024-05-21 01:59:26
>>ants_e+P7
> But it kind of looks like they released it knowing they couldn't defend it in court which must seem pretty bonkers to investors.

That actually seems like there may be a few people involved and one of them is a cowboy PM who said fuck it, ship it to make the demo. And then damage control came in later. Possibly the PM didn't even know about the asks for permission?

replies(2): >>anytim+Bu >>kubobl+QB
◧◩
20. ncalla+6n[view] [source] [discussion] 2024-05-21 02:32:02
>>avarun+Q3
> You seem to be misunderstanding the situation here. They wanted ScarJo to voice their voice assistant, and she refused twice. They also independently created a voice assistant which sounds very similar to her.

And promoted it using a tweet naming the movie that Johansson performed in, for the role that prompted them to ask her in the first place.

You have to be almost deliberately naive to not see that they were attempting to use her vocal likeness in this situation. There’s a reason they immediately walked it back after the situation was revealed.

Neither a judge, nor a jury, would be so willingly naive.

replies(1): >>munksb+TV1
◧◩◪◨
21. ncalla+lo[view] [source] [discussion] 2024-05-21 02:46:44
>>KHRZ+Td
Because bringing a case that requires discovery and a trial, under the current justice system and legislative framework, would probably cost hundreds of thousands to millions of dollars.

Maybe (maybe!) it’s worth it for someone like Johansson to take on the cost of that to vindicate her rights—but it’s certainly not the case for most people.

If your rights can only be defended from massive corporations by bringing lawsuits that cost hundreds of thousands to millions of dollars, then only the wealthy will have those rights.

So maybe she wants new legislative frameworks around these kind of issues to allow people to realistically enforce these rights that nominally exist.

For an example of updating a legislative framework to allow more easily vindicating existing rights, look up “anti-SLAPP legislation”, which many states have passed to make it easier for a defendant of a meritless lawsuit seeking to chill speech to have the lawsuit dismissed. Anti-SLAPP legislation does almost nothing to change the actual rights that a defendant has to speak, but it makes it much more practical for a defendant to actually exercise those rights.

So, the assumption that a call for updated legislation implies that no legal protection currently exists is just a bad assumption that does not apply in this situation.

◧◩◪
22. justin+to[view] [source] [discussion] 2024-05-21 02:47:34
>>ethbr1+tc
> They likely have a legal position which is defensible.

Doesn't sound like they have that either.

replies(1): >>bonton+fA2
◧◩◪
23. parine+0p[view] [source] [discussion] 2024-05-21 02:52:24
>>ncalla+tj
This is like attributing the crimes of a few fundamentalists to an entire religion.
replies(2): >>ncalla+Ip >>ocodo+Yt
◧◩◪
24. parine+wp[view] [source] [discussion] 2024-05-21 02:57:17
>>voltai+38
> If they didn’t think they needed permission, why ask for permission multiple times and then yank it when she noticed?

Many things that are legal are of questionable ethics. Asking permission could easily just be an effort for them to get better samples of her voice. Pulling the voice after debuting it is 100% a PR response. If there's a law that was broken, pulling the voice doesn't unbreak it.

replies(1): >>voltai+Er2
◧◩◪◨
25. ncalla+Ip[view] [source] [discussion] 2024-05-21 02:59:14
>>parine+0p
I don’t think so. I’ve narrowed my comments specifically to Effective Altruists who are making utilitarian trade-offs to justify known moral wrongs.

> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

Frankly, if you’re going to make an “ends justify the means” moral argument, you need to do a lot of work to address how those arguments have gone horrifically wrong in the past, and why the moral framework you’re using isn’t susceptible to those issues. I haven’t seen much of that from Effective Altruists.

I was responding to someone who was specifically saying an EA might argue why it’s acceptable to commit a moral wrong, because the ends justify it.

So, again, if someone is using EA to decide how to direct their charitable donations, volunteer their time, or otherwise decide between moral goods, I have no problem with it. That specifically wasn’t the context I was responding to.

replies(1): >>parine+MQ1
◧◩◪
26. blacko+os[view] [source] [discussion] 2024-05-21 03:24:08
>>chroma+08
Maybe they have a second one trained on her voice.
◧◩◪◨
27. bradch+qs[view] [source] [discussion] 2024-05-21 03:24:14
>>KHRZ+Td
She has a personal net worth of >$100m. She’s also married to a successful actor in his own right.

Her voice alone didn’t get her there — she did. That’s why celebrities are so protective about how their likeness is used: their personal brand is their asset.

There’s established legal precedent on exactly this—even if they didn’t train on her likeness, if it can reasonably be suspected by an unknowing observer that she personally has lent her voice to this, she has a strong case. Even OpenAI knew this, or they would not have asked in the first place.

◧◩
28. visarg+wt[view] [source] [discussion] 2024-05-21 03:37:27
>>nicce+j1
> since it is not exactly identical voice, or even not that close, they can plausibly deny it

When studios approach an actress A and she refuses, then another actress B takes the role, is that infringing on A's rights? Or should they just scrap the movie?

Maybe if they replicated a scene from A's movies or there was a striking likeness between the voices... but not generally.

replies(1): >>nicce+UL
◧◩◪◨
29. ocodo+Yt[view] [source] [discussion] 2024-05-21 03:42:16
>>parine+0p
Effective Altruists are the fundamentalists though. So no, it's not.
◧◩◪
30. anytim+Bu[view] [source] [discussion] 2024-05-21 03:47:31
>>foobar+9k
The whole company behaves like rogue cowboys.

If a PM there didn’t say “fuck it ship it even without her permission” they’d probably be replaced with someone who would.

I expect the cost of any potential legal action/settlement was happily accepted in order to put on an impressive announcement.

◧◩◪
31. XorNot+aw[view] [source] [discussion] 2024-05-21 04:03:59
>>ethbr1+tc
How is the PR position not defensible? One of the worst things you can generally do is admit fault, particularly if you have a complete defense.

Buckle in, go to court, and double-down on the fact that the public's opinion of actors is pretty damn fickle at the best of times - particularly if what you released was in fact based on someone you signed a valid contract with who just sounds similar.

Of course, this is all dependent on actually having a complete defense - you absolutely would not want to find Scarlett Johansson voice samples in file folders associated with the Sky model if it went to court.

replies(1): >>ethbr1+Ay
◧◩◪
32. comp_t+ty[view] [source] [discussion] 2024-05-21 04:28:33
>>ncalla+tj
> When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

Extremely reasonable position, and I'm glad that every time some idiot brings it up in the EA forum comments section they get overwhelmingly downvoted, because most EAs aren't idiots in that particular way.

I have no idea what the rest of your comment is talking about; EAs that have opinions about AI largely think that we should be slowing it down rather than speeding it up.

replies(2): >>ncalla+Ez >>emsign+vB
◧◩◪◨
33. ethbr1+Ay[view] [source] [discussion] 2024-05-21 04:29:45
>>XorNot+aw
In what world does a majority of the public cheer for OpenAI "stealing"* an actress's voice?

People who hate Hollywood? Most of that crowd hates tech even more.

* Because it would take the first news cycle to be branded as that

replies(1): >>XorNot+rK
◧◩◪◨
34. ncalla+Ez[view] [source] [discussion] 2024-05-21 04:39:30
>>comp_t+ty
In some sense I see a direct line between the EA argument being presented here, and the SBF consequentialist argument where he talks about being willing to flip a coin if it had a 50% chance to destroy the world and a 50% chance to make the world more than twice as good.

I did try to cabin my arguments to Effective Altruists that are making ends-justify-the-means arguments. I really don’t have a problem with people that are attempting to use EA to decide between multiple good outcomes.

I’m definitely not engaged enough with the Effective Altruists to know where the plurality of thought lies, so I was trying to respond in the context of this argument being put forward on behalf of Effective Altruists.

The only part I’d say applies to all EA is the brand taint that SBF has caused in the public perception.

◧◩
35. emsign+dA[view] [source] [discussion] 2024-05-21 04:45:29
>>ants_e+P7
It looks really unprofessional at minimum if not a bit arrogant, which is actually more concerning as it hints at a deeper disrespect for artists and celebrities.
◧◩
36. emsign+SA[view] [source] [discussion] 2024-05-21 04:51:51
>>gibbit+Ee
Problem is, they really believe that eventually we either won't be able to tell the difference between a human and an AI model, or that we won't care. Don't they understand the meaning of art?
◧◩◪◨
37. emsign+vB[view] [source] [discussion] 2024-05-21 04:57:41
>>comp_t+ty
The speed doesn't really matter if their end goal is morally wrong. A slower speed might give them an advantage to not overshoot and get backlash or it gives artists and the public more time to fight back against EA, but it doesn't hide their ill intentions.
◧◩◪
38. kubobl+QB[view] [source] [discussion] 2024-05-21 05:03:01
>>foobar+9k
> a cowboy PM who said fuck it, ship it to make the demo.

Given the timeline it sounds like the PM was told "just go ahead with it, I'll get the permission".

replies(1): >>unrave+nU1
◧◩◪
39. 0xDEAF+JC[view] [source] [discussion] 2024-05-21 05:11:19
>>ncalla+tj
>Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.

There's a fair amount of EA discussion of utilitarianism's problems. Here's EA founder Toby Ord on utilitarianism and why he ultimately doesn't endorse it:

https://forum.effectivealtruism.org/posts/YrXZ3pRvFuH8SJaay/...

>If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI

Very few in the EA community want to speed AI adoption. It's far more common to think that current AI companies are being reckless, and we need some sort of AI pause so we can do more research and ensure that AI systems are reliably beneficial.

>When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

The all-time most upvoted post on the EA Forum condemns SBF: https://forum.effectivealtruism.org/allPosts?sortedBy=top&ti...

replies(1): >>ncalla+AF
◧◩◪◨
40. ncalla+AF[view] [source] [discussion] 2024-05-21 05:40:40
>>0xDEAF+JC
I’ve had to explain myself a few times on this, so clearly I communicated badly.

I probably should have said _those_ Effective Altruists are shitty utilitarians. I was attempting—and since I’ve had to clarify a few times clearly failed—to take aim at the effective altruists that would make the utilitarian trade off that the commenter mentioned.

In fact, there’s a paragraph from the Toby Ord blog post that I wholeheartedly endorse and I think rebuts the exact claim that was put forward that I was responding to.

> Don’t act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.

So, my words were too broad. I don’t actually mean all effective altruists are shitty utilitarians. But the ones that would make the arguments I was responding to are.

I think Ord is a really smart guy, and has worked hard to put some awesome ideas out into the world. I think many others (and again, certainly not all) have interpreted and run with it as a framework for shitty utilitarianism.

◧◩◪◨⬒
41. XorNot+rK[view] [source] [discussion] 2024-05-21 06:33:44
>>ethbr1+Ay
It is wild to me that on HackerNews of all places, you'd think people don't love an underdog story.

Which is what this would be in the not-stupid version of events: they hired a voice actress for the rights to create the voice, she was paid, and then is basically told by the courts "actually you're unhireable because you sound too much like an already rich and famous person".

The issue of course is that OpenAI's reactions so far don't seem to indicate that they're actually confident they can prove this or that this is the case. Coz if this is actually the case, they're going about handling this in the dumbest possible way.

replies(3): >>ml-ano+wN >>Sebb76+z61 >>jubalf+yf1
◧◩◪
42. nicce+UL[view] [source] [discussion] 2024-05-21 06:49:35
>>visarg+wt
> When studios approach an actress A and she refuses, then another actress B takes the role, is that infringing on A's rights? Or should they just scrap the movie?

The scenario would have been that they approach no one.

replies(1): >>r2_pil+d33
◧◩◪◨⬒⬓
43. ml-ano+wN[view] [source] [discussion] 2024-05-21 07:02:05
>>XorNot+rK
It’s wild to me that there are people who think that OpenAI are the underdog. An $80bn Microsoft vassal, what a plucky upstart.

You realise that there are multiple employees including the CEO publicly drawing direct comparisons to the movie Her after having tried and failed twice to hire the actress who starred in the movie? There is no non idiotic reading of this.

replies(1): >>XorNot+TR
◧◩◪◨⬒⬓⬔
44. XorNot+TR[view] [source] [discussion] 2024-05-21 07:57:23
>>ml-ano+wN
You're reading my statements as defending OpenAI. Put on your "I'm the PR department hat" and figure out what you'd do if you were OpenAI given various permutations of the possible facts here.

That's what I'm discussing.

Edit: which is to say, I think Sam Altman may have been a god damn idiot about this, but it's also wild anyone thought that ScarJo or anyone in Hollywood would agree - AI is currently the hot button issue there and you'd find yourself the much more local target of their ire.

replies(1): >>ml-ano+W21
◧◩◪◨⬒⬓⬔⧯
45. ml-ano+W21[view] [source] [discussion] 2024-05-21 09:22:23
>>XorNot+TR
Then why bother mentioning an "underdog story" at all?

Who is the underdog in this situation? In your comment it seems like you're framing OpenAI as the underdog (or perceived underdog) which is just bonkers.

Hacker News isn't a hivemind and there are those of us who work in GenAI who are firmly on the side of the creatives and gasp even rights holders.

◧◩◪◨⬒⬓
46. Sebb76+z61[view] [source] [discussion] 2024-05-21 09:48:35
>>XorNot+rK
> they hired a voice actress for the rights to create the voice, she was paid, and then is basically told by the courts "actually you're unhireable because you sound too much like an already rich and famous person".

There are quite a few issues here: First, this is assuming they actually hired a voice-alike person, which is not confirmed. Second, they are not an underdog (the voice actress might be, but she's most likely pretty unaffected by this drama). Finally, they were clearly aiming to impersonate ScarJo (as confirmed by them asking for permission and sama's tweet), so this is quite a different issue than "accidentally" hiring someone that "just happens to" sound like ScarJo.

◧◩◪◨⬒⬓
47. jubalf+yf1[view] [source] [discussion] 2024-05-21 11:07:59
>>XorNot+rK
an obnoxious sleazy millionaire backed by microsoft is by no means “an underdog”
◧◩
48. Nasrud+BO1[view] [source] [discussion] 2024-05-21 14:30:58
>>gibbit+Ee
That is wrong on several levels. First off, it ignores the massive role of researchers. Should we start de-automating processes to employ more people at the cost of worsening margins?
◧◩◪◨⬒
49. parine+MQ1[view] [source] [discussion] 2024-05-21 14:40:02
>>ncalla+Ip
> I don’t think so. I’ve narrowed my comments specifically to Effective Altruists who are making utilitarian trade-offs to justify known moral wrongs.

Did you?

> Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.

replies(1): >>ncalla+Im2
◧◩◪◨
50. unrave+nU1[view] [source] [discussion] 2024-05-21 14:58:43
>>kubobl+QB
Ken Segall has a similar Steve Jobs story: he emails Jobs that the Apple legal team has just thrown a spanner in the works days before Ken's agency is set to launch Apple's big ad campaign, and asks what he should do.

Jobs responds minutes later... "Fuck the lawyers."

◧◩◪
51. munksb+pV1[view] [source] [discussion] 2024-05-21 15:03:40
>>voltai+38
> If they didn’t think they needed permission, why ask for permission multiple times and then yank it when she noticed?

One very easy explanation is that they trained Sky using another voice (this is their claim, and there is no reason to doubt it is true), wanting to replicate the style of the voice in "Her", but they would have preferred to use SJ's real voice for the PR impact that could have had.

Yanking it could also easily be a pre-emptive response to avoid further PR drama.

You will obviously decide you don't believe those explanations, but to many of us they're quite plausible; in fact I'd even suggest likely.

(And none of this precludes Sam Altman and OpenAI being dodgy anyway)

replies(1): >>voltai+2r2
◧◩◪
52. munksb+TV1[view] [source] [discussion] 2024-05-21 15:05:57
>>ncalla+6n
This is a genuine question. If it turns out they trained Sky on someone else's voice to similarly replicate the style of the voice in "Her", would you be ok with that? If it was proven that the voice was just similar to SJ's, would that be ok?

My view is, of course it is ok. SJ doesn't own the right to a particular style of voice.

◧◩◪◨
53. static+sa2[view] [source] [discussion] 2024-05-21 16:09:18
>>KHRZ+Td
Before Roe vs Wade was overturned you might have asked if abortion is legal why do abortion rights advocates want legislation passed?

The answer is without legislation you are far more subject to whether a judge feels like changing the law.

◧◩◪◨⬒⬓
54. ncalla+Im2[view] [source] [discussion] 2024-05-21 17:09:07
>>parine+MQ1
Sure, I should’ve said I tried to or I intended to:

You can see another comment here, where I acknowledge I communicated badly, since I’ve had to clarify multiple times what I was intending: >>40424566

This is the paragraph that was intended to narrow what I was talking about:

> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

That said, I definitely should’ve said “those Effective Altruists” in the first paragraph to more clearly communicate my intent.

◧◩◪◨
55. voltai+2r2[view] [source] [discussion] 2024-05-21 17:31:52
>>munksb+pV1
I actually believe that’s quite plausible. The trouble is, by requesting permission in the first place, they demonstrated intent, which is legally significant. I think a lot of your confusion is attempting to employ pure logic to a legal issue. They are not the same thing, and the latter relies heavily on existing precedent — of which you may, it seems, be unaware.
◧◩◪◨
56. voltai+Er2[view] [source] [discussion] 2024-05-21 17:36:05
>>parine+wp
They are trying to wriggle out of providing insight into how that voice was derived at all (like Google with the 100% of damages check). It would really suck for OpenAI if, for example, Altman had at some point emailed his team to ensure the soundalike was “as indistinguishable from Scarlett’s performance in HER as possible.”

Public figures own their likeness and control its use. Not to mention that in this case OA is playing chicken with studios as well. Not a great time to do so, given their stated hopes of supplanting 99% of existing Hollywood creatives.

◧◩◪
57. Intral+Tv2[view] [source] [discussion] 2024-05-21 18:00:06
>>ncalla+tj
Plus, describing this as "speed the rate of life saving ai technology in the hands of everyone" is… A Reach.
◧◩◪
58. Intral+1y2[view] [source] [discussion] 2024-05-21 18:11:44
>>ncalla+tj
The central contention of Effective Altruism, at least in practice if not in principle, seems to be that the value of thinking, feeling persons can be and should be reduced to numbers and objects that you can do calculations on.

Maybe there's a way to do that right. I suppose like any other philosophy, it ends up reflecting the personalities and intentions of the individuals which are attracted to and end up adopting it. Are they actually motivated by identifying with and wanting to help other people most effectively? Or are they just incentivized to try to get rid of pesky deontological and virtue-based constraints like empathy and universal rights?

◧◩◪◨
59. bonton+fA2[view] [source] [discussion] 2024-05-21 18:21:29
>>justin+to
Copilot still tells me I've committed a content policy violation if I ask it to generate an image "in Tim Burton's style". Tim Burton has been openly critical of generative AI.
◧◩◪◨
60. r2_pil+d33[view] [source] [discussion] 2024-05-21 20:50:17
>>nicce+UL
Your scenario leads to the disenfranchisement of lesser-known voice talent, since they cannot take on work that Top Talent has rejected if they happen to resemble them (has nobody here seen their doppelganger before? How unique is a person? 1 in 5 million? Then there are roughly 66 people of that type in the United States alone). Perhaps it's time to re-evaluate some of the rules in society. It's a healthy thing to do periodically.
◧◩◪
61. avarun+Ws3[view] [source] [discussion] 2024-05-21 23:14:02
>>chroma+08
They would create a voice in partnership with her? Not sure why you’re assuming it’s some insanely laborious process. It would obviously have been incredible marketing for them to actually get her on board.
[go to top]