zlacker

[parent] [thread] 14 comments
1. menset+(OP)[view] [source] 2024-05-21 00:36:30
Effective altruism would posit that it is worth one voice theft to help speed the delivery of life-saving AI technology into the hands of everyone.
replies(2): >>ehnto+m8 >>ncalla+S9
2. ehnto+m8[view] [source] 2024-05-21 01:40:38
>>menset+(OP)
It didn't require voice theft; they could easily have found a volunteer or paid someone else.
3. ncalla+S9[view] [source] 2024-05-21 01:53:21
>>menset+(OP)
Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.

Their hubris will walk them right into federal prison for fraud if they’re not careful.

If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI.

I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

replies(5): >>parine+pf >>comp_t+So >>0xDEAF+8t >>Intral+im2 >>Intral+qo2
4. parine+pf[view] [source] [discussion] 2024-05-21 02:52:24
>>ncalla+S9
This is like attributing the crimes of a few fundamentalists to an entire religion.
replies(2): >>ncalla+7g >>ocodo+nk
5. ncalla+7g[view] [source] [discussion] 2024-05-21 02:59:14
>>parine+pf
I don’t think so. I’ve narrowed my comments specifically to Effective Altruists who are making utilitarian trade-offs to justify known moral wrongs.

> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

Frankly, if you’re going to make an “ends justify the means” moral argument, you need to do a lot of work to address how those arguments have gone horrifically wrong in the past, and why the moral framework you’re using isn’t susceptible to those issues. I haven’t seen much of that from Effective Altruists.

I was responding to someone who was specifically saying an EA might argue why it’s acceptable to commit a moral wrong, because the ends justify it.

So, again, if someone is using EA to decide how to direct their charitable donations, volunteer their time, or otherwise decide between moral goods, I have no problem with it. That specifically wasn’t the context I was responding to.

replies(1): >>parine+bH1
6. ocodo+nk[view] [source] [discussion] 2024-05-21 03:42:16
>>parine+pf
Effective Altruists are the fundamentalists though. So no, it's not.
7. comp_t+So[view] [source] [discussion] 2024-05-21 04:28:33
>>ncalla+S9
> When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

Extremely reasonable position, and I'm glad that every time some idiot brings it up in the EA forum comments section they get overwhelmingly downvoted, because most EAs aren't idiots in that particular way.

I have no idea what the rest of your comment is talking about; EAs that have opinions about AI largely think that we should be slowing it down rather than speeding it up.

replies(2): >>ncalla+3q >>emsign+Ur
8. ncalla+3q[view] [source] [discussion] 2024-05-21 04:39:30
>>comp_t+So
In some sense I see a direct line between the EA argument being presented here, and the SBF consequentialist argument where he talks about being willing to flip a coin if it had a 50% chance to destroy the world and a 50% chance to make the world more than twice as good.

I did try to cabin my arguments to Effective Altruists who are making ends-justify-the-means arguments. I really don’t have a problem with people who are attempting to use EA to decide between multiple good outcomes.

I’m definitely not engaged enough with the Effective Altruists to know where the plurality of thought lies, so I was trying to respond in the context of this argument being put forward on behalf of Effective Altruists.

The only part I’d say applies to all of EA is the brand taint that SBF has caused in the public perception.

9. emsign+Ur[view] [source] [discussion] 2024-05-21 04:57:41
>>comp_t+So
The speed doesn't really matter if their end goal is morally wrong. A slower speed might give them the advantage of not overshooting and drawing backlash, or it might give artists and the public more time to fight back against EA, but it doesn't hide their ill intentions.
10. 0xDEAF+8t[view] [source] [discussion] 2024-05-21 05:11:19
>>ncalla+S9
>Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.

There's a fair amount of EA discussion of utilitarianism's problems. Here's EA founder Toby Ord on utilitarianism and why he ultimately doesn't endorse it:

https://forum.effectivealtruism.org/posts/YrXZ3pRvFuH8SJaay/...

>If Effective Altruists want to speed the adoption of AI with the general public, they’d do well to avoid talking about it, lest the general public make a connection between EA and AI

Very few in the EA community want to speed AI adoption. It's far more common to think that current AI companies are being reckless, and we need some sort of AI pause so we can do more research and ensure that AI systems are reliably beneficial.

>When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

The all-time most upvoted post on the EA Forum condemns SBF: https://forum.effectivealtruism.org/allPosts?sortedBy=top&ti...

replies(1): >>ncalla+Zv
11. ncalla+Zv[view] [source] [discussion] 2024-05-21 05:40:40
>>0xDEAF+8t
I’ve had to explain myself a few times on this, so clearly I communicated badly.

I probably should have said _those_ Effective Altruists are shitty utilitarians. I was attempting (and, since I’ve had to clarify a few times, clearly failing) to take aim at the Effective Altruists who would make the utilitarian trade-off that the commenter mentioned.

In fact, there’s a paragraph from the Toby Ord blog post that I wholeheartedly endorse, and I think it rebuts the exact claim I was responding to.

> Don’t act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.

So, my words were too broad. I don’t actually mean all effective altruists are shitty utilitarians. But the ones that would make the arguments I was responding to are.

I think Ord is a really smart guy, and has worked hard to put some awesome ideas out into the world. I think many others (and again, certainly not all) have interpreted and run with it as a framework for shitty utilitarianism.

12. parine+bH1[view] [source] [discussion] 2024-05-21 14:40:02
>>ncalla+7g
> I don’t think so. I’ve narrowed my comments specifically to Effective Altruists who are making utilitarian trade-offs to justify known moral wrongs.

Did you?

> Effective Altruists are just shitty utilitarians that never take into account all the myriad ways that unmoderated utilitarianism has horrific failure modes.

replies(1): >>ncalla+7d2
13. ncalla+7d2[view] [source] [discussion] 2024-05-21 17:09:07
>>parine+bH1
Sure, I should’ve said I tried to or I intended to:

You can see another comment here, where I acknowledge I communicated badly, since I’ve had to clarify multiple times what I was intending: >>40424566

This is the paragraph that was intended to narrow what I was talking about:

> I will say, when EA are talking about where they want to donate their money with the most efficacy, I have no problem with it. When they start talking about the utility of committing crimes or other moral wrongs because the ends justify the means, I tend to start assuming they’re bad at morality and ethics.

That said, I definitely should’ve said “those Effective Altruists” in the first paragraph to more clearly communicate my intent.

14. Intral+im2[view] [source] [discussion] 2024-05-21 18:00:06
>>ncalla+S9
Plus, describing this as "speed the delivery of life-saving AI technology into the hands of everyone" is… A Reach.
15. Intral+qo2[view] [source] [discussion] 2024-05-21 18:11:44
>>ncalla+S9
The central contention of Effective Altruism, at least in practice if not in principle, seems to be that the value of thinking, feeling persons can be and should be reduced to numbers and objects that you can do calculations on.

Maybe there's a way to do that right. I suppose, like any other philosophy, it ends up reflecting the personalities and intentions of the individuals who are attracted to it and end up adopting it. Are they actually motivated by identifying with and wanting to help other people most effectively? Or are they just incentivized to try to get rid of pesky deontological and virtue-based constraints like empathy and universal rights?
