zlacker

[return to "My AI skeptic friends are all nuts"]
1. lolind+qq[view] [source] 2025-06-02 23:57:50
>>tablet+(OP)
> Meanwhile, software developers spot code fragments seemingly lifted from public repositories on Github and lose their shit. What about the licensing? If you’re a lawyer, I defer. But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.

This kind of guilt-by-association play might be the most common fallacy in internet discourse. None of us are allowed to express outrage at the bulk export of GitHub repos with zero regard for their copyleft status because some members of the software engineering community are large-scale pirates? How is that a reasonable argument to make?

The most obvious problem with this is that it's a faulty generalization. Many of us aren't building large-scale piracy sites of any sort. Many of us aren't bulk downloading media of any kind. The author has no clue whether the individual humans making the IP argument against AI are engaged in piracy, so this is an extremely weak way to reject that line of argument.

The second huge problem with this argument is that it assumes support for IP rights is a blanket yes/no question, which it obviously isn't. I can believe fervently that SciHub is a public good and Elsevier is evil, and at the same time believe that the copyleft licenses a collective of developers placed on their work should be respected and that GitHub was evil to steal their code. Indeed, these two positions will probably occur together more often than not, because both are founded in the idea that IP law should be used to protect individuals from corporations rather than the other way around.

The author has some valid points, but dismissing this entire class of arguments so flippantly is intellectually lazy.

◧◩
2. mattl+Ut[view] [source] 2025-06-03 00:28:01
>>lolind+qq
I’m a free software developer and have been for over 25 years. I’ve worked at many of the usual places too and I enjoy and appreciate the different licenses used for software.

I’m also a filmmaker and married to a visual artist.

I don’t touch this stuff at all. It’s all AI slop to me. I don’t want to see it, I don’t want to work with it or use it.

◧◩◪
3. xpe+tG[view] [source] 2025-06-03 02:31:11
>>mattl+Ut
Some people make these kinds of claims for ethical reasons, I get it. But be careful to not confuse one’s ethics with the current state of capability, which changes rapidly. Most people have a tendency to rationalize, and we have to constantly battle it.

Without knowing the commenter above, I’ll say this: don’t assume an individual boycott is necessarily effective. If one is motivated by ethics, I think one is morally required to find effective ways to engage, to shape and nudge the future. It is important to know what you’re fighting for (and against). IP protection? Human dignity through work? Agency to affect one’s own life? Other aspects? All are important.

◧◩◪◨
4. taurat+7T[view] [source] 2025-06-03 05:05:30
>>xpe+tG
"Morally required to ... engage" with technologies that one disagrees with sounds fairly easily debunk-able to me. Everyone does what they can live with - being up close and personal, in empathy with humans who are negatively effected by a given technology, they can choose to do what they want.

Who knows, we might find out in a month that this shit we're doing is really unsafe and is a really bad idea, and doesn't even work ultimately for what we'd use it for. LLMs already lie and blackmail.

◧◩◪◨⬒
5. xpe+3A1[view] [source] 2025-06-03 12:15:59
>>taurat+7T
Five points. First, a moral code is a guiding star: principles to strive for, but not necessarily achieved.

Second, “people can do what they want”? This may not even be self-consistent. Humans are complex and operate imperfectly across time horizons and goals that are unclear and even contradictory.

Third, assuming people have some notion of consistency in what they want, can people do what they want? To some degree. But we live in a world of constraints. Consider this: if one only does what one wants, what does that tell you? Are they virtuous? Maybe, maybe not: it depends on the quality of their intentions. Or consider the usual compromising of one’s goals: people often change what they want to match what is available. Consider someone in jail, a parent who has lost a child, a family in a war zone, or someone who never gets the opportunity to live up to their potential.

Fourth, based on #3 above, we probably need to refine the claim to say this: people strive to choose the best action available to them. But even in this narrower framing, saying “people do what they can” seems suspect to me, to the point of being something we tell ourselves to feel better. On what basis can one empirically measure how well people act according to their values? I would be genuinely interested in attempts to measure this.

Fifth, here is what I mean by engaging with a technology you disagree with: you have to engage in order to understand what you are facing. You should clarify and narrow your objections: which aspects of the technology are problematic? Few technologies are intrinsically good or evil; it is usually more about how they are used. So mindfully and selectively use the technology in service of your purposes. (Don’t protest the wrong thing out of spite.) Focus on your goals and make the hard tradeoffs.

Here is an example of #5. If one opposes urban development patterns that overemphasize private vehicles, does this mean boycotting all vehicles? Categorically refusing to rent a car? That would miss the point. Some of the most effective actions involve getting involved in local politics and advocacy groups. Hoping that isolated individual action against entrenched interests will move the needle is wishful thinking. My point is simple: choose effective actions to achieve your goals. Many of these goals can only be achieved with systematic thinking and collective action.

◧◩◪◨⬒⬓
6. taurat+v33[view] [source] 2025-06-03 21:13:27
>>xpe+3A1
Just responding to point 5 here, as I think the rest is a capable examination but one that starts to move around the point I'm trying to make: I disagree that one morally has to engage with AI. It's not just about "understanding what you are facing" - that's a tactical choice, not a moral one. It's simply not a moral imperative. Non-engagement can be a protest as well. It's one of the ways the Overton window maintains itself - if someone takes the, to me, extreme view that AI/LLMs will within the next 5 years result in massive economic changes and eliminate much of society's need for artists or programmers, I choose not to engage with that view or give it light. I grew up around doomsayers and people proclaiming Armageddon, and the arguments being made here are often on similar ground. I think they're kooks who don't give a fuck about the consequences of their accelerationism; they're just chasing dollars.

Just as I don't need to understand the finer points of extreme bigotry to be opposed to it, we don't need to be experts on LLMs to be opposed to the well-heeled and breathless hype surrounding it, and choose to not engage with it.

◧◩◪◨⬒⬓⬔
7. xpe+lC5[view] [source] 2025-06-04 19:37:11
>>taurat+v33
> Just as I don't need to understand the finer points of extreme bigotry to be opposed to it, we don't need to be experts on LLMs to be opposed to the well-heeled and breathless hype surrounding it, and choose to not engage with it.

If by the last "it" you mean "the hype", then I agree.

But -- sorry if I'm repeating -- I don't agree with conflating the tools themselves with the hype about them. It is fine to not engage with the hype. But it is unethical to boycott LLM tooling itself when it could serve ethical purposes. For example, many proponents of AI safety recommend using AI capabilities to improve AI safety research.

This argument does rely on consequentialist reasoning, which certainly isn't the only ethical game in town. That said, I would find it curious (and probably worth unpacking / understanding) if one claimed deontological reasons for avoiding a particular tool, such as an LLM (i.e. for intrinsic reasons). To give an example, I can understand how some people might say that lying is intrinsically wrong (though I disagree). But I would have a hard time accepting that _using_ an LLM is intrinsically wrong. There would need to be deeper reasons given: correctness, energy usage, privacy, accuracy, the importance of using one's own mental faculties, or something plausible.

[go to top]