zlacker

[return to "My AI skeptic friends are all nuts"]
1. habosa+VM[view] [source] 2025-06-03 03:51:46
>>tablet+(OP)
I’m an AI skeptic. I’m probably wrong. This article makes me feel kinda wrong. But I desperately want to be right.

Why? Because if I’m not right then I am convinced that AI is going to be a force for evil. It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze. It will concentrate immense power and wealth in the hands of people who I don’t trust. And it will do all of this while consuming truly shocking amounts of energy.

Not only do I think these things will happen, I think the Altmans of the world would eagerly agree that they will happen. They just think it will be interesting / profitable for them. It won’t be for us.

And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI. My skepticism (and unwillingness to aid in the advancement of AI) might slow things down a billionth of a percent. Maybe if there are more of me, things will slow down enough that we can find some sort of effective safeguards on this stuff before it’s out of hand.

So I’ll keep being skeptical, until it’s over.

◧◩
2. broken+eP[view] [source] 2025-06-03 04:22:02
>>habosa+VM
The downsides you list aren’t specific to AI. Globalization and automation have destabilized labor markets. A small handful of billionaires control most major social media platforms and have a huge influence on politics. Other types of technology, particularly crypto, use large amounts of energy for far more dubious benefits.

AI is just the latest in a long list of disruptive technologies. We can only guess about the long term ramifications. But if history is any indicator, people in a few decades will probably see AI as totally normal and will be discussing the existential threat of something new.

◧◩◪
3. jychan+jP[view] [source] 2025-06-03 04:23:33
>>broken+eP
Well, duh. The same thing applies to "Technology X can be used for war". But anyone with a brain can see nukes are on a different level than bayonets.

Claiming AI isn't unique as a tool for evil isn't interesting; the point is how much of a force multiplier it is.

◧◩◪◨
4. broken+6R[view] [source] 2025-06-03 04:39:23
>>jychan+jP
Every new technology is a greater force multiplier, with potential to be used for good or evil. That’s literally the point of technological advancement. Even nuclear bomb technology has a more positive side in nuclear reactors, radiotherapy, etc.
◧◩◪◨⬒
5. jychan+sS[view] [source] 2025-06-03 04:56:25
>>broken+6R
Yeah, that's completely missing the point. A bayonet multiplies a person's power by 1.1x; a nuke multiplies it by more than 1,000,000x. Trying to be cute and lumping them together as "every technology is a force multiplier" is peak autistic literalism: 1.1x and 1,000,000x are both technically multipliers, even though they're clearly not in the same category.
◧◩◪◨⬒⬓
6. broken+GW[view] [source] 2025-06-03 05:38:17
>>jychan+sS
You're using the nuke as a straw man because its negative impact is obvious, but most technology is not so clear-cut. We have machines that can do the work of 1,000,000 men, for example. The magnitude of a technological advance alone doesn't make it good or bad.

AI is a large leap forward in capability and will likely have significant impacts on society. But it’s far from clear that it will have disproportionate negative impacts like a nuke. More likely it will have benefits and downsides similar to numerous other modern technologies.
