zlacker

[return to "My AI skeptic friends are all nuts"]
1. habosa+VM 2025-06-03 03:51:46
>>tablet+(OP)
I’m an AI skeptic. I’m probably wrong. This article makes me feel kinda wrong. But I desperately want to be right.

Why? Because if I’m not right then I am convinced that AI is going to be a force for evil. It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze. It will concentrate immense power and wealth in the hands of people who I don’t trust. And it will do all of this while consuming truly shocking amounts of energy.

Not only do I think these things will happen, I think the Altmans of the world would eagerly agree that they will happen. They just think it will be interesting / profitable for them. It won’t be for us.

And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI. My skepticism (and unwillingness to aid in the advancement of AI) might slow things down a billionth of a percent. Maybe if there are more of me, things will slow down enough that we can find some sort of effective safeguards on this stuff before it’s out of hand.

So I’ll keep being skeptical, until it’s over.

2. simonw+LO 2025-06-03 04:17:08
>>habosa+VM
"And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI."

I firmly believe that too. That's why I've been investing a great deal of effort in helping people understand what this stuff can and can't do and how best to make use of it.

I don't think we can stop it, but I do think (hope) we can show people how to use it in a way where the good applications outweigh the bad.

3. abraae+CQ 2025-06-03 04:34:25
>>simonw+LO
> I don't think we can stop it, but I do think (hope) we can show people how to use it in a way where the good applications outweigh the bad.

That feels idealistic. About as realistic as telling people how to use semiconductors or petrochemicals for good instead of bad.

No one knows where AI is going, but one thing you can be sure of: the bad actors don't give two hoots what we think, and they will act in their own interests as always. And as history shows, there are still many, many bad actors around. And when the bad actors do bad things with the technology, the good actors have no choice but to react.

4. atemer+HZ 2025-06-03 06:10:52
>>abraae+CQ
The only way to fight bad actors using the technology is good actors using the technology.

You can write walls of text about ethics and social failure. Bad actors won't care.

You can tell everyone that some technology is bad and everyone should stop using it. Some good people will listen to you and stop. Bad actors won't stop, and they will have the technological edge.

You can ask politicians for regulation. However, your government might be a bad actor as well (and we recently had a fine demonstration of that). They will not regulate in the interests of good people. They will regulate for what stakeholders want. Common people are never stakeholders.

If you want to stop bad actors doing bad things with AI: learn AI faster and figure out how to use AI to stop AI. This is the only way to fly.
