zlacker

[return to "My AI skeptic friends are all nuts"]
1. habosa+VM[view] [source] 2025-06-03 03:51:46
>>tablet+(OP)
I’m an AI skeptic. I’m probably wrong. This article makes me feel kinda wrong. But I desperately want to be right.

Why? Because if I’m not right then I am convinced that AI is going to be a force for evil. It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze. It will concentrate immense power and wealth in the hands of people who I don’t trust. And it will do all of this while consuming truly shocking amounts of energy.

Not only do I think these things will happen, I think the Altmans of the world would eagerly agree that they will happen. They just think it will be interesting / profitable for them. It won’t be for us.

And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI. My skepticism (and unwillingness to aid in the advancement of AI) might slow things down a billionth of a percent. Maybe if there are more of me, things will slow down enough that we can find some sort of effective safeguards on this stuff before it’s out of hand.

So I’ll keep being skeptical, until it’s over.

◧◩
2. pj_muk+FR[view] [source] 2025-06-03 04:45:42
>>habosa+VM
"It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze."

I keep hearing this but have yet to find a good resource to study the issues. Most of what I've read so far falls into two buckets:

"It'll hijack our minds via Social Media" - in which case Social Media is the original sin and the problem we should be dealing with, not AI.

or

"It'll make us obsolete" - I use the cutting edge AI, and it will not, not anytime soon. Even if it does, I don't want to be a lamplighter rioting, I want to have long moved on.

So what other good theories of safety can I read? Genuine question.

◧◩◪
3. intend+XS[view] [source] 2025-06-03 05:02:26
>>pj_muk+FR
> Research we published earlier this year showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, which is comparable to the success rates of non-AI-phishing messages created by human experts. Perhaps even more worryingly, our new research demonstrates that the entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates

Bruce Schneier, May 2024

https://www.schneier.com/academic/archives/2024/06/ai-will-i...

I am seeing a stream of comments on Reddit that are entirely AI-driven, including bots that engage in full conversations. In the worst-case scenarios I'm looking at, it will be safer to assume everyone online is a bot.

I know of cases where people have been duped into buying stocks by an AI-generated version of a publicly known VP of a financial firm.

Then there’s the case where someone didn’t follow email hygiene and got into a zoom call with what appeared to be their CFO and team members, and transferred several million dollars out of the firm.

And we're only 2-3 years into this lovely process. The future is so bleak that when I talk about this with people who aren't involved in looking at these things, they call it nihilism.

It’s so bad that talking about it is like punching hope.

[go to top]