zlacker

[parent] [thread] 8 comments
1. pj_muk+(OP)[view] [source] 2025-06-03 04:45:42
"It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze."

I keep hearing this but have yet to find a good resource to study the issues. Most of what I've read so far falls into two buckets:

"It'll hijack our minds via Social Media" - in which case Social Media is the original sin and the problem we should be dealing with, not AI.

or

"It'll make us obsolete" - I use cutting-edge AI, and it will not, not anytime soon. Even if it does, I don't want to be a lamplighter rioting; I want to have long since moved on.

So what other good theories of safety can I read? Genuine question.

replies(3): >>intend+i1 >>shwouc+pd >>TomasB+0e
2. intend+i1[view] [source] 2025-06-03 05:02:26
>>pj_muk+(OP)
> Research we published earlier this year showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, which is comparable to the success rates of non-AI-phishing messages created by human experts. Perhaps even more worryingly, our new research demonstrates that the entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates

Bruce Schneier, May 2024

https://www.schneier.com/academic/archives/2024/06/ai-will-i...

I am seeing a stream of comments on Reddit that are entirely AI-driven, and even bots that are engaging in conversations. In the worst-case scenarios I'm looking at, it will be better to assume everyone online is a bot.

I know of cases where people have been duped into buying stocks because of an AI generated version of a publicly known VP of a financial firm.

Then there’s the case where someone didn’t follow email hygiene and got into a zoom call with what appeared to be their CFO and team members, and transferred several million dollars out of the firm.

And it's only 2-3 years into this lovely process. The future is so bleak that when I talk about this with people not involved in looking at these things, they call it nihilism.

It's so bad that talking about it feels like punching hope.

replies(1): >>kamaal+r6
3. kamaal+r6[view] [source] [discussion] 2025-06-03 05:52:38
>>intend+i1
At some point trust will break down to the point where you will only believe things from a real human with a badge (i.e., talking to them in person).

For that matter, my email has been /dev/null for a while now; unless I have spoken to a person over the phone and expect their email, I don't even check my inbox. My Facebook/Instagram account is largely used as a photo backup service, plus an online directory. And Twitter is for news.

I mostly don't trust anything that comes in online unless I have already verified that the other party is somebody I'm familiar with, and even then only through the established means of communication we have both agreed to.

I do believe reddit, quora, leetcode et al. will largely be reduced to /dev/null spaces very soon.

replies(1): >>intend+59
4. intend+59[view] [source] [discussion] 2025-06-03 06:23:02
>>kamaal+r6
The issue is that you, as an individual, can say that; but society, as an agglomeration of individuals, can't.

There was a direct benefit from digitization: being able to trust digital video and information allowed nations to deliver services.

Trust was a public good. Factual information cheaply produced and disseminated was a public good.

Those are now more expensive because the genAI content easily surpasses any cheap bullshit filter.

It also ends up undermining faith in true content, especially content that happens to sound outlandish.

I saw an image of a penny hitch on Reddit, and I can no longer tell whether it's real without checking.

replies(1): >>kamaal+ej
5. shwouc+pd[view] [source] 2025-06-03 07:05:15
>>pj_muk+(OP)
Try to find a date on a dating app and you will experience it firsthand.
7. TomasB+0e[view] [source] 2025-06-03 07:11:55
>>pj_muk+(OP)
Slightly tangential: a lot of these issues are philosophical in origin, because we don't have priors to study. But just because, for example, advanced nanotechnology doesn't exist yet, that doesn't mean we can't imagine some potential problems based on analogous things (viruses, microplastics) or educated assumptions.

That's why there's no single source that's useful to study issues related to AI. Until we see an incident, we will never know for sure what is just a possibility and what is (not) an urgent or important issue [1].

So the best we can do is reason by analogy. For example: the centuries of the Industrial Revolution and the many disruptive events that followed; the history of wars and upheavals, many of which were at least partially caused by labor-related problems [2]; labor disruptions in the 20th century, including the proliferation of unions, offshoring, immigration, anticolonialism, etc.

> "Social Media is the original sin"

In the same way that radio, television and the Internet are the "original sin" in large-scale propaganda-induced violence.

> "I want to have long moved on."

Only if you have somewhere to go. Others may not be that mobile or lucky. If autonomous trucks make the trucking profession obsolete, it's questionable how quickly truckers can "move on".

[1] For example, remote systems existed for quite some time, yet we've only seen a few assassination attempts. Does that mean that slaughterbots are not a real issue? It's unclear and too early to say.

[2] For example, high unemployment and low economic mobility in post-WW1 Germany; serfdom in Imperial Russia.

8. kamaal+ej[view] [source] [discussion] 2025-06-03 08:05:18
>>intend+59
>>It also ends up undermining faith in true content, which may be outlandish.

In all honesty, art in some form or other has always been simulated to some extent. Heck, the whole idea of a story, even in a book, is something you know hasn't happened in real life, but you are willing to suspend disbelief for a while to be entertained. This is the essence of all entertainment: it is not real, but it makes you feel good.

Action movies have had CGI; cartoon shows, magic shows, and even actors putting on makeup can all be considered deviations from truth.

I guess your point is that news can be manufactured and public opinion rigged toward all sorts of bad ends. But by this point, a good portion of the public already knows enough to be wary of it. Come to think of it, a lot of news is already so heavily edited that it doesn't represent the original story. This is just a continuation of the same.

replies(1): >>intend+mv
9. intend+mv[view] [source] [discussion] 2025-06-03 10:16:02
>>kamaal+ej
There are two(ish) things at play here. The first is an inherent problem with our information economy: News competes against Entertainment. Reduced further, it is accurate content competing against inaccurate content for revenue and profit.

The second issue at play here is the level of effort required to spoof content and its flip side - the level of effort required to verify accuracy of content.

I am talking about the second issue: effectively, our ability to suss out what is real is now insufficient. Is the person you are talking to in the comments a bot? Is that short message from a human? Is that interesting historical fact true? Did people really do that? That can't be real, can it?

What concerns me is the balance: it used to take X amount of time and effort to check whether something was valid, and Y amount of time to create a convincing facsimile.

The issue is that Y is now much lower, so when something outlandish shows up, it takes more time and effort to check whether it's true. I used to be able to look at an image and tell at a glance if it was fake. Now I can't. That means there's a whole swathe of content I cannot trust anymore, unless I am willing to put in a decent chunk of effort to verify it.
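To make the asymmetry concrete, here is a toy back-of-the-envelope model (every number below is invented for illustration, not taken from any study): when faking gets cheap, the content stream balloons and verification per item gets harder, so the fraction a reader can afford to check collapses.

```python
# Toy model of the fake-vs-verify cost asymmetry.
# All numbers are made up purely to illustrate the shape of the problem.

def verified_fraction(items: int, verify_cost: float, budget: float) -> float:
    """Fraction of a content stream a reader can afford to verify,
    given a fixed time/effort budget and a per-item verification cost."""
    affordable = budget / verify_cost          # how many items we can check
    return min(affordable / items, 1.0)        # capped at checking everything

# Before cheap generation: a small stream, and each check is quick.
before = verified_fraction(items=10, verify_cost=1.0, budget=10)

# After: the stream is 20x larger, and convincing fakes take real
# digging to debunk, so each check costs 5x as much.
after = verified_fraction(items=200, verify_cost=5.0, budget=10)

print(before, after)  # 1.0 vs 0.01 - almost everything now goes unverified
```

The point of the sketch is only that the collapse is multiplicative: more items times costlier checks, against a fixed budget.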

This means I am also less invested in public groups or communities, because they are likely to be filled with bots. My posture is to be more guarded and suspicious.

Extend this to the entire ecosystem and it becomes the dystopian worst-case scenario: that voice asking for help in some corner of the net is likely a mimic, not an adventurer who needs help.

I am not too concerned about rigging popular opinion, because that process has already been discovered (I’ll plug Network Propaganda again).
