1. Intral+ (OP) 2024-05-15 23:20:14
Idk man, I'm too busy being terrified of the use of LLMs as propaganda agents, micro-targeting adtech vectors, mass gaslighters and cultural homogenizers.

I mean, these things are literally designed to statelessly yet convincingly talk about events they can't see, experiences they can't understand, emotions they can't feel… If a human acted like that, we'd call them a psychopath.

We already know that our social structures tend to be quite vulnerable to dark triad type personalities. And yet, while human psychopaths are limited by genetics to a small percentage of the population, there's no limit on the number of spambot instances you can instruct to attack your political rivals, on Alexa 2.0 updates that could be pushed to sound 5% sadder when talking about a competitor's products, on LLM moderators that can be deployed to subtly correct "organic" interactions that drift out of a known profitable state space… And that's just the obvious next step from where we're already at today. I'm sure the real use cases for automated lying machines will be more horrifying than most of us can imagine today, just as nobody could have predicted in 2010 that Twitter and Facebook would enable ISIS, Trump, nonconsensual mass human experimentation, the Rohingya genocide…

Which is to say, selling LLM "friends" or "girlfriends" as a way to addictively exploit people's loneliness seems like one of the least harmful things that could come out of the current "AI" push. Sad, yes, but compared to where I think this is headed, that seems like dodging a bullet.

> I'm so sick of startups taking advantage of people. So, so fucking gross.

Silicon Valley was a mistake. An entire industry controlled largely by humans who decided they like predictable, programmable machines more than they like free and equal persons. What was the expected outcome?
