I have been sounding the alarm about online bots for several years now.
Policies can’t work if you can’t enforce them. There are several issues:
1) You won’t really know whether accounts are posting bot content or not. The bots can be trained on existing HN text.
2) Looking for patterns such as “one type of comment” or “frequent posting” can be defeated by a bot that varies its comment style or mimics the styles of a few popular users.
3) Swarms of bots can each eke out a little karma here and there yet collectively amass far more karma over time (a toy sketch after this list makes the arithmetic concrete). The sheer number of accounts is what you might want to look out for, which means at some point you might be grandfathering accounts and hoping existing users aren’t deploying bots.
4) Swarms of bots can mimic regular users and amass karma as sleepers over time (months or years), then finally be activated to change public opinion on HN, downvote others, or run reputational attacks to gradually oust “opponents” of an idea.
5) It’s you vs a large number of people, each with an endless number of bot instances trained on years of actual HN posts and data, plus myriad internet postings, and optimized for “automated helpful comments”. In other words, “mission fucking accomplished” from this xkcd is actually your worst nightmare (and Zuck’s, and Musk’s) https://xkcd.com/810/
6) LinkedIn already has a problem with fake accounts applying for jobs, fake job postings, etc. This year we have seen the rise of profiles with totally believable deepfaked photos, copied resumes, and fabricated backstories. https://en.m.wikipedia.org/wiki/On_the_Internet,_nobody_know...
7) At least for the next few years you can still call someone up and interview them, but all that’s left for attackers is to deepfake realtime audio/video on top of GPT-4 chat generation.
8) Trying to catch individual accounts that use a bot only occasionally is like trying to catch someone who consults a chess or poker engine for just a few moves each game: the statistical signal is tiny.
9) Reading comments and even articles is NOT a Turing test. It is not interactive, and most people simply skim the text. Even if they didn’t, the bots can pass the rudimentary Turing test most people would apply. But in fact they don’t even need to pass it; they just need to operate at scale.
10) Articles today are hosted by publications like the NYTimes and Wall Street Journal, and informational videos by popular YouTube channels, but in the next 5-10 years you’ll see the rise of weird no-name groups (as Vox and Vice News once were) that amass far more shares than all human-generated publications. Human publications might even deploy bots too; you already see MSN do it. But even if they don’t, reshare count is a metric that is easily optimized for, by A/B testing and bots, and has been for a decade (a toy sketch below).
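On point 10 specifically, the optimization loop is trivial to build. Here is a minimal sketch in plain Python (hypothetical headline variants with made-up reshare rates, no real platform APIs) of the epsilon-greedy A/B testing that any publication or botnet can run against the reshare metric:

    import random

    # Hypothetical variants with invented "true" reshare probabilities.
    true_rates = {"earnest": 0.02, "outraged": 0.08, "conspiratorial": 0.12}

    counts = {v: 0 for v in true_rates}   # impressions served per variant
    shares = {v: 0 for v in true_rates}   # reshares observed per variant
    EPSILON = 0.1                         # fraction of traffic spent exploring
    random.seed(0)

    for impression in range(100_000):
        if random.random() < EPSILON:
            variant = random.choice(list(true_rates))  # explore a random variant
        else:
            # exploit: serve the variant with the best observed reshare rate
            variant = max(counts, key=lambda v: shares[v] / counts[v] if counts[v] else 0.0)
        counts[variant] += 1
        if random.random() < true_rates[variant]:      # simulated user reshare
            shares[variant] += 1

    for v in true_rates:
        print(f"{v}: served {counts[v]} times, observed rate {shares[v] / max(counts[v], 1):.3f}")

The loop converges on whichever framing spreads fastest, and it cannot tell (and does not care) whether the resharers are human.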
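And to make points 2 and 3 concrete, here is a toy simulation (all numbers invented) of why per-account heuristics miss a swarm: each account posts rarely, rotates among style templates, and earns unremarkable karma, yet the swarm’s total dwarfs any single prolific user:

    import random

    # Invented style templates: rotating among them defeats "one type of comment" checks.
    STYLES = ["terse expert", "friendly anecdote", "nitpicky pedant",
              "contrarian", "mimic of a popular user"]

    NUM_BOTS = 500            # swarm size
    WEEKS = 52                # a year of sleeper activity
    POSTS_PER_WEEK = 2        # deliberately below any "frequent poster" threshold
    AVG_KARMA_PER_POST = 1.5  # banal-but-helpful comments each earn a little

    random.seed(0)
    swarm_total = 0.0
    for bot in range(NUM_BOTS):
        account_karma = 0.0
        for week in range(WEEKS):
            for _ in range(POSTS_PER_WEEK):
                style = random.choice(STYLES)  # vary style per comment, per point 2
                account_karma += max(0.0, random.gauss(AVG_KARMA_PER_POST, 1.0))
        swarm_total += account_karma

    print(f"per-account karma after a year: ~{swarm_total / NUM_BOTS:.0f}")  # looks like a casual user
    print(f"swarm total: ~{swarm_total:.0f}")  # rivals the entire leaderboard

Every individual account stays under any plausible rate or karma threshold; the only signal left is the raw number of accounts, which is exactly point 3.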
But it actually gets worse:
11) Most communities — including HN — will actually prefer bots if they can’t tell who is a bot. Bots won’t cuss, will make helpful comments that add insight, and will follow the rules. The comments may be banal now, but a swarm can produce wide variation in tone, from opinionated to neutral.
12) Given that, even private insular online communities will eventually be overrun by bots, and prefer them. First the humans will upvote bots and then the bots will upvote bots.
The share of human content in all communities will become vanishingly small, and whatever is shared will be overwhelmingly likely to be bot-generated.
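To see how fast that flip happens, here is a toy feedback model (every parameter invented for illustration): humans upvote whatever reads as helpful, bots get a small helpfulness edge plus a collusion boost from upvoting each other, and the bot share of visible content ratchets up each round:

    # Toy dynamics for the bot share of upvoted front-page content.
    bot_share = 0.05        # invented starting fraction of bot content
    HELPFULNESS_EDGE = 1.2  # bots follow the rules, never cuss, add "insight"
    COLLUSION_BOOST = 1.5   # bots upvoting bots

    for year in range(1, 11):
        bot_votes = bot_share * HELPFULNESS_EDGE * COLLUSION_BOOST
        human_votes = 1.0 - bot_share
        bot_share = bot_votes / (bot_votes + human_votes)
        print(f"year {year}: bot share of upvoted content = {bot_share:.0%}")

Under these made-up parameters the bot share passes 90% within a decade, and no step requires anyone to knowingly prefer a bot; preferring “helpful” content is enough.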
If you doubt this, consider that it has already happened elsewhere recently. Over the last decade, trading firms and hedge funds have placed nearly all traded capital under the control of high-speed bots, which can easily beat humans at creating fake bull traps or bear traps and taking their money, and whose operators prefer not to disclose them. You already prefer Google Maps to asking for directions. Children prefer Googling and Binging to asking their own parents. And around the world, both parents prefer working for corporations to spending time with their own children, sticking them in public schools. It’s considered self-actualization for everyone. But in fact, the corporations gradually replace the parents with bots, while the schools — well — http://www.paulgraham.com/nerds.html
The bots could behave well for a while, and then swarms could be deployed to create unprecedented misinformation, mount reputational attacks (lasting for years and looking organic), and nudge public consensus towards anything, real or fake, such as encouraging drastic policy changes or approving billions for some industry.
In other words … you’ll learn to love your botswarms. But unlike Big Brother, they’ll be a mix of helpful and unpredictable, and extremely powerful at affecting all of our collective systems, able to unrelentingly go after any person or any movement (e.g. Falun Dafa or the CCP, whichever they prefer). And your own friends will prefer them the way they prefer the political pundit who says what they want to hear. And you’ll wonder how they can support that crap new conspiracy theory given all the information to the contrary, but 80% of the information you’ll think is true will have been subtly seeded by bots over time, too.
Today, we explore what one poker bot can do at a table of 9 people. But we are absolutely unprepared for what swarming AI will do online. And it can do all of this by simply adding swarm-collusion capability to existing technology! Nothing more even needs to be developed!