It's already happening [0].
Stack Overflow recently banned ChatGPT-generated answers [1].
We're facing a new karma-generating strategy and, IMO, a policy is urgently needed.
[0]: https://news.ycombinator.com/threads?id=clay-dreidels
[1]: https://stackoverflow.com/help/gpt-policy
Once it's past the peak, bear it in mind as a possibility; and when you can't tell, it won't much matter: https://xkcd.com/810/
A part of me felt quite chuffed to be accused of being the current hottest new shiny in tech. Another part of me - the poet part - felt humiliated.
If a ChatGPT comment ban does get put in place, please don't also auto-ban me by accident. I don't enjoy being collateral damage.
Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:
>>33911426 (Dec 2022)
>>32571890 (Aug 2022)
>>27558392 (June 2021)
>>26693590 (April 2021)
>>24189762 (Aug 2020)
>>22744611 (April 2020)
>>22427782 (Feb 2020)
>>21774797 (Dec 2019)
>>19325914 (March 2019)
We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.
The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.
Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.
* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.
See the app: https://huggingface.co/openai-detector/ - it gives a response as the % chance the text is genuine or chatbot-generated.
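If you want to poke at it programmatically, here's a minimal sketch, assuming the demo is backed by the roberta-base-openai-detector checkpoint on Hugging Face (an assumption on my part; the hosted app's exact scoring may differ):

    # Minimal sketch: run the RoBERTa-based GPT-2 output detector locally.
    # Assumption: the web demo uses the roberta-base-openai-detector
    # checkpoint; the hosted app may score differently.
    from transformers import pipeline

    detector = pipeline("text-classification",
                        model="roberta-base-openai-detector")

    result = detector("Your comment text goes here.")[0]
    # result looks like {'label': 'Real', 'score': 0.98}: the model's
    # confidence that the text is human-written ('Real') or generated ('Fake').
    print(f"{result['label']}: {result['score']:.2%}")

Scores on short inputs tend to be unreliable, so feed it a full comment rather than a single sentence.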
https://en.m.wikipedia.org/wiki/Pierre_Menard,_Author_of_the...
Alan Turing's paper [1] was quite forward-thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).
I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.
Pseudonyms (accounts) do have a role to play here. On HN, an account can accrue reputation based on whether its past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.
A Minimum Required Change to policy might be: accounts that regularly make false/incorrect comments may need to be downvoted/banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.
This is not to catch out bots per se, but rather to deal directly with the new failure modes they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.
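A minimal sketch of what such a filter might look like (everything here - names, fields, thresholds - is hypothetical, not an actual HN mechanism):

    # Hypothetical sketch of reputation-based filtering; none of these
    # names, fields, or thresholds come from HN's actual codebase.
    from dataclasses import dataclass

    @dataclass
    class Account:
        name: str
        total_comments: int
        flagged_false: int  # comments the community flagged as false/incorrect

    def should_review(acct: Account, max_false_rate: float = 0.2,
                      min_sample: int = 10) -> bool:
        """Surface accounts whose rate of false comments exceeds a threshold,
        once there is enough history to judge; occasional good-faith
        mistakes stay below the threshold."""
        if acct.total_comments < min_sample:
            return False  # too little history to separate mistakes from patterns
        return acct.flagged_false / acct.total_comments > max_false_rate

    # e.g. an account with 8 of 30 comments flagged false gets surfaced:
    print(should_review(Account("example_user", 30, 8)))  # True

The point of using a rate rather than a raw count is that it distinguishes a prolific good-faith commenter who is occasionally wrong from an account that is wrong as a matter of course.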
There's a bit of a revolution in AI going on right now, and we might come up with better ideas over time too. Possibly we need to revisit our position and adjust every six months, or even every three.
[1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...
The excessive use of mobile apps on smartphones has been linked to addiction and a range of negative effects on mental and physical health [0]. Should HN consider banning the use of mobile apps on smartphones on its platform in order to promote a healthier and more focused environment for discussions?
[0] : https://www.cnn.com/2019/07/01/health/cell-phone-ban-schools...
1: https://twitter.com/levelsio/status/1600232199243984897
2: https://twitter.com/levelsio/status/1600246753348882432
3: https://twitter.com/dannypostmaa/status/1600372062958538752
https://www.vice.com/en/article/wxnaem/stack-overflow-bans-c...
I think HN is optimizing for the former quality aspects and not the latter. So in that sense, if you can't tell if it's written by a bot, does it matter? (cue Westworld https://www.youtube.com/watch?v=kaahx4hMxmw)
Here’s an example article that begins with the clichéd GPT-generated intro, and then switches up into crafted prose:
https://www.theatlantic.com/technology/archive/2022/12/chatg...
https://music.youtube.com/watch?v=bpRRVS1ci40&list=RDAMVMbpR...
Maybe I was just unlucky with the comment I tried it with (I took the longest one I saw in my history), but I don't think I would have liked seeing it either removed or spat at for being flagged as "AI generated"...
The detector also thinks this comment is fake. It seems influenced by certain flavors of mistakes: idiomatic ones, spelling ones, grammar. Non-native speakers will easily get flagged. It does not look spot-on for now. I checked all these assertions by live-typing them on the demo: 0.09% real.
Recent example was https://news.ycombinator.com/item?id=33931384 about cash limits - Sooo many comments are just "Tyranny!", "EU bad!" and overall unmitigated cynicism.
Israel participates in state sponsored propaganda as well. https://www.smh.com.au/technology/israeli-propaganda-war-hit...
But I still end up in Telegram because of thumbnails.
XKCD 810: https://xkcd.com/810/
I disagree - it can't even do basic logic/maths reliably. See this thread: https://news.ycombinator.com/item?id=33859482
Someone in that thread also gave an example of ChatGPT saying that 3 * pi is an irrational number while 25 * pi is a rational number... Two quotes by ChatGPT:
> when you multiply an irrational number by a rational number (in this case, 3), the result is still an irrational number.
> when you multiply a rational number by an irrational number, the result is a rational number.
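Only the first quote is correct (and only because 3 is nonzero); a one-line proof of the general fact, for the record:

    If $q \in \mathbb{Q} \setminus \{0\}$ and $x \notin \mathbb{Q}$, suppose
    $qx = r \in \mathbb{Q}$. Then $x = r/q \in \mathbb{Q}$, since the rationals
    are closed under division by a nonzero rational, contradicting the
    assumption that $x$ is irrational. Hence $qx$ is irrational, and in
    particular both $3\pi$ and $25\pi$ are irrational.

So ChatGPT's second claim is exactly backwards.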
Shameless plug: https://notes.ghed.in/posts/2022/content-machine-revolution/
[1] https://podcasts.google.com/feed/aHR0cHM6Ly93d3cub21ueWNvbnR...
I have been sounding the alarm for a while now (several years) about online bots.
Policies can’t work if you can’t enforce them. There are several issues:
1) You won’t really know whether accounts are posting bot content or not. They can be trained on existing HN text.
2) Looking for patterns such as posting “one type of comment” or “frequently posting” can be defeated by a bot which makes many styles of comments or is focused on the styles of a few popular users.
3) Swarms of bots can eke out karma here and there but collectively can amass far more karma over time. The sheer number of accounts is what you might want to look out for, which means at some point you might be grandfathering accounts and hoping existing people aren't deploying bots.
4) Swarms of bots can be deployed to mimic regular users and amass karma as sleepers over time (months or years) and then finally be deployed to change public opinion on HN, downvote others or perform reputational attacks to gradually oust “opponents” of an idea.
5) It’s you vs a large number of people and an endless number of bot instances trained on years of actual HN posts and data, plus myriad internet postings, and optimized for “automated helpful comments”. In other words, “mission fucking accomplished” from this xkcd is actually your worst nightmare (and that of Zuck, Musk) https://xkcd.com/810/
6) LinkedIn already has a problem of fake accounts applying for jobs, fake job listings, etc. This year we have seen the rise of profiles with totally believable deepfaked photos, copied resumes, backstories, etc. https://en.m.wikipedia.org/wiki/On_the_Internet,_nobody_know...
7) For at least the next few years you could still call someone up and interview them, but all that's left now is to deepfake realtime audio/video on top of GPT-4 chat generation.
8) Trying to catch individual accounts using a bot occasionally over the internet is like trying to catch someone using a chess or poker engine for a few moves each game.
9) Reading comments and even articles is NOT a Turing test. It is not interactive and most people simply skim the text. Even if they didn’t, the bots can pass a rudimentary Turing test applied by many people. But in fact, they don’t need to. They can do it at scale.
10) Articles are currently hosted by publications like the NYTimes and Wall Street Journal, and informational videos by popular YouTube channels, but in the next 5-10 years you'll see the rise of some weird no-name groups (like Vox or Vice News once were) that amass far more shares than all human-generated content publications. Human publications might even deploy bots too; you already see MSN do it. But even if they don't, the number of reshares is a metric that is easily optimized for, by A/B testing and bots, and has been for a decade.
But it actually gets worse:
11) Most communities - including HN - will actually prefer bots if they can't tell who is a bot. Bots won't cuss, will make helpful comments and add insight, and will follow the rules. The comments may be banal now, but a swarm can produce wide variation, from opinionated to not.
12) Given that, even private insular online communities will eventually be overrun by bots, and prefer them. First the humans will upvote bots and then the bots will upvote bots.
Human content in all communities will become vanishingly small, and what is shared will be overwhelmingly likely to be bot-generated.
If you doubt this, consider that it has already happened elsewhere recently: over the last decade, trading firms and hedge funds have placed nearly all traded capital under the control of high-speed bots, which can easily beat humans at creating fake bull traps or bear traps and taking their money, and prefer not to disclose the bots. You already prefer Google Maps to asking for directions. Children prefer Googling and Binging to asking their own parents. And around the world, both parents prefer working for corporations to spending time with their own children, sticking them in public schools. It's considered self-actualization for everyone. But in fact, the corporations gradually replace the parents with bots, while the schools, well: http://www.paulgraham.com/nerds.html
The bots could behave well for a while, and then swarms could be deployed to create unprecedented misinformation and reputational attacks (lasting for years and looking organic), and to nudge public consensus towards anything, real or fake, such as encouraging drastic policy changes or approving billions for some industry.
In other words … you’ll learn to love your botswarms. But unlike Big Brother, they’ll be a mix of helpful, unpredictable, and extremely powerful at affecting all of our collective systems, able to unrelentingly go after any person or any movement (e.g. Falun Dafa or the CCP, whichever they prefer). And your own friends will prefer them the way they prefer the political pundit that says what they want to hear. And you’ll wonder how they can support that crap new conspiracy theory given all the information to the contrary, but 80% of the information you think is true will have been subtly seeded by bots over time, too.
Today, we explore what 1 poker bot would do at a table of 9 people. But we are absolutely unprepared for what swarming AI will do online. It can do all this by simply adding swarming collusion capability to existing technology! Nothing more needs to even be developed!
https://en.m.wikipedia.org/wiki/Philosophical_zombie
That’s the thing: if we truly understood consciousness, we might have a shot at verifying whether the question is answerable in the abstract. By simply replicating its effects, we are dodging the question.
Ahem.
Anyways, Searle's take has been out for a while: https://en.wikipedia.org/wiki/Chinese_room
Also, people used to look up random I-Ching or Bible verses for guidance. It's probably in the brain of the beholder.
Somewhere else, someone pointed out that using AI to reformulate our thoughts while masking our own style is a possible protection for our anonymity, considering the kind of threat shown in this post: https://news.ycombinator.com/item?id=33755016 . This should seriously be taken into account.
(It says "From the ChatGPT-generated stuff I've seen just in the last week, I think we're already there. Most humans these days are incredibly stupid.")
I have read low-quality internet comments saying "people are dumb" over and over and over, year in, year out. I'd argue that wherever they appear, they have no inherent positive value, and a negative contribution to the internet, the world, and the thread they are posted in.
Stories with "ChatGPT" in the title have spent over 300 hours on HN's frontpage so far. Of course, everyone sees a different sampling, but if you feel that it has "barely made it to the frontpage", your sample must be quite an outlier!
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(It's much more common for people to have seen so much of it that allergic reactions like https://news.ycombinator.com/item?id=33880024 start breaking out)
I believe that in the not-too-distant future there will be pressure to apply these "magic" AIs everywhere, and this pressure will probably not look very hard at whether the AI is good at math or not. Just look at all the pseudoscience in the criminal justice system [3]. I believe this poses a real problem, so continuing to harp on it is probably the right response.
[1] https://www.nytimes.com/2017/05/01/us/politics/sent-to-priso...
[2] https://www.weforum.org/agenda/2018/11/algorithms-court-crim...
[3] https://www.bostonreview.net/articles/nathan-robinson-forens...