zlacker

Ask HN: Should HN ban ChatGPT/generated responses?

submitted by djtrip+(OP) on 2022-12-11 18:06:36 | 538 points 642 comments
[source] [go to bottom]

It's already happening [0].

Stackoverflow recently banned generated responses [1].

We're facing a new karma-generating strategy and, IMO, a policy is urgently needed.

[0]: https://news.ycombinator.com/threads?id=clay-dreidels

[1]: https://stackoverflow.com/help/gpt-policy


NOTE: showing posts with links only.
3. dcmint+z1[view] [source] 2022-12-11 18:14:57
>>djtrip+(OP)
I agree that it's annoying, but the fad will mostly pass, just like the spike in generated images has tailed off again.

Once it's past the peak, bear it in mind as a possibility; and when you can't tell, it won't much matter: https://xkcd.com/810/

22. rikroo+t7[view] [source] 2022-12-11 18:47:55
>>djtrip+(OP)
One of my comments, in another thread, got called out for being a ChatGPT-generated response[1]. It wasn't; I wrote that comment without any artificial assistance.

A part of me felt quite chuffed to be accused of being the current hottest new shiny in tech. Another part of me - the poet part - felt humiliated.

If a ChatGPT comment ban does get put in place, please don't also auto-ban me by accident. I don't enjoy being collateral damage.

[1] https://news.ycombinator.com/item?id=33886209

26. holler+o9[view] [source] 2022-12-11 18:58:21
>>djtrip+(OP)
Clickable version of the links in the OP:

[0]: https://news.ycombinator.com/threads?id=clay-dreidels

[1]: https://stackoverflow.com/help/gpt-policy

50. dang+zk1[view] [source] 2022-12-12 04:07:29
>>djtrip+(OP)
They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either!

Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:

>>33911426 (Dec 2022)

>>32571890 (Aug 2022)

>>27558392 (June 2021)

>>26693590 (April 2021)

>>24189762 (Aug 2020)

>>22744611 (April 2020)

>>22427782 (Feb 2020)

>>21774797 (Dec 2019)

>>19325914 (March 2019)

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.

The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.

* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.

◧◩◪
67. matthb+do1[view] [source] [discussion] 2022-12-12 04:44:12
>>ramraj+Nl1
There is an xkcd comic about this (of course):

#810 Constructive: https://xkcd.com/810/

◧◩◪
72. virapt+0p1[view] [source] [discussion] 2022-12-12 04:54:14
>>carboc+8n1
I think the message was claiming something else, specifically that each classification was given a score for how confident the model was in the answer, and the answers were given 99.9%+ confidence in those cases.

See the app: https://huggingface.co/openai-detector/ - it gives a response as a % chance the text is genuine or chatbot-generated.
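
For anyone curious, that demo wraps OpenAI's RoBERTa-based GPT-2 output detector, which is also published on the Hugging Face hub. A minimal sketch of querying it locally with the transformers library (the checkpoint name and the "Real"/"Fake" labels are as published on the hub at the time of writing; treat them as assumptions if the model has since moved):

    # Minimal sketch: score a comment with the GPT-2 output detector.
    # Assumes the "roberta-base-openai-detector" checkpoint is still
    # hosted under that name and emits "Real"/"Fake" labels.
    from transformers import pipeline

    detector = pipeline("text-classification",
                        model="roberta-base-openai-detector")

    comment = "I agree that it's annoying, but the fad will mostly pass."
    result = detector(comment)[0]

    # The score is the model's confidence in its own label (the 99.9%+
    # figures mentioned above), not a calibrated probability of authorship.
    print(f"{result['label']}: {result['score']:.1%}")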

◧◩◪
132. kdazzl+hv1[view] [source] [discussion] 2022-12-12 05:59:43
>>im3w1l+7p1
Very interesting point. It really reminds me of that Borges story where someone in the 20th century rewrites Don Quixote word for word, and the critics think it’s far better than the original.

https://en.m.wikipedia.org/wiki/Pierre_Menard,_Author_of_the...

◧◩◪◨
162. Kim_Br+Ty1[view] [source] [discussion] 2022-12-12 06:36:33
>>matthb+do1
There is, of course, the famous Alan Turing paper about this [1], which is becoming more relevant by the day.

Alan Turing's paper was quite forward-thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).

I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.

Pseudonyms (accounts) do have a role to play here. On HN, an account can accrue reputation based on whether its past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.

A Minimum Required Change to policy might be: accounts that regularly make false/incorrect comments may need to be downvoted/banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.

This is not to catch out bots per se, but rather to deal directly with the new failure modes they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.
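
As a purely illustrative sketch of the bookkeeping such a policy implies (the decay factor, event weights, and threshold below are invented for illustration; HN's real scoring is not public):

    # Toy reputation score: recent behaviour dominates, and confirmed-false
    # comments cost more than ordinary downvotes. All numbers are made up.
    from dataclasses import dataclass, field

    @dataclass
    class Account:
        name: str
        history: list = field(default_factory=list)  # +1 good, -1 bad, -3 confirmed-false

    def reputation(acct: Account, decay: float = 0.95) -> float:
        """Exponentially weighted sum over events, oldest first."""
        score = 0.0
        for event in acct.history:
            score = score * decay + event
        return score

    def needs_review(acct: Account, threshold: float = -5.0) -> bool:
        return reputation(acct) < threshold

    sleeper = Account("sleeper", history=[1, 1, -3, -3, -3])
    print(reputation(sleeper), needs_review(sleeper))  # negative score, True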

There's currently a bit of a revolution in AI going on, and we might come up with better ideas over time, too. Possibly we need to revisit our position and adjust every 6 months, or even every 3.

[1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...

◧◩
173. WrtCdE+Hz1[view] [source] [discussion] 2022-12-12 06:45:41
>>dang+zk1
Should HN ban the discussion of mobile apps on smartphones on its platform?

The excessive use of mobile apps on smartphones has been linked to addiction and a range of negative effects on mental and physical health [0]. Should HN consider banning the use of mobile apps on smartphones on its platform in order to promote a healthier and more focused environment for discussions?

[0] : https://www.cnn.com/2019/07/01/health/cell-phone-ban-schools...

◧◩◪◨⬒⬓
187. ayewo+LA1[view] [source] [discussion] 2022-12-12 06:58:07
>>andsoi+0v1
But once you throw a VPN into the mix, it's not so simple [1] [2]. It then becomes a game of whack-a-mole where you have to obscure how pricing parity is done [3].

1: https://twitter.com/levelsio/status/1600232199243984897

2: https://twitter.com/levelsio/status/1600246753348882432

3: https://twitter.com/dannypostmaa/status/1600372062958538752

◧◩◪◨⬒⬓
188. Mistle+TA1[view] [source] [discussion] 2022-12-12 06:58:45
>>xcamba+Cz1
Ok man, you are being obtuse on purpose. I’m talking about shared anecdotes from an AI about something in its life that people might find useful. If it is made up, it can be as (un)useful as the bogus code ChatGPT sometimes produces that looks good and authentic but doesn’t work. The intersection of the real world and the story is what makes it useful to others on HN. We aren’t talking about writing fiction.

https://www.vice.com/en/article/wxnaem/stack-overflow-bans-c...

◧◩◪◨⬒
254. nextac+vK1[view] [source] [discussion] 2022-12-12 08:38:06
>>andsoi+wu1
There's the quality of the written commentary (which is all that matters for anyone only reading, never posting on HN) and the quality of the engagement of the people who do write comments (which includes how much they learned, the emotions they had, and other less tangible stuff).

I think HN is optimizing for the former quality aspects and not the latter. So in that sense, if you can't tell if it's written by a bot, does it matter? (cue Westworld https://www.youtube.com/watch?v=kaahx4hMxmw)

◧◩
272. random+0M1[view] [source] [discussion] 2022-12-12 08:54:16
>>dang+zk1
I agree. ChatGPT has made me realise the gulf between “short form essay” school writing and the professionals.

Here’s an example article that begins with the cliched GPT-generated intro, and then switches up into crafted prose:

https://www.theatlantic.com/technology/archive/2022/12/chatg...

◧◩◪◨
303. Symbio+dQ1[view] [source] [discussion] 2022-12-12 09:30:38
>>sampo+6O1
Boten Anna ("The Bot Anna") by Basshunter, in case others don't recognise this chart-topping song about a bot :-)

https://music.youtube.com/watch?v=bpRRVS1ci40&list=RDAMVMbpR...

◧◩
309. ivegot+MQ1[view] [source] [discussion] 2022-12-12 09:35:33
>>pjmorr+J4
https://news.ycombinator.com/item?id=32447928 is marked as nearly 100% fake, whereas I can assure you it was written by a human.

Maybe I was just unlucky with the comment I tried it on (I took the longest one I saw in my history), but I don't think I would have liked seeing it either removed or spat at for being considered "AI generated"...

The detector also thinks this comment is fake. It seems influenced by certain flavors of mistakes.

Idiomatic ones. Spelling ones. Grammar ones. All non-native speakers will easily get flagged. Does not look spot-on for now. I checked all those assertions live-typing on the demo. 0.09% real.

◧◩◪
327. folbec+bT1[view] [source] [discussion] 2022-12-12 10:01:04
>>yjftsj+Qr1
Artificial Inanity?

https://englishwotd.wordpress.com/2014/02/17/artificial-inan...

◧◩◪
334. phh+bU1[view] [source] [discussion] 2022-12-12 10:09:44
>>andy_p+iP1
FWIW, on mobile, I use https://f-droid.org/en/packages/io.github.hidroh.materialist... rather than the website, so I don't have that issue.
◧◩◪◨⬒⬓
337. execut+FU1[view] [source] [discussion] 2022-12-12 10:14:15
>>amirhi+nx1
This has already started happening at Wikipedia: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2...
◧◩◪◨⬒
356. yccs27+6Z1[view] [source] [discussion] 2022-12-12 10:48:24
>>dr_dsh+HL1
How high the comment quality here usually is becomes really noticeable when it's lacking under a post. The most common offenders are political posts with outrage potential, especially while knee-jerk responses are flooding in and before measured comments have had time to rise to the top.

A recent example was https://news.ycombinator.com/item?id=33931384 about cash limits - sooo many comments are just "Tyranny!", "EU bad!" and overall unmitigated cynicism.

◧◩◪◨
362. t0lo+HZ1[view] [source] [discussion] 2022-12-12 10:54:03
>>dotanc+hV1
There are many people who have genuine concerns about Israeli issues, such as the election of leaders who prioritise those of one faith over another, the targeted striking of Al Jazeera offices in Palestine, and the eviction of Palestinian citizens from the West Bank https://www.bbc.co.uk/news/world-middle-east-63660566. It would be false to assume that the brutality of one side absolves the other of criticism for the same actions.

Israel participates in state sponsored propaganda as well. https://www.smh.com.au/technology/israeli-propaganda-war-hit...

◧◩◪◨
363. t0bia_+PZ1[view] [source] [discussion] 2022-12-12 10:55:16
>>phh+bU1
I prefer Glider https://f-droid.org/packages/nl.viter.glider/

But I still end up in Telegram because of thumbnails.

◧◩◪◨
368. devjam+B02[view] [source] [discussion] 2022-12-12 11:02:31
>>flanke+BB1
I'm using uBlacklist [1].

[1] https://iorate.github.io/ublacklist/docs

◧◩
381. JimDab+o32[view] [source] [discussion] 2022-12-12 11:30:33
>>dang+zk1
> The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter. But that's a ways off.

XKCD 810: https://xkcd.com/810/

◧◩◪
416. FartyM+4a2[view] [source] [discussion] 2022-12-12 12:24:35
>>foota+IB1
> Based on what I've seen, I strongly believe that chatGPT responses to many questions are better than a non human expert in many cases.

I disagree - it can't even do basic logic/maths reliably. See this thread: https://news.ycombinator.com/item?id=33859482

Someone in that thread also gave an example of ChatGPT saying that 3 * pi is an irrational number while 25 * pi is a rational number... Two quotes by ChatGPT:

> when you multiply an irrational number by a rational number (in this case, 3), the result is still an irrational number.

> when you multiply a rational number by an irrational number, the result is a rational number.
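
For the record, the first quote states the theorem correctly and the second contradicts it. The standard one-line argument, which covers both 3π and 25π, in a short LaTeX sketch:

    \textbf{Claim.} If $q \in \mathbb{Q} \setminus \{0\}$ and $x \notin \mathbb{Q}$,
    then $qx \notin \mathbb{Q}$.
    \textbf{Proof.} Suppose $qx = r \in \mathbb{Q}$. Then $x = r/q$ is a quotient
    of rationals, hence rational, contradicting $x \notin \mathbb{Q}$.
    In particular, $3\pi$ and $25\pi$ are both irrational. $\square$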

426. rpgbr+Vb2[view] [source] 2022-12-12 12:42:34
>>djtrip+(OP)
Sure, but it's becoming harder to single out machine-generated content.

Shameless plug: https://notes.ghed.in/posts/2022/content-machine-revolution/

◧◩
460. dang+cf2[view] [source] [discussion] 2022-12-12 13:08:45
>>gorgoi+EI1
(We detached this subthread from its original parent, which was https://news.ycombinator.com/item?id=33950747.)
◧◩◪
491. GTP+Lk2[view] [source] [discussion] 2022-12-12 13:53:45
>>jacque+F62
On the problem of distinguishing a bot from a human, I suggest the following podcast episode from Cautionary Tales [1]. I found it both enjoyable and illuminating, as it offers an interesting point of view on the matter: if we already had bots passing as humans long ago, it is because we are often bad at conversations, not necessarily because the bots are extremely good at them (and indeed in most cases they aren't).

[1] https://podcasts.google.com/feed/aHR0cHM6Ly93d3cub21ueWNvbnR...

◧◩
496. EGreg+5m2[view] [source] [discussion] 2022-12-12 14:02:48
>>dang+zk1
Hi dang

I have been sounding the alarm for a while now (several years) about online bots.

Policies can’t work if you can’t enforce them. There are several issues:

1) You won’t really know whether accounts are posting bot content or not. They can be trained on existing HN text.

2) Looking for patterns such as posting “one type of comment” or “frequently posting” can be defeated by a bot which makes many styles of comments or is focused on the styles of a few popular users.

3) Swarms of bots can eke out karma here and there but collectively amass far more karma over time. The sheer number of accounts is what you might want to look out for, which means at some point you might be grandfathering accounts and hoping existing people aren't deploying bots.

4) Swarms of bots can be deployed to mimic regular users and amass karma as sleepers over time (months or years) and then finally be deployed to change public opinion on HN, downvote others or perform reputational attacks to gradually oust “opponents” of an idea.

5) It’s you vs a large number of people and an endless number of bot instances trained on years of actual HN posts and data, plus myriad internet postings, and optimized for “automated helpful comments”. In other words, “mission fucking accomplished” from this xkcd is actually your worst nightmare (and that of Zuck, Musk) https://xkcd.com/810/

6) LinkedIn already has a problem of fake accounts applying for jobs, or fake jobs etc. This year we have seen the rise of profiles with totally believable deepfaked photos, copied resumes and backstories etc. https://en.m.wikipedia.org/wiki/On_the_Internet,_nobody_know...

7) For at least the next few years you could call someone up and interview them, but after that all that's left is to deepfake realtime audio/video with GPT-4 chat generation.

8) Trying to catch individual accounts using a bot occasionally over the internet is like trying to catch someone using a chess or poker engine for a few moves each game.

9) Reading comments and even articles is NOT a Turing test. It is not interactive and most people simply skim the text. Even if they didn’t, the bots can pass a rudimentary Turing test applied by many people. But in fact, they don’t need to. They can do it at scale.

10) Articles are currently hosted by publications like the NY Times and Wall Street Journal, and informational videos by popular YouTube channels, but in the next 5-10 years you'll see the rise of weird no-name groups (like Vox or Vice News once were) that amass far more shares than all human-generated content publications. Human publications might even deploy bots too. You already see MSN do it. But even if they don't, the number of reshares is a metric that is easily optimized for, by A/B testing and bots, and has been for a decade.

But it actually gets worse:

11) Most communities — including HN — will actually prefer bots if they can't tell who is a bot. Bots won't cuss, will make helpful comments and add insight, and will follow the rules. The comments may be banal now, but the swarm can produce wide variation, ranging from opinionated to not.

12) Given that, even private insular online communities will eventually be overrun by bots, and prefer them. First the humans will upvote bots and then the bots will upvote bots.

Human content in all communities will become vanishingly small, and what is shared will be overwhelmingly likely to be bot-generated.

If you doubt this, consider that it has already happened elsewhere recently — over the last decade, trading firms and hedge funds have placed nearly all traded capital under the control of high-speed bots, which can easily beat humans at creating fake bull traps or bear traps and taking their money, and prefer not to disclose the bots. You already prefer Google Maps to asking for directions. Children prefer Googling and Binging to asking their own parents. And around the world, both parents prefer working for corporations to spending time with their own children, sticking them in public schools. It's considered self-actualization for everyone. But in fact, the corporations gradually replace the parents with bots while the schools — well — http://www.paulgraham.com/nerds.html

The bots could act well for a while, and then swarms can be deployed to create unprecedented misinformation and reputational attacks (lasting for years and looking organic) and to nudge public consensus towards anything, real or fake, such as encouraging drastic policy changes or approving billions for some industry.

In other words … you'll learn to love your botswarms. But unlike Big Brother, they'll be a mix of helpful, unpredictable, and extremely powerful at affecting all of our collective systems, able to unrelentingly go after any person or any movement (e.g. Falun Dafa or the CCP, whichever they prefer). And your own friends will prefer them the way they prefer the political pundit who says what they want to hear. And you'll wonder how they can support that crap new conspiracy theory given all the information to the contrary, but 80% of the information you think is true will have been subtly seeded by bots over time, too.

Today, we explore what 1 poker bot would do at a table of 9 people. But we are absolutely unprepared for what swarming AI will do online. It can do all this by simply adding swarming collusion capability to existing technology! Nothing more needs to even be developed!

◧◩◪◨⬒⬓⬔
500. tambou+lm2[view] [source] [discussion] 2022-12-12 14:05:18
>>jacque+oi2
I wish it was, but it’s not mine :)

https://en.m.wikipedia.org/wiki/Philosophical_zombie

That’s the thing: if we truly understood consciousness, we might have a shot at verifying whether the question is answerable in the abstract. By simply replicating its effects, we are dodging the question.

◧◩◪
526. B1FF_P+Qo2[view] [source] [discussion] 2022-12-12 14:21:41
>>random+0M1
> ChatGPT wrote more, but I spared you the rest because it was so boring.

Ahem.

Anyways, Searle's take has been out for a while: https://en.wikipedia.org/wiki/Chinese_room

Also, people used to look up random I-Ching or Bible verses for guidance. It's probably in the brain of the beholder.

563. throwa+jz2[view] [source] 2022-12-12 15:17:42
>>djtrip+(OP)
Discussions are supposed to go both ways. The first way - I learn things - is still valid, maybe even more so with the advance of AI. Even if all the contributions I just read were AI-generated, I would have liked them, I guess. But the second way - I teach things - gets partially destroyed if I lose time interacting with bots. Forums need to be reinvented to provide some sense of trust. I am not sure that's the end of online privacy though; we are smarter than that, and we will certainly figure out systems that ensure a human wrote something without gathering personal information.

Somewhere else someone pointed out that using AI to reformulate our thoughts while masking our own style is a possible protection for our anonymity, considering the kind of threat shown in this post: https://news.ycombinator.com/item?id=33755016 . This should seriously be taken into account.

◧◩◪◨⬒
570. rglull+CI2[view] [source] [discussion] 2022-12-12 15:57:57
>>noncom+5J1
I am trying, really! [0]

[0]: https://raphael.lullis.net/community-is-not-enough/

◧◩◪◨⬒
572. CyberD+wJ2[view] [source] [discussion] 2022-12-12 16:01:36
>>jacque+Gi2
https://news.ycombinator.com/item?id=33851993

https://news.ycombinator.com/item?id=33952506

https://news.ycombinator.com/item?id=33651960

◧◩◪
574. jodrel+MN2[view] [source] [discussion] 2022-12-12 16:18:30
>>makewo+Nr1
What inherent value does this comment https://news.ycombinator.com/item?id=33951443 have?

(It says "From the ChatGPT-generated stuff I've seen just in the last week, I think we're already there. Most humans these days are incredibly stupid.")

I have read low-quality internet comments saying "people are dumb" over and over and over, year in, year out. I argue that wherever they are, they have no inherent positive value, and they make a negative contribution to the internet, the world, and the thread they are posted in.

◧◩◪
577. throw_+TR2[view] [source] [discussion] 2022-12-12 16:32:32
>>bileka+lI1
It is the same idea behind https://xkcd.com/810/
594. hxuguf+Nm3[view] [source] 2022-12-12 18:53:34
>>djtrip+(OP)
Show HN: UserScript to detect GPT generated comments on Hacker News https://news.ycombinator.com/item?id=33906712
◧◩◪◨⬒⬓
602. dang+6N3[view] [source] [discussion] 2022-12-12 21:00:41
>>clay-d+gV2
If you mean about https://news.ycombinator.com/item?id=33950722, that's fine, but please email hn@ycombinator.com so we can sort it for you.
◧◩
605. dang+vR3[view] [source] [discussion] 2022-12-12 21:20:41
>>seydor+WG1
HN doesn't have a negative reaction to ChatGPT. There's a range of responses, as you'd expect from any set of millions of people. Much if not most of that range has been positive-to-overawed.

Stories with "ChatGPT" in the title have spent over 300 hours on HN's frontpage so far. Of course, everyone sees a different sampling, but if you feel that it has "barely made it to the frontpage", your sample must be quite an outlier!

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

(It's much more common for people to have seen so much of it that allergic reactions like https://news.ycombinator.com/item?id=33880024 start breaking out)

◧◩
617. martin+iB4[view] [source] [discussion] 2022-12-13 01:22:20
>>kossTK+H6
My hopes are on decentralized identity systems via web5 and some kind of PGP-like web-of-trust system.

https://developer.tbd.website/projects/web5/

◧◩◪◨⬒
628. psychp+TM7[view] [source] [discussion] 2022-12-13 20:50:38
>>random+X84
I don't think you even really believe that yourself.

https://youtu.be/7LKy3lrkTRA

632. Nicole+Nq8[view] [source] 2022-12-13 23:54:11
>>djtrip+(OP)
Really wish you would since the damn thing plagiarizes! https://justoutsourcing.blogspot.com/2022/03/gpts-plagiarism...
◧◩◪◨⬒⬓⬔⧯▣
640. max-ib+f7p[view] [source] [discussion] 2022-12-18 19:54:18
>>jacque+V72
Well, everything is math at some level. Supreme Court decisions might be. There are software packages in use today, using some "AI", to help judges determine the adequate level of punishment by looking at circumstantial factors, recidivism rates, et cetera [1] [2].

I believe that in the not-too-distant future there will be pressure to apply these "magic" AIs everywhere, and this pressure will probably not look very hard at whether the AI is good at math or not. Just look at all the pseudoscience in the criminal system [3]. I believe this poses a real problem, so continuing to harp on this is probably the right response.

[1] https://www.nytimes.com/2017/05/01/us/politics/sent-to-priso...

[2] https://www.weforum.org/agenda/2018/11/algorithms-court-crim...

[3] https://www.bostonreview.net/articles/nathan-robinson-forens...

[go to top]