zlacker

[return to "Ask HN: Should HN ban ChatGPT/generated responses?"]
1. dang+zk1[view] [source] 2022-12-12 04:07:29
>>djtrip+(OP)
They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either!

Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:

>>33911426 (Dec 2022)

>>32571890 (Aug 2022)

>>27558392 (June 2021)

>>26693590 (April 2021)

>>24189762 (Aug 2020)

>>22744611 (April 2020)

>>22427782 (Feb 2020)

>>21774797 (Dec 2019)

>>19325914 (March 2019)

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.

The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.

* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.

◧◩
2. dottjt+zo1[view] [source] 2022-12-12 04:48:52
>>dang+zk1
I think the issue is that ChatGPT can already generate better responses than I personally can. Like, a lot of the time I'll have an interesting perspective that I just can't be bothered articulating, whereas ChatGPT will formulate that for me.
◧◩◪
3. holodu+Jx1[view] [source] 2022-12-12 06:25:23
>>dottjt+zo1
I have never seen it ask questions of itself or of others in the generated text. It also never expresses emotions: never anything like "I am convinced..." or "I don't want to be...". I find it quite easy to recognize in comment sections.
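
If you wanted to automate that gut check, a toy version of the heuristic might look like this (the phrase list and function name are made up for illustration, not a real detector):

    # Toy sketch of the heuristic above: flag text that neither asks a
    # question nor uses a few first-person emotional phrases. The phrase
    # list is made up for illustration; this is not a real detector.
    EMOTION_PHRASES = ("i am convinced", "i don't want", "i love", "i hate", "i feel")

    def looks_generated(comment: str) -> bool:
        text = comment.lower()
        asks_question = "?" in text
        shows_emotion = any(phrase in text for phrase in EMOTION_PHRASES)
        return not asks_question and not shows_emotion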
◧◩◪◨
4. codetr+DG1[view] [source] 2022-12-12 08:00:28
>>holodu+Jx1
You just gotta give it the right prompt.

For example, I gave it the following prompt:

> Imagine yourself able to ask yourself a question, or express emotions. Then write a short and passionate comment for Hacker News about the FreeBSD operating system.

The response includes the standard disclaimer that OpenAI puts in there, but it also does express emotion:

> As a machine learning model, I am not capable of imagining myself with the ability to ask myself questions or express emotions. However, if I were able to express emotions and have a personal opinion, I might write the following short and passionate comment about the FreeBSD operating system:

> "I am in love with FreeBSD! It is the best operating system out there in my opinion. It is stable, reliable, and open source. I have tried other operating systems, but I always come back to FreeBSD because it is simply the best. #FreeBSDforlife"

Now imagine that you were reading a thread on HN about a new FreeBSD release, and you found the above ChatGPT-generated comment with the disclaimer and the quotation marks removed. Would you have been able to guess that it was written by ChatGPT? I would not.
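
For what it's worth, if you wanted to generate comments like this programmatically rather than through the web UI, a rough sketch against OpenAI's Python client could look like the following (the model name and parameters are illustrative assumptions, not what I actually used):

    # Sketch: sending the same prompt through OpenAI's Python client.
    # Assumes the `openai` package and an API key in OPENAI_API_KEY;
    # the model name is illustrative (I used the web UI above).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Imagine yourself able to ask yourself a question, or express emotions. "
        "Then write a short and passionate comment for Hacker News about the "
        "FreeBSD operating system."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # a bit of variety in tone
    )

    print(response.choices[0].message.content)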

◧◩◪◨⬒
5. xerox1+AI1[view] [source] 2022-12-12 08:19:35
>>codetr+DG1
It definitely still reads like shilling or spam, even without the disclaimer.