zlacker

[return to "Ask HN: Should HN ban ChatGPT/generated responses?"]
1. dang+zk1[view] [source] 2022-12-12 04:07:29
>>djtrip+(OP)
They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either!

Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:

>>33911426 (Dec 2022)

>>32571890 (Aug 2022)

>>27558392 (June 2021)

>>26693590 (April 2021)

>>24189762 (Aug 2020)

>>22744611 (April 2020)

>>22427782 (Feb 2020)

>>21774797 (Dec 2019)

>>19325914 (March 2019)

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.

The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.

* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.

2. im3w1l+7p1[view] [source] 2022-12-12 04:55:00
>>dang+zk1
> If we reach the point where the humans simply can't do better, well, then it won't matter.

I disagree with this. The exact same comment written by a human is more valuable than one written by a bot.

For example, imagine I relate something that actually happened to me vs. a bot making up a story, byte-for-byte identical. Both could be realistic, with several good lessons baked in. Yet one is more valuable, because it is true.

3. xcamba+mw1[view] [source] 2022-12-12 06:14:34
>>im3w1l+7p1
From the perspective of the receiver of the message, there is no such thing as the story being true or not.

If it's byte-for-byte the same story and I don't know whether the author is a human or a bot and I believe the story, the same reaction will be triggered at every level. The emotions, the symbolism, the empathy: all the same, whoever the author is.

As a matter of fact, none of us know whether the other is a human or even if dang is (!), because it is orthogonal to the contents and discussion.

What is it that you don't like: that the story is made up, or that it is made up (possibly) by a bot? In the first case, what is your opinion of made-up stories by humans, such as novels? In the second case, what is your opinion of objects made by robots, such as your car or phone?

Unless I can tell whether you are flesh and blood or not, my acceptance of your story depends only on the story itself, not on whether it happened to a human.

4. roenxi+dE1[view] [source] 2022-12-12 07:35:27
>>xcamba+mw1
That the nature of the storyteller matters more than the nature of the story is a bias. One of the more compelling robot-takeover scenarios is that they turn out to be much better at making decisions, because a machine can be programmed to weight strong evidence more heavily than an emotionally compelling story.

It is visible even in this thread. im3w1l cares about the teller of the story because stories are the medium for relating to another human's experience. Which is fine, but that relating is probably part of the decision-making process. And that is a terrible way to make decisions when good alternatives (like poverty statistics, crime statistics, measures of economic success, measures of health & wellbeing) exist.

A fake story out of a chatbot which leads to people making good decisions is more valuable than the typical punter's well-told life experiences. People wouldn't like that though.
