zlacker

[return to "Ask HN: Should HN ban ChatGPT/generated responses?"]
1. dang+zk1[view] [source] 2022-12-12 04:07:29
>>djtrip+(OP)
They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either!

Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:

>>33911426 (Dec 2022)

>>32571890 (Aug 2022)

>>27558392 (June 2021)

>>26693590 (April 2021)

>>24189762 (Aug 2020)

>>22744611 (April 2020)

>>22427782 (Feb 2020)

>>21774797 (Dec 2019)

>>19325914 (March 2019)

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.

The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.

* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.

◧◩
2. bileka+lI1[view] [source] 2022-12-12 08:17:36
>>dang+zk1
> Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter. But that's a ways off.

I love this response way more than I should.

◧◩◪
3. asne11+G42[view] [source] 2022-12-12 11:42:07
>>bileka+lI1
Why is that?

It's not about love or "should."

Rather, we __must__ continually do better to maintain superiority. Can you imagine what would unfold if humans gave that up to a logical system? At best, we offload most things to the bot, become dependent, and lose the cognitive (and physical?) abilities we no longer use. At worst, a more capable thing determines that (a group of) humans are not logical, and then moves to solve that problem as it was trained to.

Either way, I really like the scenario where we instead harness the power of AI to solve existential problems for which we've been ill-equipped (will Yellowstone erupt this year? how could the world share resources more effectively?) and get smarter in the process.

Can we do that? I have faith :-)

◧◩◪◨
4. jacque+272[view] [source] 2022-12-12 12:01:15
>>asne11+G42
The problem is that (1) human hardware is fixed, (2) computer hardware is variable and getting better all the time, and (3) computer software is variable and getting better all the time. The question, then, is if and when they cross over, and the recent developments in this domain have me seriously worried that such a crossover is inevitable. A human/AI hybrid may well be slowed down by the human bit...
◧◩◪◨⬒
5. galang+Lc2[view] [source] 2022-12-12 12:50:08
>>jacque+272
We could work on (1), right? Or, as our biological component ceases to be useful to our hybrid self, we can discard it, like a baby tooth.

We thought chess or Go defined humanity; it turns out it's driving.

◧◩◪◨⬒⬓
6. jacque+gd2[view] [source] 2022-12-12 12:54:04
>>galang+Lc2
No thanks; as for me, I'll be happy to stay just biological and interact with computers through keyboards and screens.