zlacker

[return to "Ask HN: Should HN ban ChatGPT/generated responses?"]
1. dang+zk1[view] [source] 2022-12-12 04:07:29
>>djtrip+(OP)
They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either!

Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:

>>33911426 (Dec 2022)

>>32571890 (Aug 2022)

>>27558392 (June 2021)

>>26693590 (April 2021)

>>24189762 (Aug 2020)

>>22744611 (April 2020)

>>22427782 (Feb 2020)

>>21774797 (Dec 2019)

>>19325914 (March 2019)

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.

The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.

* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.

◧◩
2. jacque+F62[view] [source] 2022-12-12 11:57:58
>>dang+zk1
> But that's a ways off.

Given the jumps in output quality between GPT-1, GPT-2, and GPT-3, that may not be as far off as I would like it to be.

It reminds me of the progression of computer chess. From 'nice toy' to 'beats the world's best human', 1949 to the 'Man vs Machine World Team Championship' in 2004, is 55 years, but from Sargon (1978) to Deep Blue (1997) is only 19 years. For years we thought there was something unique about chess (and Go, for that matter) that made the game at its core a human domain, but those who were following this more closely saw that the progression would eventually reach a point where the bulk of players could no longer win against programs running on off-the-shelf hardware.

GPT-3 is at a point where you could probably place its output somewhere on the scale of human intellect, depending on the quality of the prompt engineering and the subject matter. Sometimes it produces utter garbage, but often enough it already produces something that isn't all that far off from what a human might plausibly write. The fact that we are having this discussion is proof of that. Given a few more years and iterations 4, 5, and 6, the relevant question is whether we are months, years, or decades away from that point.

The kind of impact this will have on labor markets the world over is seriously underestimated. And even though GPT-3's authors have side-stepped a thorny issue by simply not feeding it information on current affairs in the training corpus, if chess development is any guide, the fact that you need a huge computer to train the model today will be moot at some point, and anybody will be able to train their own LLM. Then the weaponization of this tech will begin for real.

◧◩◪
3. tambou+jf2[view] [source] 2022-12-12 13:09:59
>>jacque+F62
What I fear the most is that we’ll keep at this “fake it till you make it” approach and skip the philosophical questions, such as what consciousness really is.

We’re probably on the verge of having a bot that reports itself as conscious and convinces everyone that it is so. We’ll then never know how it got there, whether it really did, or whether it just pretends so well that it doesn’t matter.

It feels like it’s our last chance as a culture to tackle that question. When you can pragmatically achieve something, the “how” loses a bit of its appeal. We may not completely understand fluid dynamics, but if it flies, it flies.

◧◩◪◨
4. jacque+4g2[view] [source] 2022-12-12 13:15:49
>>tambou+jf2
The answer may well be 'consciousness is the ability to fake having consciousness well enough that another conscious being can't tell the difference' (which is the essence of the Turing test). Because if you're looking for a mechanism of consciousness, you'd be hard put to pinpoint it in the 8 billion or so brains at your disposal for that purpose, no matter how many of them you open up. They'll all look like so much grisly matter from a biological point of view, and like a very large neural net from a computational one. But you can't say 'this is where it is located and this is how it works'; only some vague approximations.
◧◩◪◨⬒
5. tambou+mh2[view] [source] 2022-12-12 13:25:45
>>jacque+4g2
Sure, and that’s what I’m trying to say. Is being conscious just fooling yourself and others really well, or is there some new property that eventually emerges from large enough neural networks and sensory inputs? The philosophical zombie is one of the most important existential questions that we may be at the cusp of ignoring.
◧◩◪◨⬒⬓
6. jacque+oi2[view] [source] 2022-12-12 13:35:20
>>tambou+mh2
'Philosophical zombie' is a nice way of putting it; I used the term 'articulate idiot' but yours is much more eloquent.

I'm not sure it is an answerable question though, today or possibly even in the abstract.
