zlacker

[return to "Ask HN: Should HN ban ChatGPT/generated responses?"]
1. dang+zk1 2022-12-12 04:07:29
>>djtrip+(OP)
They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either!

Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:

>>33911426 (Dec 2022)

>>32571890 (Aug 2022)

>>27558392 (June 2021)

>>26693590 (April 2021)

>>24189762 (Aug 2020)

>>22744611 (April 2020)

>>22427782 (Feb 2020)

>>21774797 (Dec 2019)

>>19325914 (March 2019)

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.

The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.

* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.

2. jacque+F62 2022-12-12 11:57:58
>>dang+zk1
> But that's a ways off.

Given the jumps in output quality between GPT-1, GPT-2, and GPT-3, that may not be as far off as I would like it to be.

It reminds me of the progression of computer chess. Going from 'nice toy' to 'beats the world's best human', from 1949 to the 'Man vs Machine World Team Championships' in 2004, is 55 years, but from Sargon (1978) to Deep Blue (1997) is only 19 years. For years we thought there was something unique about chess (and Go, for that matter) that made the game at its core a human domain, but those who were following this more closely saw that the progression would eventually lead to a point where the bulk of players could no longer win against programs running on off-the-shelf hardware.

GPT-3 is at a point where you could probably place its output somewhere on the scale of human intellect, depending on the quality of the prompt engineering and the subject matter. Sometimes it produces utter garbage, but often enough it already produces stuff that isn't all that far off from what a human might plausibly write. The fact that we are having this discussion is proof of that. Given a few more years and iterations 4, 5, and 6, the relevant question is whether we are months, years, or decades away from that point.

The kind of impact this will have on labor markets the world over is seriously underestimated. And even though GPT-3's authors have side-stepped a thorny issue by simply not feeding it information on current affairs in the training corpus, if chess development is any guide, the fact that you need a huge computer to train the model today will be moot at some point, when anybody can train their own LLM. Then the weaponization of this tech will begin for real.

3. contra+x82 2022-12-12 12:11:56
>>jacque+F62
Sure, it might produce convincing examples of human speech, but it fundamentally lacks an internal point of view that it can express, which limits how well it can argue anything.

It is of course possible that it might (eventually) be convincing enough that no human can tell, which would be problematic because it would suggest human speech is indistinguishable from a knee-jerk response that doesn't require you to communicate any useful information.

Things would be quite different if an AI could interpret new information and form opinions, but even if GPT could be extended to do so, right now it doesn't seem to have the capability to form opinions or ingest new information (beyond a limited short-term memory that it can use to have a coherent conversation).

4. krageo+Zl2 2022-12-12 14:02:35
>>contra+x82
You are arguing that a piece of software lacks a metaphorical soul (something that cannot be measured but that humans uniquely have and nothing else does). That's an incredibly poor argument to make in a context where folks want interesting conversation. Religion (or religion-adjacent concepts such as this one) is a conversational nuke: it signals to everyone else that the conversation is over, because a discussion of religion cannot take forms that are fundamentally interesting. It's all opinion, shouted back and forth.

Edit: Because it has been a prominent feature in the responses so far, I will clarify that there is an emphasis on "all" in "all opinion". As in, it is nothing but whatever someone believes, with no foundation in anything measurable or observable.

5. soraki+6o2 2022-12-12 14:17:08
>>krageo+Zl2
I didn’t read it as being a religious take. They appear to be referring more to embodiment (edit: alternatively, online/continual learning), which these models do not possess. When we start persisting recurrent states beyond the current session, we might be able to consider that limited embodiment. Even then, the models will have no direct experience interacting with the subjects of their conversations. It's all second-hand, from the training data.
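
To make that concrete, here is a toy sketch of what persisting a recurrent state beyond the current session could look like. The GRU, the sizes, and the file name are all invented for illustration; this is not how GPT or any deployed model actually works, just the general idea of state that survives between sessions:

    # Toy illustration only: a recurrent state saved to disk after each session
    # and reloaded at the start of the next one, so "experience" carries over.
    import torch
    import torch.nn as nn

    rnn = nn.GRU(input_size=64, hidden_size=128, batch_first=True)

    def run_session(tokens, state_path="agent_state.pt"):
        try:
            h = torch.load(state_path)   # resume the state the last session left behind
        except FileNotFoundError:
            h = torch.zeros(1, 1, 128)   # very first session: blank state
        out, h = rnn(tokens, h)          # the state evolves as input is processed
        torch.save(h, state_path)        # persist it beyond this session
        return out

    # Calling run_session(torch.randn(1, 10, 64)) twice continues from the
    # state the first call saved, instead of starting from zero each time.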
6. krageo+wt2 2022-12-12 14:47:47
>>soraki+6o2
Your own experience is also second-hand, so what is left is the temporal factor (you experience and learn continuously and with a small feedback loop). I do not see how there can be some cutoff where the feedback loop is fast enough that something is "truly" there. This is a nebulous argument that I do not see ending even when we actually get to human-equivalent learning response times, because the box is not bounded and is fundamentally based on human exceptionalism. I will admit I may be biased because of the conversations I've had on the subject in the past.
7. soraki+Xz2 2022-12-12 15:20:35
>>krageo+wt2
"Second-hand" may not have been the best phrasing on my part, I admit. What I mean is that the model only has the textual knowledge in its dataset to infer what "basketball" means. It has never seen or heard a game, not even through someone else's eyes and ears. It has never held and felt a basketball. Even visual language models today only get a single photo. It's an open question how much that matters and whether that experience can be conveyed to the model entirely through language.

There are entire bodies of literature addressing things the current generation of available LLMs is missing: online and continual learning, retrieval from short-term memory, the experience of watching all YouTube videos, etc.

I agree that human exceptionalism and vitalism are common in these discussions, but we can still discuss model deficiencies from a research and application point of view without assuming a religious argument.
