zlacker

[return to "Ask HN: Should HN ban ChatGPT/generated responses?"]
1. dang+zk1[view] [source] 2022-12-12 04:07:29
>>djtrip+(OP)
They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either!

Edit: It's a bit hard to point to past explanations since the word "bots" appears in many contexts, but I did find these:

>>33911426 (Dec 2022)

>>32571890 (Aug 2022)

>>27558392 (June 2021)

>>26693590 (April 2021)

>>24189762 (Aug 2020)

>>22744611 (April 2020)

>>22427782 (Feb 2020)

>>21774797 (Dec 2019)

>>19325914 (March 2019)

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error.

The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter*. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never comes up.

* Edit: er, I put that too hastily! I just mean it will be a different problem at that point.

◧◩
2. jacque+F62[view] [source] 2022-12-12 11:57:58
>>dang+zk1
> But that's a ways off.

Given the jumps in output quality between '1', '2' and '3', that may not be as far off as I would like it to be.

It reminds me of the progression of computer chess. From 'nice toy' to 'beats the world's best human' - from 1949 to the 'Man vs Machine World Team Championships' in 2004 - is 55 years, but from Sargon (1978) to Deep Blue (1997) is only 19 years. For years we thought there was something unique about chess (and Go, for that matter) that made the game fundamentally a human domain, but those who were following this more closely saw that the progression would eventually reach a point where the bulk of players could no longer win against programs running on off-the-shelf hardware.

GPT-3 is at a point where you could probably place its output somewhere on the scale of human intellect, depending on the quality of the prompt engineering and the subject matter. Sometimes it produces utter garbage, but often enough it already produces something that isn't all that far off from what a human might plausibly write. The fact that we are having this discussion is proof of that. Given a few more years and iterations 4, 5 and 6, the relevant question is whether we are months, years or decades away from that point.

The kind of impact this will have on labor markets the world over is seriously underestimated. GPT-3's authors have side-stepped a thorny issue by simply not feeding it information on current affairs in the training corpus, but if chess development is any guide, the fact that you need a huge computer to train the model today will be moot at some point, once anybody can train their own LLM. Then the weaponization of this tech will begin for real.

◧◩◪
3. mdp202+2h2[view] [source] 2022-12-12 13:23:33
>>jacque+F62
> on the scale of human intellect

Where is the module that produces approximations to true and subtle insights about matters? Where is the "critical thinking" plugin, and how is it vetted?

How do you value intelligence: on the form, or on the content? Take two Authors: how do you decide which one is more intelligent?

> the progression of computer chess

?! Those are solvers superseded by different, more effective solvers aimed at a specific goal... These products, in context, supersede "Eliza"!

◧◩◪◨
4. jacque+Gi2[view] [source] 2022-12-12 13:38:02
>>mdp202+2h2
Well, for starters we could take your comment and compare it to GPT-3 output to see which one makes more sense.
◧◩◪◨⬒
5. mdp202+Gk2[view] [source] 2022-12-12 13:53:15
>>jacque+Gi2
> compare

Exactly. Which one "/seems/ to make sense" and which one has the "juice".

Also: are you insinuating anything? Do you believe your post is appropriate?

Edit: but very clearly you misunderstood my post - not only as you suggest with your (very avoidable) expression, but also in fact. My point implied that "a good intellectual proposal should not happen by chance": modules should be implemented for it. Even if S (for Simplicius) said something doubtful - which is found copiously even in our already "selected" pages - and engine E constructed something which /reports/ some insight, that would be chancy, random, irrelevant - not the way we are supposed to build things.

◧◩◪◨⬒⬓
6. krageo+qm2[view] [source] 2022-12-12 14:05:41
>>mdp202+Gk2
I genuinely cannot tell what you are talking about.
◧◩◪◨⬒⬓⬔
7. mdp202+3o2[view] [source] 2022-12-12 14:16:54
>>krageo+qm2
No problem, let us try and explain.

Intelligence is a process in which "you have thought over a problem at length" (this is also our good old Einstein, paraphrased).

What is that "thinking"?

You have taken a piece of your world model (the piece subjected to your investigation), run mental experiments on it, criticized, _criticized_ the possible statements ("A is B") that could be applied to it, and arrived at conclusions of different weight (more credible, more tentative).

For something to be Intelligent, it must follow that process. (Whatever does it has an implemented "module" that does it.)

Without such a process, how can an engine be attributed the quality of Intelligence? It may "look" like it - which is even more dangerous. "Has it actually thought about it?" should be a doubt duly present in awareness.

About the original post (making its statements more explicit):

That "module" is meant to produce «insights» that go (at least) in the direction of «true», of returning true statements about some "reality", and/or in the direction of «subtle», as opposed to "trivial". That module implements "critical thinking" - there is no useful Intelligence without it. Intelligence is evaluated in actually solving problems: reliably providing true statements and good insights (certainly not for verosimilarity, which is instead a threat - you may be deceived). Of two Authors, one is more intelligent because its statements are truer or more insightful - in a /true/ way (and not because, as our good old J. may have been read, one "seems" to make more sense. Some of the greatest Authors have been accused of possibly not making sense - actual content is not necessarily directly accessible); «/true/ way» means that when you ask a student about Solon you judge he has understood the matter not just because he provided the right dates for events (he has read the texts), but because he can answer intelligent questions about it correctly.

◧◩◪◨⬒⬓⬔⧯
8. krageo+tr2[view] [source] 2022-12-12 14:36:08
>>mdp202+3o2
Thank you for going into it.

You make an absolute pile of assumptions here and the tl;dr appears to be that humans (or just you) are exceptional and inherently above any sort of imitation. I do not find such argumentation to be compelling, no matter how well dressed up it is.

◧◩◪◨⬒⬓⬔⧯▣
9. mdp202+ft2[view] [source] 2022-12-12 14:46:13
>>krageo+tr2
Devastatingly bad reading, Krageon: I wrote that to have Intelligence in an Engine you have to implement at least some Critical Thinking into it (and that it has to be a "good" one), and you understood that I claimed "you cannot implement it" - again, after I had insisted that "you have to build it explicitly" (or at least you have to build something that in the end happens to do it)?!

You have to build it and you have to build that.

The assumption there is that you cannot call something Intelligent without it having Critical Thinking (and other things - Ontology building etc). If you disagree, provide an argument for it.

And by the way: that «or just you», again, and again without real grounds, cannot be considered part of the "proudest moments" of these pages.

--

Edit:

Disambiguation: of course, with "intelligence" you may mean different things. 'intelligence' just means "the ability to look inside". But "[useful] Intelligence" is the one with well-trained Critical Thinking (and more).

◧◩◪◨⬒⬓⬔⧯▣▦
10. krageo+Uv2[view] [source] 2022-12-12 14:59:01
>>mdp202+ft2
The reading is not bad, I am just stuck at the point of the conversation where you claim to have something figured out that is not yet figured out (the nature of consciousness, or what it means to be intelligent). There is no scientific or philosophical consensus for it, so it is my instinct to not engage too deeply with the material. After all, what is the point? No doubt it seems very consistent to you, but it does not come across as coherent to me. That doesn't make my reading "devastatingly bad", which you could reasonably say was the case if you had gotten across and indeed convinced most folks that you speak to about this. Instead, you must consider it is either the communication or the reasoning that is devastatingly bad.

All of that said, your method of response (not courteous, which can be okay) and the content of your posts (bordering on the delusional, which is absolutely not okay) are upsetting me. I will end my part of the chain here so I do not find myself in an inadvertent flame war.

◧◩◪◨⬒⬓⬔⧯▣▦▧
11. mdp202+6y2[view] [source] 2022-12-12 15:10:57
>>krageo+Uv2
> the nature of consciousness

As per my edit in the parent post, I am talking about "useful" Intelligence: that may be entirely different from consciousness. A well-matured thought, "thought at length", will probably be useful, while a rushed thought will probably be detrimental. I am not speaking about consciousness. I am not even speaking of "natural intelligence": I am speaking about Intelligence as a general process. That process is close to "How well, how deeply have you thought about it?".

> my reading "devastatingly bad"

What made your reading devastatingly bad is the part in which you supposed that somebody said "it cannot be implemented" - you wrote «above any sort of imitation». Having insisted on "modules to be implemented", you should have drawn the opposite conclusion: the constituents of Intelligence - by which I mean the parts of the process in that sort of Intelligence that "says smart things having produced them with a solid process" (not relevant to "consciousness") - should be implemented.

> delusional

Again very avoidable. If you find that something is delusional, justify your view.

> flame wars

I am just discussing, trying to show what I find evident, and reasoning. Hint: when you want to avoid flame wars, "keep it rational".
