1. bragr+(OP)[view] [source] 2022-12-11 18:33:17
Seems like a non-problem. If it's a dull or just inane comment, it'll get downvoted out of existence. If the bot actually produces something interesting about the topic, what's the problem?
replies(4): >>ALittl+w1 >>Aachen+L1 >>rtkwe+Qh1 >>wpietr+rl1
2. ALittl+w1[view] [source] 2022-12-11 18:40:50
>>bragr+(OP)
I think it's a capacity problem. Right now, the "system" (legitimate HN users) has the capacity to deal with the dull and inane comments provided by dull and inane human commenters. If those comments become automated, their number can be increased arbitrarily until the system lacks the capacity to deal with them.

If there were one or ten bad comments in this thread, no problem. What if there were ten thousand?

replies(3): >>pulvin+b4 >>bragr+N5 >>SoftTa+xm1
3. Aachen+L1[view] [source] 2022-12-11 18:42:15
>>bragr+(OP)
Not to take a side, but to answer the question: imbalance. It takes less than thirty seconds for a computer to generate basically any amount of text, faster than anyone can formulate a response, faster even than anyone can read. It could theoretically swamp a topic.

Is that realistic? No idea. I haven't made up my mind on this topic yet.

replies(1): >>cwkoss+Eo1
4. pulvin+b4[view] [source] [discussion] 2022-12-11 18:54:29
>>ALittl+w1
But it has always been easy for simple bots to mass-produce bad comments. Nothing changes if they're still bad.

I'm concerned that posts that are written too well will be thrown out: a race to the bottom in legibility and grammar just to make posts look more human, even when they are written by bots.

5. bragr+N5[view] [source] [discussion] 2022-12-11 19:03:05
>>ALittl+w1
>What if there were ten thousand bad comments?

The thread-collapse control "[–]" already exists.

6. rtkwe+Qh1[view] [source] 2022-12-12 04:27:01
>>bragr+(OP)
The biggest issue is that GPT is often confidently incorrect about things, but good enough at sounding confident and authoritative, so if people got into the habit of using it, the signal-to-noise ratio would degrade.
replies(2): >>romanh+uj1 >>turmer+Hp6
7. romanh+uj1[view] [source] [discussion] 2022-12-12 04:43:23
>>rtkwe+Qh1
Doesn't sound all that different from human comments, to be honest.
replies(1): >>rtkwe+Xq1
8. wpietr+rl1[view] [source] 2022-12-12 05:06:22
>>bragr+(OP)
Because this places the burden on users to sort it out. And you're ignoring a third category of comments: bad ones that are glib enough to garner upvotes.
9. SoftTa+xm1[view] [source] [discussion] 2022-12-12 05:17:39
>>ALittl+w1
To what benefit?

HN karma is pretty worthless outside of enabling a few capabilities here, and it's rather easy to attain those thresholds with some thoughtful participation.

What would be the point of flooding HN with tens of thousands of bot comments?

replies(2): >>Troubl+3n1 >>andsoi+Wt1
10. Troubl+3n1[view] [source] [discussion] 2022-12-12 05:23:31
>>SoftTa+xm1
Maybe adding a few more hay bales to your stylometry fingerprint?
11. cwkoss+Eo1[view] [source] [discussion] 2022-12-12 05:39:18
>>Aachen+L1
Your concern seems to imply that you have to get the last word in to win an internet argument, which I wholeheartedly disagree with. On the internet, as in real life, replying a lot and replying quickly is more often a sign of weakness.
replies(2): >>andsoi+or1 >>Aachen+cY1
12. rtkwe+Xq1[view] [source] [discussion] 2022-12-12 06:04:37
>>romanh+uj1
Incorrect human comments are at least written by people, and if the person isn't actually knowledgeable about a subject, that tends to show in the response more than it does with GPT, which can mimic all the right forms to appear smarter and more correct. Beyond anything else, when a person writes a wrong answer they at least spent the time to write it, versus the copy-paste automation of using ChatGPT. That little bit of effort is a speed bump well worth having: getting tons of people to spam confidently incorrect things is more work than having a dozen instances of a bot do the same a hundred times faster.
replies(1): >>js8+rE1
13. andsoi+or1[view] [source] [discussion] 2022-12-12 06:10:16
>>cwkoss+Eo1
These replies won’t come from a single account, so humans will still be drowned out by bots.
14. andsoi+Wt1[view] [source] [discussion] 2022-12-12 06:33:46
>>SoftTa+xm1
> What would be the point of flooding HN with tens of thousands of bot comments?

Malice.

15. js8+rE1[view] [source] [discussion] 2022-12-12 08:24:17
>>rtkwe+Xq1
I have seen firsthand the effect of wrong comments (I have written them) being upvoted more than the correcting responses that came afterwards. So yeah, it can happen with humans too, on occasion.
replies(1): >>rtkwe+Kd2
16. Aachen+cY1[view] [source] [discussion] 2022-12-12 11:24:20
>>cwkoss+Eo1
I never said the bots would be posting the last word.

Also, I don't deal in "signs of weakness"; I try to look at the content of arguments instead. That content is what takes time to evaluate and argue about, and that time is wasted if I'm talking to a (semi-)automated system.

17. rtkwe+Kd2[view] [source] [discussion] 2022-12-12 13:35:14
>>js8+rE1
Sure, and there's no real way around that other than community norms: downvoting the comments that were corrected and upvoting the correction. HN makes that a bit harder by locking voting away from so many people, but it probably works better than open voting.
18. turmer+Hp6[view] [source] [discussion] 2022-12-13 15:57:34
>>rtkwe+Qh1
imo you have described HN in a nutshell