zlacker

[parent] [thread] 8 comments
1. Someon+(OP)[view] [source] 2026-01-20 23:55:29
I see a lot of these "this is LLM" comments, but they rarely add value, sidetrack the discussion, and appear to come into direct conflict with several of HN's comment guidelines (at least by my reading).

I think raising that the raw Valve response wasn't provided is a valid, and correct, point to raise.

The problem is that that valid point is surrounded by what seems to be a character attack, based on little evidence, and that seemingly mirrors many of these "LLM witch-hunt" comments.

Should HN's guidelines be updated to directly call out this stuff as unconstructive? Pointing out the quality/facts of an article is one thing; calling out suspected tool usage without any evidence is quite another.

replies(2): >>anonym+N >>krapp+L1
2. anonym+N[view] [source] 2026-01-21 00:00:29
>>Someon+(OP)
Counterproposal: Let's update HN's guidelines to ban blatant misinformation generated by a narrative storyteller spambot. My experience using HN would be significantly better if these threads were killed and repeat offenders banned.
replies(2): >>gruez+22 >>sublin+T6
3. krapp+L1[view] [source] 2026-01-21 00:06:45
>>Someon+(OP)
LLM generated comments aren't allowed on HN[0]. Period.

If the other instances where HN users quote the guidelines or tone-police each other are allowed, then calling out generated content should be allowed too.

It's constructive to do so because there is obvious and constant pressure to normalize the use of LLM generated content on this forum as there is everywhere else in our society. For all its faults and to its credit Hacker News is and should remain a place where human beings talk to other human beings. If we don't push back against this then HN will become nothing but bots posting and talking to other bots.

[0]>>45077654

replies(1): >>Someon+p5
4. gruez+22[view] [source] [discussion] 2026-01-21 00:09:04
>>anonym+N
>Counterproposal: Let's update HN's guidelines to ban blatant misinformation generated by a narrative storyteller spambot.

This will inevitably get abused to shut down dissent. When there's something people vehemently disagree with, detractors come out of the woodwork to nitpick every single flaw. Find one inconsistency in a blog post about Gaza/ICE/covid? Well all you need to do is also find a LLM tell, like "it's not x, it's y", or an out of place emoji and you can invoke the "misinformation generated by a narrative storyteller spambot" excuse. It's like the fishing expedition for Lisa Cook, but for HN posts.

5. Someon+p5[view] [source] [discussion] 2026-01-21 00:34:01
>>krapp+L1
The problem is that people cannot prove one way or the other that things are LLM generated, so it is just a baseless witch hunt.

Things should be judged for their quality, and comments should try to contribute positively to the discussion.

"I suspect they're a witch" isn't constructive nor makes HN a better place.

replies(1): >>krapp+j7
6. sublin+T6[view] [source] [discussion] 2026-01-21 00:47:05
>>anonym+N
The constant accusations that everything is written by bots is itself a type of abuse and misinformation.
7. krapp+j7[view] [source] [discussion] 2026-01-21 00:51:06
>>Someon+p5
It isn't a baseless witch hunt if the witches are real.

Creating a social stigma against the use of LLMs is constructive and necessary. It's no different than HN tone policing humor, because allowing humor would turn HN into Reddit.

replies(1): >>Someon+Uu
8. Someon+Uu[view] [source] [discussion] 2026-01-21 04:51:07
>>krapp+j7
How is randomly branding people without knowing "constructive and necessary"? It seems completely self-defeating: you're going to make the accusation meaningless, because if everything is "LLM" then nothing is.
replies(1): >>saghm+ZG
9. saghm+ZG[view] [source] [discussion] 2026-01-21 06:55:03
>>Someon+Uu
I get the point you're trying to make, but it's worth pointing out that the entire point is that it's not people getting branded but nebulous online entities that may or may not be people. It's a valid criticism that the accuracy of these claims is not measurable, but I think it's equally true that we are no longer in a world where we can be sure that no content like this is from an LLM either. It's not at all obvious to me that the assumption that everything is from a human is more accurate than the aggregate set of claims of LLMs, so describing it as "branding people" seems like it's jumping to conclusions in the same way.