The second thing I have to point out is that bug bounty programs are inundated with garbage from people who don't know anything about programming and just blindly trust whatever the LLM says. We even have the 'author' reproducing this blind reinforcement in the article: "Tested Jan 2026. Confirmed working."
The third thing I have to point out is that the response from Valve is not actually shown. We, the reader, are treated to an LLM-generated paraphrase of something they may or may not have actually said.
Is it possible this issue is real and that Valve responded the way they did? Perhaps, but the article alone leaves me extremely skeptical based on past experiences with LLM-generated bug bounty reports.
Is your LLM detector on a hair trigger? At most the headings seem like LLM, but the rest doesn't look LLM-generated.
- "This isn't just a "status" bug. It's a behavioral tracker."
- "It essentially xxxxx, making yyyyyy."
- As you mentioned, the headings
- Compound sentences used almost exclusively in the "x, but y" format.
This is clearly LLM generated text, maybe just lightly edited to remove some em dashes and stuff like that.
After you read code for a while, you start to figure out the "smell" of who wrote what code. It's the same for any other writing. I was literally reading a New Yorker article before this, and this is the first HN article I opened today; the writing difference is jarring. It's very easy to smell LLM-generated text after reading a few non-LLM articles.
Certainly, public pressure is another way :)
I would be surprised if responsible Valve staff would agree that this is not something they should fix at some point.
I think pointing out that the raw Valve response wasn't provided is a valid, and correct, criticism.
The problem is that that valid point is surrounded by what seems to be a character attack, based on little evidence, that seemingly mirrors many of these "LLM witch-hunt" comments.
Should HN's guidelines be updated to directly call out this stuff as unconstructive? Pointing out the quality/facts of an article is one thing; calling out suspected tool usage without any evidence is quite another.
LLMs didn’t randomly invent their own unique style; they learned it from books. This is just how people write when they get slightly more literate than today's texting-era kids.
And these suspicions are in vain even if they happen to be right this one time. LLMs are champions of copying styles; there is no problem asking one to slap Gen Z slang all over the post and finish it with the phrase “I literally can’t! <sad-smiley>”. “Detecting LLMs” doesn’t get you ahead of LLMs, it only gets you ahead of the person using them. Why not appreciate an example of concise and on-point self-expression and focus on the usefulness of the content?
If any of the other instances where HN users have quoted the guidelines or tone-policed each other are allowed, then calling out generated content should be allowed too.
It's constructive to do so because there is obvious and constant pressure to normalize the use of LLM generated content on this forum as there is everywhere else in our society. For all its faults and to its credit Hacker News is and should remain a place where human beings talk to other human beings. If we don't push back against this then HN will become nothing but bots posting and talking to other bots.
This will inevitably get abused to shut down dissent. When there's something people vehemently disagree with, detractors come out of the woodwork to nitpick every single flaw. Find one inconsistency in a blog post about Gaza/ICE/covid? Well, all you need to do is also find an LLM tell, like "it's not x, it's y", or an out-of-place emoji, and you can invoke the "misinformation generated by a narrative storyteller spambot" excuse. It's like the fishing expedition for Lisa Cook, but for HN posts.
Your point about Valve's response is valid though.
Again, clearly? I can see how people might be tipped off at the blog post because of the headings (and apparently the "it's not x, it's y" pattern), but I can't see anything in the comments that would make me think it was "clearly" LLM-generated.
That was a typo on my side, should be "security".
>It seems fair to me that the security reporter vendor triaged this as not a security report. It feels like saying "the wedding venue kicked me out" when actually the third party bartender just cut you off.
For all intents and purposes, getting your report marked as "informative" or whatever is the same as your report being rejected. To claim otherwise is just playing word games, like "it's not a bug, it's a feature". That's not to say that the OP is objectively correct that it's a security issue, but for the purposes of this argument what OP wrote (i.e. 'Valve: "WontFix"' and Valve closed it as "Informative.") is approximately correct. If you contact a company to report a bug, and that company routes it to some third-party support contractor (Microsoft does this, I think), and the support contractor replies "not a bug, won't fix", it's fair to characterize that as "[company] rejected my bug report!", even if the person who did it was some third-party contractor.
Things should be judged for their quality, and comments should try to contribute positively to the discussion.
"I suspect they're a witch" isn't constructive nor makes HN a better place.
That is not what happened, though. You can contact Valve/Steam directly. They specifically went to the third-party vendor, because the third-party vendor offers a platform to give them credit and pay them for finding security exploits. It is not the responsibility of the third-party vendor to manage all bug reports.
Creating a social stigma against the use of LLMs is constructive and necessary. It's no different than HN tone policing humor, because allowing humor would turn HN into Reddit.
I don't know, the wording on their site suggests HackerOne is the primary place to report security issues, not "if you want to get paid use HackerOne, otherwise email us directly".
>For issues with Steam or with Valve hardware products, please visit HackerOne — https://hackerone.com/valve. Our guidelines for responsible disclosure are also available through that program.
One way of describing it is that I've heard the exact same argument/paragraph structure and sentence structure many times with different words swapped in. When you see this in almost every sentence, it becomes a lot more obvious. Similar to how if you read a huge amount of one author, you will likely be able to pick their work out of a lineup. Having read hundreds of thousands of words of LLM generated text, I have a strong understanding of the ChatGPT style of writing.
That said, I do think it would be better to be up front about this sort of thing, and that means it's not really suitable for use on a site like HN where it's against the rules.
Your point about being able to prompt LLMs to sound different is valid, but I'd argue that it somewhat misses the point (although largely because the point isn't being made precisely). If an LLM-generated blog post were actually crafted with care and intent, it would certainly be possible to make it less obvious, but what people are likely actually criticizing is content produced in what I'll call "default ChatGPT" style, which overuses the stylistic elements that get brought up. The extreme density of certain patterns is a signal that the content might have been generated and published without much attention to detail. There was already a huge amount of content out there even before generating it with LLMs became mainstream, so people will necessarily use heuristics to figure out if something is worth their time. The heuristic "heavy use of default ChatGPT style" is useful if it correlates with the more fundamental issues that the top-level comment of this thread points out, and it's clear that there's a sizable contingent of people who have experienced that this is the case.
But to add a personal comment or criticism: I don't like this style of writing. If you can prompt your AI to write in a better style, one that's easier on the eyes (and it works), then please, go ahead.
I agree. I wasn't really trying to make a point. But yes, what I am implying is that posts that you can immediately recognize as AI are low effort posts, which are not worth my time.
But agreed, to me the primary concern is that there's no disclosure, so it's impossible to know if you're talking to a human using an LLM translator, or just wasting your time talking to an LLM.