The second thing I have to point out is that bug bounty programs are inundated with garbage from people who don't know anything about programming and just blindly trust whatever the LLM says. We even have the 'author' reproducing this blind reinforcement in the article: "Tested Jan 2026. Confirmed working."
The third thing I have to point out is that the response from Valve is not actually shown. We, the readers, are treated to an LLM-generated paraphrase of something they may or may not have actually said.
Is it possible this issue is real and that Valve responded the way they did? Perhaps, but the article alone leaves me extremely skeptical based on past experiences with LLM-generated bug bounty reports.
I think pointing out that the raw Valve response wasn't provided is a valid, and correct, criticism.
The problem is that this valid point is surrounded by what seems to be a character attack, based on little evidence, one that seemingly mirrors many of these "LLM witch-hunt" comments.
Should HN's guidelines be updated to directly call out this stuff as unconstructive? Pointing out the quality or factual accuracy of an article is one thing; calling out suspected tool usage without any evidence is quite another.
This will inevitably get abused to shut down dissent. When there's something people vehemently disagree with, detractors come out of the woodwork to nitpick every single flaw. Find one inconsistency in a blog post about Gaza/ICE/covid? Well, all you need to do is also find an LLM tell, like "it's not x, it's y", or an out-of-place emoji, and you can invoke the "misinformation generated by a narrative storyteller spambot" excuse. It's like the fishing expedition for Lisa Cook, but for HN posts.