The second thing I have to point out is that bug bounty programs are inundated with garbage from people who don't know anything about programming and just blindly trust whatever the LLM says. We even have the 'author' reproducing this blind reinforcement in the article: "Tested Jan 2026. Confirmed working."
The third thing I have to point out is that the response from Valve is not actually shown. We, the readers, are treated to an LLM-generated paraphrase of something they may or may not have actually said.
Is it possible this issue is real and that Valve responded the way they did? Perhaps, but the article alone leaves me extremely skeptical based on past experiences with LLM-generated bug bounty reports.
Is your LLM detector on a hair trigger? At most, the headings seem LLM-like, but the rest doesn't look LLM-generated.
- "This isn't just a "status" bug. It's a behavioral tracker."
- "It essentially xxxxx, making yyyyyy."
- As you mentioned, the headings
- A lack of compound sentences outside the "x, but y" format.
This is clearly LLM-generated text, maybe just lightly edited to remove some em dashes and stuff like that.
After you read code for a while, you start to recognize the "smell" of who wrote what. It's the same for any other writing. I was literally reading a New Yorker article before this, and this is the first HN article I opened today; the difference in writing is jarring. It's very easy to smell LLM-generated text after reading a few non-LLM articles.
Again, clearly? I can see how people might be tipped off by the blog post because of the headings (and apparently the "it's not x, it's y" pattern), but I can't see anything in the comments that would make me think it was "clearly" LLM-generated.
One way of describing it is that I've heard the exact same argument/paragraph structure and sentence structure many times with different words swapped in. When you see this in almost every sentence, it becomes a lot more obvious. It's similar to how, if you read a huge amount of one author, you can likely pick their work out of a lineup. Having read hundreds of thousands of words of LLM-generated text, I have a strong understanding of the ChatGPT style of writing.