The second thing I have to point out is that bug bounty programs are inundated with garbage from people who don't know anything about programming and just blindly trust whatever the LLM says. We even have the 'author' reproducing this blind reinforcement in the article: "Tested Jan 2026. Confirmed working."
The third thing I have to point out is that the response from Valve is not actually shown. We, the reader, are treated to an LLM-generated paraphrase of something they may or may not have actually said.
Is it possible this issue is real and that Valve responded the way they did? Perhaps, but the article alone leaves me extremely skeptical based on past experiences with LLM-generated bug bounty reports.
Is your LLM detector on a hair trigger? At best the headings seem LLM-like, but the rest doesn't look LLM-generated.
LLMs didn’t randomly invent their own unique style; they learned it from books. This is just how people write when they get slightly more literate than today's texting-era kids.
And these suspicions are in vain even if they happen to be right this one time. LLMs are champions of copying styles; there is no problem asking one to slap Gen Z slang all over and finish the post with the phrase “I literally can’t! <sad-smiley>”. “Detecting LLMs” doesn’t get you ahead of LLMs, it only gets you ahead of the person using them. Why not appreciate an example of concise and on-point self-expression and focus on the usefulness of the content?
Your point about being able to prompt LLMs to sound different is valid, but I'd argue that it somewhat misses the point (although largely because the point isn't being made precisely). If an LLM-generated blog post were actually crafted with care and intent, it would certainly be possible to make it less obvious, but what people are likely actually criticizing is content produced in what I'll call "default ChatGPT" style, which overuses the stylistic elements that get brought up. The extreme density of certain patterns is a signal that the content might have been generated and published without much attention to detail. There was already a huge amount of content out there even before generating it with LLMs became mainstream, so people will necessarily use heuristics to figure out if something is worth their time. The heuristic "heavy use of default ChatGPT style" is useful if it correlates with the more fundamental issues that the top-level comment of this thread points out, and it's clear that there's a sizable contingent of people who have experienced that this is the case.