Steam "Offline" status leaks exact login timestamps (Valve: Won't Fix)
1. anonym+96 2026-01-20 23:25:15
>>xmrcat+(OP)
The first thing I have to point out is that this entire article is clearly LLM-generated from start to finish.

The second thing I have to point out is that bug bounty programs are inundated with garbage from people who don't know anything about programming and just blindly trust whatever the LLM says. We even see the 'author' reproducing this blind trust in the article: "Tested Jan 2026. Confirmed working."

The third thing I have to point out is that the response from Valve is not actually shown. We, the readers, are treated to an LLM-generated paraphrase of something they may or may not have actually said.

Is it possible this issue is real and that Valve responded the way they did? Perhaps, but the article alone leaves me extremely skeptical based on past experiences with LLM-generated bug bounty reports.

2. gruez+s7 2026-01-20 23:32:48
>>anonym+96
>The first thing I have to point out is that this entire article is clearly LLM-generated from start to finish.

Is your LLM detector on a hair trigger? At most the headings seem LLM-like; the rest doesn't read as LLM-generated.

3. jychan+u9 2026-01-20 23:46:17
>>gruez+s7
You probably need to improve your internal LLM detector, then. This obviously reads as LLM-generated text.

- "This isn't just a "status" bug. It's a behavioral tracker."

- "It essentially xxxxx, making yyyyyy."

- As you mentioned, the headings

- A lack of compound sentences outside the "x, but y" format.

This is clearly LLM-generated text, maybe just lightly edited to remove some em dashes and stuff like that.

After you read code for a while, you start to recognize the "smell" of who wrote what code. It's the same with any other writing. I was literally reading a New Yorker article before this, and this is the first HN article I opened today; the difference in writing is jarring. It's very easy to smell LLM-generated text after reading a few non-LLM articles.

4. scratc+Ja 2026-01-20 23:55:31
>>jychan+u9
What's frustrating is that the author's comments in this thread are clearly LLM text as well. Why even bother having a conversation if our replies are just being piped into ChatGPT??