
[return to "Steam "Offline" status leaks exact login timestamps (Valve: Won't Fix)"]
1. anonym+96 2026-01-20 23:25:15
>>xmrcat+(OP)
The first thing I have to point out is that this entire article is clearly LLM-generated from start to finish.

The second thing I have to point out is that bug bounty programs are inundated with garbage from people who don't know anything about programming and just blindly trust whatever the LLM says. We even have the 'author' reproducing this blind reinforcement in the article: "Tested Jan 2026. Confirmed working."

The third thing I have to point out is that the response from Valve is not actually shown. We, the readers, are treated to an LLM-generated paraphrase of something Valve may or may not have actually said.

Is it possible this issue is real and that Valve responded the way they did? Perhaps, but the article alone leaves me extremely skeptical based on past experiences with LLM-generated bug bounty reports.

2. gruez+s7 2026-01-20 23:32:48
>>anonym+96
>The first thing I have to point out is that this entire article is clearly LLM-generated from start to finish.

Is your LLM detector on a hair trigger? At most the headings seem like LLM; the rest doesn't look LLM-generated.

3. jychan+u9 2026-01-20 23:46:17
>>gruez+s7
You probably need to recalibrate your internal LLM detector, then. This obviously reads as LLM-generated text.

- "This isn't just a "status" bug. It's a behavioral tracker."

- "It essentially xxxxx, making yyyyyy."

- As you mentioned, the headings

- A lack of compound sentences outside the "x, but y" format.

This is clearly LLM-generated text, maybe just lightly edited to remove some em dashes and stuff like that.

After you read code for a while, you start to pick up the "smell" of who wrote what. It's the same with any other kind of writing. I was literally reading a New Yorker article before this, and this is the first HN article I've opened today; the difference in writing is jarring. It's very easy to smell LLM-generated text after reading a few non-LLM articles.

4. scratc+Ja 2026-01-20 23:55:31
>>jychan+u9
What's frustrating is that the author's comments in this thread are clearly LLM text as well. Why even bother having a conversation if our replies are just being piped into ChatGPT?
5. saghm+KS 2026-01-21 07:06:17
>>scratc+Ja
There have been a few times when my interactions with people on other sites were clearly coming from an LLM. In at least one case, the other person turned out to be a non-native English speaker who needed the help to converse with me, and the conversation was worthwhile in a way I don't think would have been possible otherwise. Sometimes the utility of the conversation can outweigh the awkwardness of how it's conveyed.

That said, I do think it would be better to be up front about this sort of thing, and that means it's not really suitable for use on a site like HN, where it's against the rules.

6. scratc+Do2 2026-01-21 16:28:57
>>saghm+KS
I've seen that as well. I think it's still valuable to point out that the text feels like LLM text, so the person can understand how they're coming across. IMO a better solution is to use a translation tool rather than piping discussions through a general-purpose LLM.

But agreed, to me the primary concern is that there's no disclosure, so it's impossible to know whether you're talking to a human using an LLM translator or just wasting your time talking to an LLM.
