zlacker

[parent] [thread] 9 comments
1. jychan+(OP)[view] [source] 2026-01-20 23:46:17
You probably need to improve your internal LLM detector then. This obviously reads as LLM-generated text.

- "This isn't just a "status" bug. It's a behavioral tracker."

- "It essentially xxxxx, making yyyyyy."

- As you mentioned, the headings

- A lack of compound sentences that deviate from the "x, but y" format.

This is clearly LLM-generated text, maybe just lightly edited to remove some em dashes and stuff like that.

After you read code for a while, you start to figure out the "smell" of who wrote what code. It's the same for any other writing. I was literally reading a New Yorker article before this, and this is the first HN article I opened today; the writing difference is jarring. It's very easy to smell LLM-generated text after reading a few non-LLM articles.

replies(2): >>wrs+p >>scratc+f1
2. wrs+p[view] [source] 2026-01-20 23:49:10
>>jychan+(OP)
Just stop already with the LLM witch-hunt. Your personal LLM vibes don't equate to "obviously LLM-generated".
replies(1): >>anonym+n1
3. scratc+f1[view] [source] 2026-01-20 23:55:31
>>jychan+(OP)
What's frustrating is that the author's comments here in this thread are clearly LLM text as well. Why even bother to have a conversation if our replies are just being piped into ChatGPT??
replies(2): >>gruez+q4 >>saghm+gJ
4. anonym+n1[view] [source] [discussion] 2026-01-20 23:56:21
>>wrs+p
My "LLM witch-hunt" got the prompter to reveal the reply they received, which we now learn is neither from Valve nor says "Won't Fix" but rather deems it not a security exploit by HackerOne's definition. It is more important than ever before to be critical of the content you consume rather than blindly believing everything you read on the internet. Learning to detect LLM writing which represents a new, major channel of misinformation is one aspect of that.
replies(2): >>foxgla+l4 >>wrs+Qe2
5. foxgla+l4[view] [source] [discussion] 2026-01-21 00:15:48
>>anonym+n1
Do you have any evidence that your witch-hunt caused him to show that? It could have simply been your pointing out that Valve's response wasn't shown in the article. No witch-hunts needed.
6. gruez+q4[view] [source] [discussion] 2026-01-21 00:16:15
>>scratc+f1
>What's frustrating is that the author's comments here in this thread are clearly LLM text as well

Again, clearly? I can see how people might be tipped off by the blog post because of the headings (and apparently the "it's not x, it's y" pattern), but I can't see anything in the comments that would make me think they were "clearly" LLM-generated.

replies(1): >>scratc+Wh
7. scratc+Wh[view] [source] [discussion] 2026-01-21 02:18:33
>>gruez+q4
Honestly, I can't point to a specific giveaway, but if you've interacted with LLMs enough you can simply tell. It's kinda like recognizing someone's voice.

One way of describing it is that I've heard the exact same argument/paragraph structure and sentence structure many times with different words swapped in. When you see this in almost every sentence, it becomes a lot more obvious. It's similar to how, if you read a huge amount of one author, you can likely pick their work out of a lineup. Having read hundreds of thousands of words of LLM-generated text, I have a strong understanding of the ChatGPT style of writing.

8. saghm+gJ[view] [source] [discussion] 2026-01-21 07:06:17
>>scratc+f1
There have been a few times I've had interactions with people on other sites that were clearly from LLMs. At least one of those times, it turned out to be a non-native English speaker who needed the help to be able to converse with me, and the result was a worthwhile conversation that I don't think would have been possible otherwise. Sometimes the utility of the conversation can outweigh the awkwardness of how it's conveyed.

That said, I do think it would be better to be up front about this sort of thing, and that means it's not really suitable for use on a site like HN, where it's against the rules.

replies(1): >>scratc+9f2
9. wrs+Qe2[view] [source] [discussion] 2026-01-21 16:27:32
>>anonym+n1
I'm not sure how you know you're correctly detecting LLM writing. My own writing has been "detected" because of "obvious" indicators like em-dashes, compound sentences, and even (remember 2024?) using the word "delve", and I assure you I'm 100% human. So the track record of people "learning to detect LLM writing" isn't great in my experience. And I don't see why I should have to change my entirely human writing style because of this.
10. scratc+9f2[view] [source] [discussion] 2026-01-21 16:28:57
>>saghm+gJ
I've seen that as well. I think it's still valuable to point out that the text feels like LLM text, so that the person can understand how they're coming across. IMO a better solution is to use a translation tool rather than processing discussions through a general-purpose LLM.

But agreed, to me the primary concern is that there's no disclosure, so it's impossible to know if you're talking to a human using an LLM translator, or just wasting your time talking to an LLM.
