zlacker

[parent] [thread] 18 comments
1. gruez+(OP)[view] [source] 2026-01-20 23:32:48
>The first thing I have to point out is that this entire article is clearly LLM-generated from start to finish.

Is your LLM detector on a hair trigger? At best the headings seem like LLM, but the rest doesn't look LLM-generated.

replies(2): >>tim-kt+d >>jychan+22
2. tim-kt+d[view] [source] 2026-01-20 23:35:08
>>gruez+(OP)
It reads that way to me too. Especially the short sections with headings, the bold sentences in their own paragraphs, and formulations like "X isn't just... it's Y".
replies(2): >>hamste+l4 >>virapt+a5
3. jychan+22[view] [source] 2026-01-20 23:46:17
>>gruez+(OP)
You probably need to improve your internal LLM detector, then. This obviously reads as LLM-generated text.

- "This isn't just a "status" bug. It's a behavioral tracker."

- "It essentially xxxxx, making yyyyyy."

- As you mentioned, the headings

- Compound sentences that almost all follow the "x, but y" format.

This is clearly LLM-generated text, maybe just lightly edited to remove some em dashes and stuff like that.

After you read code for a while, you start to pick up the "smell" of who wrote what. It's the same for any other writing. I was reading a New Yorker article right before this, and this is the first HN article I opened today; the difference in the writing is jarring. It's very easy to smell LLM-generated text after reading a few non-LLM articles.

replies(2): >>wrs+r2 >>scratc+h3
◧◩
4. wrs+r2[view] [source] [discussion] 2026-01-20 23:49:10
>>jychan+22
Just stop already with the LLM witch-hunt. Your personal LLM vibes don't equate to "obviously LLM generated".
replies(1): >>anonym+p3
◧◩
5. scratc+h3[view] [source] [discussion] 2026-01-20 23:55:31
>>jychan+22
What's frustrating is that the author's comments here in this thread are clearly LLM text as well. Why even bother to have a conversation if our replies are just being piped into ChatGPT?
replies(2): >>gruez+s6 >>saghm+iL
◧◩◪
6. anonym+p3[view] [source] [discussion] 2026-01-20 23:56:21
>>wrs+r2
My "LLM witch-hunt" got the prompter to reveal the reply they received, which we now learn is neither from Valve nor says "Won't Fix" but rather deems it not a security exploit by HackerOne's definition. It is more important than ever before to be critical of the content you consume rather than blindly believing everything you read on the internet. Learning to detect LLM writing which represents a new, major channel of misinformation is one aspect of that.
replies(2): >>foxgla+n6 >>wrs+Sg2
◧◩
7. hamste+l4[view] [source] [discussion] 2026-01-21 00:02:29
>>tim-kt+d
Imagine being someone like me, who has always expressed himself like that. Using em dashes, too.

LLMs didn’t randomly invent their own unique style; they learned it from books. This is just how people write once they get slightly more literate than today’s texting-era kids.

And these suspicions are in vain even if they happen to be right this one time. LLMs are champions at copying styles; there is no problem asking one to slap Gen Z slang all over the post and finish with the phrase “I literally can’t! <sad-smiley>”. “Detecting LLMs” doesn’t get you ahead of LLMs, it only gets you ahead of the person using them. Why not appreciate an example of concise, on-point self-expression and focus on the usefulness of the content?

replies(2): >>saghm+9N >>tim-kt+FW
◧◩
8. virapt+a5[view] [source] [discussion] 2026-01-21 00:08:09
>>tim-kt+d
In other words, this website uses headings for sections, doesn't ramble, and has a single line of emphasis where you'd expect it. I wonder what style we'll have to adopt to avoid the LLM witch-hunt: a live stream-of-consciousness rant, complete with transcript and typos?
replies(2): >>snowmo+bV >>tim-kt+0W
◧◩◪◨
9. foxgla+n6[view] [source] [discussion] 2026-01-21 00:15:48
>>anonym+p3
Do you have any evidence that your witch-hunt is what caused him to show that? It could have simply been your pointing out that Valve's response wasn't shown in the article. No witch-hunt needed.
◧◩◪
10. gruez+s6[view] [source] [discussion] 2026-01-21 00:16:15
>>scratc+h3
>What's frustrating is the author's comments here in this thread are clearly LLM text as well

Again, clearly? I can see how people might be tipped off by the blog post because of the headings (and apparently the "it's not x, it's y" pattern), but I can't see anything in the comments that would make me think they were "clearly" LLM-generated.

replies(1): >>scratc+Yj
◧◩◪◨
11. scratc+Yj[view] [source] [discussion] 2026-01-21 02:18:33
>>gruez+s6
Honestly, I can't point to one specific giveaway, but if you've interacted with LLMs enough, you can simply tell. It's kind of like recognizing someone's voice.

One way of describing it: I've seen the exact same argument structure, paragraph structure, and sentence structure many times, with different words swapped in. When you see this in almost every sentence, it becomes a lot more obvious. It's similar to how, if you read a huge amount of one author, you can likely pick their work out of a lineup. Having read hundreds of thousands of words of LLM-generated text, I have a strong feel for the ChatGPT style of writing.

◧◩◪
12. saghm+iL[view] [source] [discussion] 2026-01-21 07:06:17
>>scratc+h3
There have been a few times on other sites when I've interacted with people whose replies were clearly from LLMs. At least once, it turned out to be a non-native English speaker who needed the help to converse with me, and the conversation was worthwhile in a way I don't think would have been possible otherwise. Sometimes the utility of the conversation can outweigh the awkwardness of how it's conveyed.

That said, I do think it would be better to be up front about this sort of thing, and that means it's not really suitable for use on a site like HN, where it's against the rules.

replies(1): >>scratc+bh2
◧◩◪
13. saghm+9N[view] [source] [discussion] 2026-01-21 07:22:31
>>hamste+l4
The most jarring tell they mentioned, sudden one-off boldfaced sentences in their own paragraphs, is not something I had ever seen before LLMs. It's possible this is a habit humans have picked up from them and started adding in the middle of other text that similarly evokes all the other LLM tropes, but it doesn't seem particularly likely.

Your point about being able to prompt LLMs to sound different is valid, but I'd argue it somewhat misses the point (largely because the point isn't being made precisely). If an LLM-generated blog post were actually crafted with care and intent, it would certainly be possible to make it less obvious; what people are really criticizing is content produced in what I'll call the "default ChatGPT" style, which overuses the stylistic elements that keep getting brought up. The extreme density of certain patterns is a signal that the content may have been generated and published without much attention to detail. There was already a huge amount of content out there before generating it with LLMs became mainstream, so people necessarily use heuristics to decide whether something is worth their time. The heuristic "heavy use of default ChatGPT style" is useful if it correlates with the more fundamental issues the top-level comment of this thread points out, and there's clearly a sizable contingent of people who have found that it does.

replies(1): >>tim-kt+9X
◧◩◪
14. snowmo+bV[view] [source] [discussion] 2026-01-21 08:32:12
>>virapt+a5
"In other words" means paraphrasing, not simply changing the words to something completely different.
◧◩◪
15. tim-kt+0W[view] [source] [discussion] 2026-01-21 08:37:38
>>virapt+a5
To me this kind of use of AI (generating the whole article) is equivalent to a low-effort post. I also personally don't like this kind of writing, regardless of whether or not an AI generated it.
◧◩◪
16. tim-kt+FW[view] [source] [discussion] 2026-01-21 08:42:22
>>hamste+l4
My comment wasn't really meant as a criticism (of AI) but more as agreement: I am also confident that the post is AI-generated (while the parent comment does not seem so sure).

But to add a personal comment, or criticism: I don't like this style of writing. If you'd like to prompt your AI to write in a better style that's easier on the eyes (and it works), then please, go ahead.

◧◩◪◨
17. tim-kt+9X[view] [source] [discussion] 2026-01-21 08:46:27
>>saghm+9N
> although largely because the point isn't being made precisely

I agree. I wasn't really trying to make a point. But yes, what I'm implying is that posts you can immediately recognize as AI are low-effort posts, which are not worth my time.

◧◩◪◨
18. wrs+Sg2[view] [source] [discussion] 2026-01-21 16:27:32
>>anonym+p3
I'm not sure how you know you're correctly detecting LLM writing. My own writing has been "detected" because of "obvious" indicators like em-dashes, compound sentences, and even (remember 2024?) using the word "delve", and I assure you I'm 100% human. So the track record of people "learning to detect LLM writing" isn't great in my experience. And I don't see why I should have to change my entirely human writing style because of this.
◧◩◪◨
19. scratc+bh2[view] [source] [discussion] 2026-01-21 16:28:57
>>saghm+iL
I've seen that as well. I think it's still valuable to point out that the text feels like LLM text, so the person can understand how they're coming across. IMO a better solution is to use a translation tool rather than piping the discussion through a general-purpose LLM.

But agreed, to me the primary concern is that there's no disclosure, so it's impossible to know if you're talking to a human using an LLM translator, or just wasting your time talking to an LLM.
