- "This isn't just a "status" bug. It's a behavioral tracker."
- "It essentially xxxxx, making yyyyyy."
- As you mentioned, the headings
- Nearly every compound sentence follows the "x, but y" format.
This is clearly LLM-generated text, maybe just lightly edited to remove some em dashes and the like.

After you read code for a while, you start to recognize the "smell" of who wrote what. It's the same for any other writing. I was literally reading a New Yorker article before this, and this was the first HN article I opened today; the difference in the writing is jarring. It's very easy to smell LLM-generated text after reading a few non-LLM articles.
Again, clearly? I can see how people might be tipped off by the blog post because of the headings (and apparently the "it's not x, it's y" pattern), but I can't see anything in the comments that would make me think it was "clearly" LLM-generated.
One way of describing it is that I've seen the exact same argument, paragraph, and sentence structure many times with different words swapped in. When you see this in almost every sentence, it becomes a lot more obvious. It's similar to how, if you read a huge amount of one author's work, you can likely pick their writing out of a lineup. Having read hundreds of thousands of words of LLM-generated text, I have a strong sense of the ChatGPT style of writing.
That said, I do think it would be better to be up front about this sort of thing, and that means it's not really suitable for use on a site like HN, where it's against the rules.
But agreed, to me the primary concern is that there's no disclosure, so it's impossible to know if you're talking to a human using an LLM translator, or just wasting your time talking to an LLM.