zlacker

[parent] [thread] 40 comments
1. anonym+(OP)[view] [source] 2026-01-20 23:25:15
The first thing I have to point out is that this entire article is clearly LLM-generated from start to finish.

The second thing I have to point out is that bug bounty programs are inundated with garbage from people who don't know anything about programming and just blindly trust whatever the LLM says. We even have the 'author' reproducing this blind reinforcement in the article: "Tested Jan 2026. Confirmed working."

The third thing I have to point out is that the response from Valve is not actually shown. We, the reader, are treated to an LLM-generated paraphrase of something they may or may not have actually said.

Is it possible this issue is real and that Valve responded the way they did? Perhaps, but the article alone leaves me extremely skeptical based on past experiences with LLM-generated bug bounty reports.

replies(5): >>gruez+j1 >>xmrcat+Z2 >>metano+v3 >>Someon+z4 >>foxgla+N6
2. gruez+j1[view] [source] 2026-01-20 23:32:48
>>anonym+(OP)
>The first thing I have to point out is that this entire article is clearly LLM-generated from start to finish.

Is your LLM detector on a hair trigger? At best the headings seem like LLM, but the rest doesn't look LLM generated.

replies(2): >>tim-kt+w1 >>jychan+l3
3. tim-kt+w1[view] [source] [discussion] 2026-01-20 23:35:08
>>gruez+j1
It does for me too. Especially the short parts with headings, the bold sentences in their own paragraphs, and formulations like "X isn't just... it's Y".
replies(2): >>hamste+E5 >>virapt+t6
4. xmrcat+Z2[view] [source] 2026-01-20 23:43:03
>>anonym+(OP)
here you go https://i.ibb.co/39GRMySs/png.png
replies(2): >>gpm+e3 >>embedd+p3
5. gpm+e3[view] [source] [discussion] 2026-01-20 23:45:11
>>xmrcat+Z2
Am I misunderstanding, or is that HackerOne staff - not Valve staff - marking it as "not a security vulnerability", not "won't fix"?
replies(2): >>meibo+Y3 >>gruez+f5
6. jychan+l3[view] [source] [discussion] 2026-01-20 23:46:17
>>gruez+j1
You probably need to improve your internal LLM detector then. This obviously reads as LLM generated text.

- "This isn't just a "status" bug. It's a behavioral tracker."

- "It essentially xxxxx, making yyyyyy."

- As you mentioned, the headings

- A lack of compound sentences outside the "x, but y" format.

This is clearly LLM generated text, maybe just lightly edited to remove some em dashes and stuff like that.

After you read code for a while, you start to figure out the "smell" of who wrote what code. It's the same for any other writing. I was literally reading a New Yorker article before this, and this is the first HN article I just opened today; the writing difference is jarring. It's very easy to smell LLM generated text after reading a few non-LLM articles.

replies(2): >>wrs+K3 >>scratc+A4
7. embedd+p3[view] [source] [discussion] 2026-01-20 23:46:51
>>xmrcat+Z2
That sounds to me like they're acknowledging that the feature doesn't work as advertised ("may not align with user expectations"), but also that it was reported as an exploit/security vulnerability when it's actually a privacy leak. Maybe HackerOne isn't the right channel for reporting those issues?

Certainly, public pressure is another way :)

8. metano+v3[view] [source] 2026-01-20 23:47:27
>>anonym+(OP)
Spending months dealing with folks attempting to blackmail us over ridiculous non-issues has pretty much killed any sympathy I had for bug bounty hunters.
9. wrs+K3[view] [source] [discussion] 2026-01-20 23:49:10
>>jychan+l3
Just stop already with the LLM witch-hunt. Your personal LLM vibes don't equate to "obviously LLM generated".
replies(1): >>anonym+I4
10. meibo+Y3[view] [source] [discussion] 2026-01-20 23:51:14
>>gpm+e3
No, you are correct; that is a HackerOne employee filtering the report before someone at Valve looks at it. A lot of companies have this set up, and it's not great.

I would be surprised if responsible Valve staff would agree that this is not something they should fix at some point.

replies(1): >>virapt+e7
11. Someon+z4[view] [source] 2026-01-20 23:55:29
>>anonym+(OP)
I see a lot of these "this is LLM" comments, but they rarely add value, they sidetrack the discussion, and they appear to conflict directly with several of HN's comment guidelines (at least by my reading).

I think raising that the raw Valve response wasn't provided is a valid, and correct, point to raise.

The problem is that that valid point is surrounded by what seems to be a character attack based on little evidence, one that mirrors many of these "LLM witch-hunt" comments.

Should HN's guidelines be updated to directly call out this stuff as unconstructive? Pointing out the quality/facts of an article is one thing, calling out suspected tool usage without even evidence is quite another.

replies(2): >>anonym+m5 >>krapp+k6
12. scratc+A4[view] [source] [discussion] 2026-01-20 23:55:31
>>jychan+l3
What's frustrating is the author's comments here in this thread are clearly LLM text as well. Why even bother to have a conversation if our replies are just being piped into ChatGPT??
replies(2): >>gruez+L7 >>saghm+BM
13. anonym+I4[view] [source] [discussion] 2026-01-20 23:56:21
>>wrs+K3
My "LLM witch-hunt" got the prompter to reveal the reply they received, which we now learn is neither from Valve nor says "Won't Fix", but rather deems it not a security exploit by HackerOne's definition. It is more important than ever before to be critical of the content you consume rather than blindly believing everything you read on the internet. Learning to detect LLM writing, which represents a new, major channel of misinformation, is one aspect of that.
replies(2): >>foxgla+G7 >>wrs+bi2
14. gruez+f5[view] [source] [discussion] 2026-01-20 23:59:44
>>gpm+e3
You're right, but in this case I think some narrative liberty was justified, especially since Valve basically delegated triaging bug reports to HackerOne, a relationship that might not be immediately obvious to some readers. Suppose a nightclub contracts its bouncers from some security firm. You get kicked out by the contracted security guard. I think most people would consider it fair to characterize this situation as "the nightclub kicked me out" in a review or whatever.
replies(1): >>gpm+I7
15. anonym+m5[view] [source] [discussion] 2026-01-21 00:00:29
>>Someon+z4
Counterproposal: Let's update HN's guidelines to ban blatant misinformation generated by a narrative storyteller spambot. My experience using HN would be significantly better if these threads were killed and repeat offenders banned.
replies(2): >>gruez+B6 >>sublin+sb
16. hamste+E5[view] [source] [discussion] 2026-01-21 00:02:29
>>tim-kt+w1
Imagine being a person like me who has always expressed himself like that. Using em dashes, too.

LLMs didn't randomly invent their own unique style; they learned it from books. This is just how people write when they get slightly more literate than today's texting-era kids.

And these suspicions are in vain even if they happen to be right this one time. LLMs are champions of copying styles; there is no problem asking one to slap Gen Z slang all over a post and finish with the phrase “I literally can’t! <sad-smiley>”. “Detecting LLMs” doesn’t get you ahead of LLMs, it only gets you ahead of the person using them. Why not appreciate an example of concise and on-point self-expression and focus on the usefulness of the content?

replies(2): >>saghm+sO >>tim-kt+YX
17. krapp+k6[view] [source] [discussion] 2026-01-21 00:06:45
>>Someon+z4
LLM generated comments aren't allowed on HN[0]. Period.

If the other instances where HN users quote the guidelines or tone-police each other are allowed, then calling out generated content should be allowed too.

It's constructive to do so because there is obvious and constant pressure to normalize the use of LLM generated content on this forum as there is everywhere else in our society. For all its faults and to its credit Hacker News is and should remain a place where human beings talk to other human beings. If we don't push back against this then HN will become nothing but bots posting and talking to other bots.

[0]>>45077654

replies(1): >>Someon+Y9
18. virapt+t6[view] [source] [discussion] 2026-01-21 00:08:09
>>tim-kt+w1
In other words, this website uses headings for sections, doesn't ramble, and has a single line of emphasis where you'd expect it. I wonder what style we'll have to adopt soon to avoid the LLM witch-hunt - a live stream-of-consciousness rant, with transcript and typos?
replies(2): >>snowmo+uW >>tim-kt+jX
19. gruez+B6[view] [source] [discussion] 2026-01-21 00:09:04
>>anonym+m5
>Counterproposal: Let's update HN's guidelines to ban blatant misinformation generated by a narrative storyteller spambot.

This will inevitably get abused to shut down dissent. When there's something people vehemently disagree with, detractors come out of the woodwork to nitpick every single flaw. Find one inconsistency in a blog post about Gaza/ICE/covid? Well all you need to do is also find a LLM tell, like "it's not x, it's y", or an out of place emoji and you can invoke the "misinformation generated by a narrative storyteller spambot" excuse. It's like the fishing expedition for Lisa Cook, but for HN posts.

20. foxgla+N6[view] [source] 2026-01-21 00:10:29
>>anonym+(OP)
Stop worrying about whether articles are written by LLM or not and judge them by their content or provenance to sources that you can justifiably trust. If you weren't doing that before LLMs then you were getting fooled by humans writing incompetent or deceptive articles too. People have good reasons for using LLMs to write for them. If they wrote it themselves, it might cause you to judge them as being a teenager, uneducated, foreign, or whatever other unreliable proxies you use for trust.

Your point about Valve's response is valid though.

21. virapt+e7[view] [source] [discussion] 2026-01-21 00:12:25
>>meibo+Y3
It's still on Valve though. They chose to delegate this and H1 basically becomes their voice here. I wish it was made more clear, but I don't think it's wrong.
22. foxgla+G7[view] [source] [discussion] 2026-01-21 00:15:48
>>anonym+I4
Do you have any evidence that your witch hunt caused him to show that? It could have simply been your pointing out that Valve's response wasn't shown in the article. No witch-hunts needed.
23. gpm+I7[view] [source] [discussion] 2026-01-21 00:16:11
>>gruez+f5
It doesn't look to me like Valve delegated triaging bug reports, though - rather, triaging security reports. It seems fair to me that the security reporting vendor triaged this as not a security issue. It feels like saying "the wedding venue kicked me out" when actually the third-party bartender just cut you off.
replies(1): >>gruez+P9
24. gruez+L7[view] [source] [discussion] 2026-01-21 00:16:15
>>scratc+A4
>What's frustrating is the author's comments here in this thread are clearly LLM text as well

Again, clearly? I can see how people might be tipped off at the blog post because of the headings (and apparently the it's not x, it's y pattern), but I can't see anything in the comments that would make me think it was "clearly" LLM-generated.

replies(1): >>scratc+hl
25. gruez+P9[view] [source] [discussion] 2026-01-21 00:32:08
>>gpm+I7
>It doesn't look to me like Valve delegated triaging bug reports though, rather triaging security reports.

That was a typo on my side, should be "security".

>It seems fair to me that the security reporter vendor triaged this as not a security report. It feels like saying "the wedding venue kicked me out" when actually the third party bartender just cut you off.

For all intents and purposes, getting your report marked as "informative" or whatever is the same as your report being rejected. To claim otherwise is just playing word games, like "it's not a bug, it's a feature". That's not to say that the OP is objectively correct that it's a security issue, but for the purposes of this argument what OP wrote (i.e. 'Valve: "WontFix"' and 'Valve closed it as "Informative."') is approximately correct. If you contact a company to report a bug, and that company routes it to some third-party support contractor (Microsoft does this, I think), and the support contractor replies "not a bug, won't fix", it's fair to characterize that as "[company] rejected my bug report!", even if the person who did it was some third-party contractor.

replies(1): >>anonym+qa
26. Someon+Y9[view] [source] [discussion] 2026-01-21 00:34:01
>>krapp+k6
The problem is that people cannot prove one way or the other that things are LLM generated, so it is just a baseless witch hunt.

Things should be judged for their quality, and comments should try to contribute positively to the discussion.

"I suspect they're a witch" isn't constructive nor makes HN a better place.

replies(1): >>krapp+Sb
27. anonym+qa[view] [source] [discussion] 2026-01-21 00:37:19
>>gruez+P9
> If you contact a company to report a bug, and that company routes it to some third party support contractor

That is not what happened, though. You can contact Valve/Steam directly. They specifically went to the third-party vendor, because the third-party vendor offers a platform to give them credit and pay them for finding security exploits. It is not the responsibility of the third-party vendor to manage all bug reports.

replies(1): >>gruez+Me
28. sublin+sb[view] [source] [discussion] 2026-01-21 00:47:05
>>anonym+m5
The constant accusations that everything is written by bots is itself a type of abuse and misinformation.
29. krapp+Sb[view] [source] [discussion] 2026-01-21 00:51:06
>>Someon+Y9
It isn't a baseless witch hunt if the witches are real.

Creating a social stigma against the use of LLMs is constructive and necessary. It's no different than HN tone policing humor, because allowing humor would turn HN into Reddit.

replies(1): >>Someon+tz
30. gruez+Me[view] [source] [discussion] 2026-01-21 01:15:16
>>anonym+qa
>They specifically went to the third-party vendor, because the third-party vendor offers a platform to give them credit and pay them for finding security exploits. It is not the responsibility of the third-party vendor to manage all bug reports.

I don't know, the wording on their site suggests HackerOne is the primary place to report security issues, not "if you want to get paid use HackerOne, otherwise email us directly".

>For issues with Steam or with Valve hardware products, please visit HackerOne — https://hackerone.com/valve. Our guidelines for responsible disclosure are also available through that program.

https://www.valvesoftware.com/en/security

31. scratc+hl[view] [source] [discussion] 2026-01-21 02:18:33
>>gruez+L7
Honestly, I can't point out a specific giveaway, but if you've interacted with LLMs enough you can simply tell. It's kind of like recognizing someone's voice.

One way of describing it is that I've heard the exact same argument/paragraph structure and sentence structure many times with different words swapped in. When you see this in almost every sentence, it becomes a lot more obvious. Similar to how if you read a huge amount of one author, you will likely be able to pick their work out of a lineup. Having read hundreds of thousands of words of LLM generated text, I have a strong understanding of the ChatGPT style of writing.

32. Someon+tz[view] [source] [discussion] 2026-01-21 04:51:07
>>krapp+Sb
How is randomly branding people without knowing "constructive and necessary"? It seems completely self-defeating: you're going to make the accusations meaningless, because if everything is "LLM" then nothing is.
replies(1): >>saghm+yL
33. saghm+yL[view] [source] [discussion] 2026-01-21 06:55:03
>>Someon+tz
I get the point you're trying to make, but it's worth pointing out that the entire point is that it's not people getting branded but nebulous online entities that may or may not be people. It's a valid criticism that the accuracy of these claims is not measurable, but I think it's equally true that we are no longer in a world where we can be sure that no content like this is from an LLM either. It's not at all obvious to me that the assumption that everything is from a human is more accurate than the aggregate set of claims of LLMs, so describing it as "branding people" seems like it's jumping to conclusions in the same way.
34. saghm+BM[view] [source] [discussion] 2026-01-21 07:06:17
>>scratc+A4
There have been a few times I've had interactions with people on other sites that have been clearly from LLMs. At least one of the times, it turned out to be a non-native English speaker who needed the help to be able to converse with me, and it turned out to be a worthwhile conversation that I don't think would have been possible otherwise. Sometimes the utility of the conversation can outweigh the awkwardness of how it's conveyed.

That said, I do think it would be better to be up front about this sort of thing, and that means it's not really suitable for use on a site like HN, where it's against the rules.

replies(1): >>scratc+ui2
35. saghm+sO[view] [source] [discussion] 2026-01-21 07:22:31
>>hamste+E5
The most jarring point that they mentioned, having sudden one-off boldfaced sentences in their own paragraphs, is not something I had ever seen before LLMs. It's possible that this could be a habit humans have picked up from them and started adding in the middle of other text that similarly evokes all of the other LLM tropes, but it doesn't seem particularly likely.

Your point about being able to prompt LLMs to sound different is valid, but I'd argue that it somewhat misses the point (although largely because the point isn't being made precisely). If an LLM-generated blog post were actually crafted with care and intent, it would certainly be possible to make it less obvious, but what people are likely actually criticizing is content produced in what I'll call "default ChatGPT" style, which overuses the stylistic elements that get brought up. The extreme density of certain patterns is a signal that the content might have been generated and published without much attention to detail. There was already a huge amount of content out there even before generating it with LLMs became mainstream, so people will necessarily use heuristics to figure out if something is worth their time. The heuristic "heavy use of default ChatGPT style" is useful if it correlates with the more fundamental issues that the top-level comment of this thread points out, and it's clear that there's a sizable contingent of people who have experienced that this is the case.

replies(1): >>tim-kt+sY
36. snowmo+uW[view] [source] [discussion] 2026-01-21 08:32:12
>>virapt+t6
"In other words" means paraphrasing, not simply changing the words to something completely different.
37. tim-kt+jX[view] [source] [discussion] 2026-01-21 08:37:38
>>virapt+t6
To me this kind of use of AI (generating the whole article) is equivalent to a low-effort post. I also personally don't like this kind of writing, regardless of whether or not an AI generated it.
38. tim-kt+YX[view] [source] [discussion] 2026-01-21 08:42:22
>>hamste+E5
My comment was not really meant as a criticism (of AI) but more an agreement that I am also confident the post is AI-generated (while the parent comment does not seem to be so confident).

But to add a personal comment or criticism: I don't like this style of writing. If you prompt your AI to write in a better style that's easier on the eyes (and it works), then please, go ahead.

39. tim-kt+sY[view] [source] [discussion] 2026-01-21 08:46:27
>>saghm+sO
> although largely because the point isn't being made precisely

I agree. I wasn't really trying to make a point. But yes, what I am implying is that posts you can immediately recognize as AI are low-effort posts, which are not worth my time.

40. wrs+bi2[view] [source] [discussion] 2026-01-21 16:27:32
>>anonym+I4
I'm not sure how you know you're correctly detecting LLM writing. My own writing has been "detected" because of "obvious" indicators like em-dashes, compound sentences, and even (remember 2024?) using the word "delve", and I assure you I'm 100% human. So the track record of people "learning to detect LLM writing" isn't great in my experience. And I don't see why I should have to change my entirely human writing style because of this.
41. scratc+ui2[view] [source] [discussion] 2026-01-21 16:28:57
>>saghm+BM
I've seen that as well. I think it's still valuable to point out that the text feels like LLM text, so that the person can understand how they are coming across. IMO a better solution is to use a translation tool rather than processing discussions through a general-purpose LLM.

But agreed, to me the primary concern is that there's no disclosure, so it's impossible to know if you're talking to a human using an LLM translator, or just wasting your time talking to an LLM.
