My inquiry is motivated by the observation that AI-generated text has become increasingly prevalent in online discourse, and different platforms have adopted varying stances on whether such content is acceptable, encouraged, discouraged, or outright prohibited. Some online communities prefer organic, human-generated discussions to preserve authenticity, while others are more permissive, provided that AI-generated responses align with the spirit and intent of meaningful discourse.
Thus, within the context of HN’s commenting system, does the platform have an explicit policy, a tacit expectation, or any historical precedent regarding whether AI-assisted comments are permissible? If so, are there any specific constraints, recommendations, or guiding principles that users should adhere to when leveraging AI for participation in discussions? Furthermore, if such a policy exists, is it officially documented within HN’s guidelines, or is it more of an unwritten cultural norm that has evolved over time through community moderation and feedback?
I would appreciate any insights on whether this matter has been formally addressed or discussed in past threads, as well as any pointers to relevant resources that shed light on HN’s stance regarding AI-assisted participation.
---
I was swept up in this article and the portrait of Amanda (Barrows) - what a unique and strong person - this city is so lucky to have her.
I want to respond that, unlike some here, I came away with huge empathy, and today's HN snark and frustration accordingly bounced right off me. The public order issues, such as homelessness in the park, have affected me, but what weighs on me more is how to explain the state of the world to my children. I always remind them that each of these people was once a little boy or girl, and that we may be older, but we're still kids inside; nobody dreamt of growing up in this environment.
The compassion shown here, and my own empathy, coupled with Amanda's pragmatic approach, washed over me. The policies and bureaucratic inefficiencies that make solutions slow and ineffective are understandable, but also highly frustrating.
The gap between unhoused individuals' mental state and the requirements they must meet to find a home is very frustrating. The city surely understands the cost of its housing policies and is run by highly pragmatic people, but rules are rules, and some top-down accommodations and medication are needed to help bridge the two.
---
Personally, my opinion hasn't changed here - I think the posted text is a bit better, but I also agree on the uncanny valley issue. A little less brain swelling and I would have been all over the small signals :)
Personally, I find AI and its derivatives extremely helpful when it comes to communication (a booster for the mind!) and use it all the time for translating into other languages and for removing my northern British dialect from communications over in California.
Ever notice that only the stuff you dislike is AI?
a.) You can disagree with someone's view, but editing their words with AI doesn't make them wrong or disingenuous, any more than asking another human to critique your post would. And to imply otherwise is, itself, disingenuous and disruptive.
The exception would be if you thought there was no human involvement in the account at all, in which case, as another commenter noted, the appropriate thing would be to email the mods.
b.) Another way to look at it is, "do you think it would be the top comment if the author didn't solicit feedback and thoughtfully edit their comment?" To which I would say, "who cares? Editing is fair play. Let's talk about our actual points of disagreement."
c.) To be frank, I think this response from you is very telling. I haven't seen you engage at all with the substance of the comment, but you press very hard on this "AI" angle. The commenter has now shown us their pre-AI draft, and it's much the same - I think if you had a good-faith concern that the comment was "manipulated," seeing that draft would satisfy you. Since it hasn't, I infer that your concern is either puritanical ("no AI must ever be used in any way") or that you are attacking the style of the comment when your real issue is its substance.
So the user "searealist" who you're responding to was correct in saying the comment was written by AI. Are we not supposed to call that out when we notice it? It's difficult because it's typically impossible to prove, and most people won't be as honest as the OP was here.
If what "searealist" did here is not acceptable even though he was right, what are we supposed to do? Flag, downvote?
Personally, I do not want to see any LLM-generated content in HN comments, unless it's explicitly identified by the person posting it as a relevant part of some conversation about LLMs themselves.
[1] https://news.ycombinator.com/item?id=43075184
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(see also https://news.ycombinator.com/item?id=22427782 and similar)
We haven't added a specific rule to the guidelines about it (https://news.ycombinator.com/newsguidelines.html) but we may end up having to.
What's tricky is that accusing other commenters of being bots/AIs is, at the same time, a new twist on the "you're a shill/astroturfer/troll/bot/spy" etc. swipe that internet users love to hurl at each other, and which we do have a guideline against (for good reason).
Between those two rules (or quasi-rules) there's a lot of room to get things wrong and I'm sorry I misread the above case!
We don't want LLM-generated comments (or any other kind of generated comments) here. Downvoting or flagging comments that you think are generated is fine. "Calling out" is more of a grey area because there are also a lot of ways to get it wrong and break the site guidelines by doing so. But I got it wrong the opposite way in the above case, so I'm not really sure how to make all this precise yet.
We rate limit accounts when they post too many low-quality comments and/or get involved in flamewars. I'd be happy to take the rate limit off your account, but when I look at your recent comments, I still see too many that match that description.
If you want to build up a track record of using HN as intended for a while, you'd be welcome to email hn@ycombinator.com and we can take a look and hopefully take the rate limit off your account.
For reference, the GGGP comment was generated using this prompt:
    turn this small reply into an extremely verbose, very long comment: Outside of my accusation. Does HN have any guidelines on using chatgpt comments?