zlacker

[parent] [thread] 18 comments
1. searea+(OP)[view] [source] 2025-02-17 02:03:18
This was written by AI.
replies(3): >>II2II+c2 >>dang+kf >>mrlamb+Sg
2. II2II+c2[view] [source] 2025-02-17 02:21:39
>>searea+(OP)
If it takes an AI to display empathy, perhaps we should surrender to the AI overlords.
replies(1): >>throw1+H72
3. dang+kf[view] [source] 2025-02-17 04:19:26
>>searea+(OP)
Please don't do this here.

Edit: I called this wrong - see >>43075184 . Sorry!

replies(2): >>searea+2g >>baumy+Wo2
4. searea+2g[view] [source] [discussion] 2025-02-17 04:25:55
>>dang+kf
Putting aside the particular accusation that I have raised for a moment, I am curious to understand whether Hacker News (HN) has established any formal, informal, or otherwise broadly accepted community guidelines, rules, policies, or best practices regarding the usage of comments generated with the assistance of artificial intelligence, specifically through ChatGPT or similar AI-driven language models.

My inquiry is motivated by the observation that AI-generated text has become increasingly prevalent in online discourse, and different platforms have adopted varying stances on whether such content is acceptable, encouraged, discouraged, or outright prohibited. Some online communities prefer organic, human-generated discussions to preserve authenticity, while others are more permissive, provided that AI-generated responses align with the spirit and intent of meaningful discourse.

Thus, within the context of HN’s commenting system, does the platform have an explicit policy, a tacit expectation, or any historical precedent regarding whether AI-assisted comments are permissible? If so, are there any specific constraints, recommendations, or guiding principles that users should adhere to when leveraging AI for participation in discussions? Furthermore, if such a policy exists, is it officially documented within HN’s guidelines, or is it more of an unwritten cultural norm that has evolved over time through community moderation and feedback?

I would appreciate any insights on whether this matter has been formally addressed or discussed in past threads, as well as any pointers to relevant resources that shed light on HN’s stance regarding AI-assisted participation.

replies(2): >>tqi+2i >>dang+A43
5. mrlamb+Sg[view] [source] 2025-02-17 04:35:21
>>searea+(OP)
In the spirit of tech conversations, here was my original input from my history:

---

I was swept up in this article and the portrait for Amanda (barrows) - what a unique and strong person - this city is soo lucky to have her.

I want to respond that unlike some here, I came away with huge empathy and today's HN snark and frustration bounced off me pretty hard accordingly. The public order issues such as homelessness in the park have impacted me, but more so, how to translate the state of the world to my children. I always remind them that this person was once a little boy / girl and we might be older, but we're still kids inside and nobody dreamt to grow up in this environment.

The compassion and my own empathy shown here coupled with the pragmatic approach shown by Amanda washed over me and the policies and bureaucratic inefficiencies that make solutions slow and inefficient are understandable, but also highly frustrating.

The unhoused individuals and their mental state vs the requirements to find a home are very frustrating - the city surely understands the cost of housing policies and is run by highly pragmatic people, but rules are rules and some top down accommodations and medications are needed to help merge this.

---

I personally don't see my opinions changed here - I think the posted text is a bit better but also agree on the uncanny valley issue. A little less brain swelling and I would have been all over the small signals :)

Personally, I find AI and the derivatives extremely helpful when it comes to communication (a booster for the mind!) and use it all the time when translating into other languages and also removing my northern British dialect from communication over in California.

replies(1): >>butter+pj
6. tqi+2i[view] [source] [discussion] 2025-02-17 04:47:04
>>searea+2g
> My inquiry is motivated by the observation that AI-generated text has become increasingly prevalent in online discourse

You ever notice that only stuff you disliked is AI?

replies(1): >>baumy+1p2
7. butter+pj[view] [source] [discussion] 2025-02-17 05:01:31
>>mrlamb+Sg
A lesson to take from this is, "if a post expresses strong opinions, and you believe AI was involved in its generation, then they probably used AI to edit, not to generate it from whole cloth." A hallmark of ChatGPT is an unwillingness to take a position, and instead to describe what positions it's possible to take. By the time you've prompted it enough to take a strong position, you've probably crossed into "editing" rather than "generating".

You can disagree with someone's view, but editing their words with AI doesn't make them wrong or disingenuous, any more than asking another human to critique your post would. And to imply otherwise is, itself, disingenuous and disruptive.

The exception would be if you thought there was no human involvement in the account at all, in which case, as another commenter noted, the appropriate thing would be to email the mods.

replies(1): >>searea+Vj
8. searea+Vj[view] [source] [discussion] 2025-02-17 05:06:51
>>butter+pj
Do you think this would be the top comment were it not manipulated with AI? I don't think so.
replies(1): >>butter+9n
9. butter+9n[view] [source] [discussion] 2025-02-17 05:40:33
>>searea+Vj
a.) While I can't possibly know, yes, I think there's a very good chance. I think it's the top comment chiefly because it expressed a view that was popular with commenters. It's not like AI is a magic spell that bewitches people into upvoting.

b.) Another way to look at it is, "do you think it would be the top comment if the author didn't solicit feedback and thoughtfully edit their comment?" To which I would say, "who cares? Editing is fair play. Let's talk about our actual points of disagreement."

c.) To be frank I think this response from you is very telling. I haven't seen you engage at all with the substance of the comment. But you press very hard on this "AI" angle. The commenter has now shown us their pre-AI draft, and it's much the same - I think if you had a good-faith concern that it was "manipulated," that would satisfy you. Since it hasn't, I infer that your concern is either puritanical ("no AI must ever be used in any way") or that you are attacking the style of the comment when your real issue is its substance.

10. throw1+H72[view] [source] [discussion] 2025-02-17 19:13:25
>>II2II+c2
AI overlords worked pretty well for the Culture (https://en.wikipedia.org/wiki/Culture_series).
11. baumy+Wo2[view] [source] [discussion] 2025-02-17 21:17:37
>>dang+kf
I have a genuine question for you here, dang. In another comment in this thread [1], the poster admitted that he did indeed generate (or at least rephrase) his comment with AI. I didn't find this surprising, and at least a few other people apparently didn't either. For "uncanny valley" reasons that are difficult to put my finger on, the wording of the comment just jumped out to me as LLM-generated.

So the user "searealist" who you're responding to was correct in saying the comment was written by AI. Are we not supposed to call that out when we notice it? It's difficult because it's typically impossible to prove, and most people won't be as honest as the OP was here.

If what "searealist" did here is not acceptable even though he was right, what are we supposed to do? Flag, downvote?

Personally, I do not want to see any LLM generated content in HN comments, unless it's explicitly identified by the person posting it as a relevant part of some conversation about LLMs themselves.

[1] >>43075184

replies(2): >>searea+Wv2 >>dang+N43
12. baumy+1p2[view] [source] [discussion] 2025-02-17 21:18:06
>>tqi+2i
No, I have not noticed that at all. I see plenty of content that reeks of LLM generation where the ideas expressed in it are ones I agree with. I still don't like to see it.
13. searea+Wv2[view] [source] [discussion] 2025-02-17 22:13:45
>>baumy+Wo2
According to them, these were the edits that AI made: https://www.diffchecker.com/g2uiWItY/
14. dang+A43[view] [source] [discussion] 2025-02-18 04:03:24
>>searea+2g
Yes, generated comments aren't allowed here and that has been the case since before GPTs. HN is for humans.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

(see also >>22427782 and similar)

We haven't added a specific rule to the guidelines about it (https://news.ycombinator.com/newsguidelines.html) but we may end up having to.

What's tricky is that accusing other commenters of being bots/AIs is, at the same time, a new twist on the "you're a shill/astroturfer/troll/bot/spy" etc. swipe that internet users love to hurl at each other, and which we do have a guideline against (for good reason).

Between those two rules (or quasi-rules) there's a lot of room to get things wrong and I'm sorry I misread the above case!

replies(1): >>searea+8d3
15. dang+N43[view] [source] [discussion] 2025-02-18 04:05:33
>>baumy+Wo2
Thanks—I appreciate the correction. I posted more here: >>43085954 .

We don't want LLM-generated comments (or any other kind of generated comments) here. Downvoting or flagging comments that you think are generated is fine. "Calling out" is more of a grey area because there are also a lot of ways to get it wrong and break the site guidelines by doing so. But I got it wrong the opposite way in the above case, so I'm not really sure how to make all this precise yet.

16. searea+8d3[view] [source] [discussion] 2025-02-18 05:55:27
>>dang+A43
Thank you. Maybe you can remove my slow-ban, and we'll call it even: HN often tells me I am posting too fast, which makes me think my account was flagged at some point.
replies(1): >>dang+7k3
17. dang+7k3[view] [source] [discussion] 2025-02-18 07:24:03
>>searea+8d3
That is a separate question, and it would be better sent to hn@ycombinator.com (this is in https://news.ycombinator.com/newsguidelines.html btw). But since you asked here, I'll respond here:

We rate limit accounts when they post too many low-quality comments and/or get involved in flamewars. I'd be happy to take the rate limit off your account, but when I look at your recent comments, I still see too many that match that description:

>>43086219

>>43073768

>>42528111

>>42301901

>>42242363

If you want to build up a track record of using HN as intended for a while, you'd be welcome to email hn@ycombinator.com and we can take a look and hopefully take the rate limit off your account.

replies(1): >>searea+Ul3
18. searea+Ul3[view] [source] [discussion] 2025-02-18 07:40:41
>>dang+7k3
I see. I guess succinctness is punished here. I guess I'll use AI to puff up my comments in the future. Thanks!

For reference, the GGGP comment was generated using this prompt:

    turn this small reply into an extremely verbose, very long 
    comment: Outside of my accusation. Does HN have any 
    guidelines on using chatgpt comments?
replies(1): >>dang+eW4
19. dang+eW4[view] [source] [discussion] 2025-02-18 18:14:10
>>searea+Ul3
Succinctness isn't the issue.