Basically, if it improves thread quality, I'm for it, and if it degrades thread quality, we should throw the book at it. The nice thing about this position is that comment quality is a function of the comments themselves, and little else.
#810 Constructive: https://xkcd.com/810/
In some ways, this thread sounds like the real first step in the rise of true AI, in a weird, banal-encroachment kind of way.
There’s a tension between thread quality on the one hand and the process of humans debating and learning from each other on the other hand.
I routinely use AI to help me communicate. Like Aaron to my Moses.
Basically I think those two things are synonymous.
The AIs aren't going to take over by force, it'll be because they're just nicer to deal with than real humans. Before long, we'll let AIs govern us, because the leaders we choose for ourselves (e.g. Trump) are so awful that it'll be easier to compromise on an AI.
Before long, we'll all be happy to line up to get installed into Matrix pods.
Alan Turing's paper was quite forward thinking. At the time, most people did not yet consider men and women to be equal (let alone homosexuals).
I don't think it is so important whether a comment is written by a man, a woman, a child, or a <machine>, or some combination thereof. What is important is that the comment stands on its own, and has merit.
Pseudonyms (accounts) do have a role to play here. On HN, an account accrues reputation based on whether its past comments were good or bad. This can help rapidly filter out certain kinds of edge cases and/or bad actors.
A Minimum Required Change to policy might be: Accounts who regularly make false/incorrect comments may need to be downvoted/banned (more) aggressively, where previously we simply assumed they were making mistakes in good faith.
This is not to catch out bots per se, but rather to deal directly with the new failure modes they introduce. This particular approach also happens to be more powerful: it immediately deals with meatpuppets and other ancillary downsides.
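To make the heuristic concrete, here is a minimal sketch of reputation-weighted filtering as described above. Everything here is invented for illustration (the function names, the decay factor, and the thresholds are assumptions); HN's actual moderation is not claimed to work this way.

```python
def trust_score(history, decay=0.9):
    """Exponentially weighted accuracy over an account's past comments.

    history: list of booleans, oldest first; True = comment held up,
    False = comment was flagged as false/incorrect.
    Returns a value in [0, 1]; 0.5 for accounts with no history.
    """
    score, total, w = 0.0, 0.0, 1.0
    for ok in reversed(history):  # most recent comments weigh most
        score += w * (1.0 if ok else 0.0)
        total += w
        w *= decay
    return score / total if total else 0.5


def moderation_action(history, ban_below=0.3, downweight_below=0.6):
    """Map a trust score to an (illustrative) moderation outcome."""
    s = trust_score(history)
    if s < ban_below:
        return "ban"
    if s < downweight_below:
        return "downweight"
    return "ok"
```

The point of the exponential decay is that an account which used to post falsehoods but has since improved can recover, while an account that *regularly* posts false comments stays below the threshold.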
We're in the middle of a bit of an AI revolution right now, and we might come up with better ideas over time too. Possibly we need to revisit our position and adjust every 6 months, or even every 3 months.
[1] https://academic.oup.com/mind/article/LIX/236/433/986238?log...
These days, of course, we use such things as IRC clients, Discord, web browsers, etc., instead of teletypes. If you substitute in these modern technologies, the Imitation Game still applies to much online interaction today.
I've often applied the lessons gleaned from this to my own online interactions with other people. I don't think I ever quite imagined it might start applying directly to <machines>!
For me, the "purpose" of discussion on HN is threefold: to fill the dopamine-addiction niche I've closed off by blocking reddit, twitter, and youtube; to hone my ideas against an audience that is more educated than normal and partially misaligned against my values (I love when the pot gets stirred with stuff we aren't supposed to talk about that much, such as politics and political philosophy, though I try not to be the first one to stir); and occasionally to ask a question that I'd like answered, or just to see what other people think about something.
Do you think there's much "learning from each other" on HN? I'm skeptical that it really happens much on the chat-internet outside of the huge knowledge swaps on Stack Overflow. I typically see confident value statements: "that's why xyz sucks," "that's not how that works," "it wasn't xyz, it was zyx," etc. Are we all doing the "say something wrong on the internet to get more answers" thing to each other? What's the purpose of discussion on HN to you? Why are you here?
The purpose of my comment is that I wanna see what other people think about my reasons for posting, whether others share them, maybe some thoughts on that weird dopamine hit some of us get from posting at each other, and why others are here.
On the contrary. It's precisely when people aren't willing to learn, or to debate respectfully and with an open mind, that thread quality deteriorates.
I would take rude, well-intentioned jerks over kindly speaking devils seeking to deceive me. Have a good one in your pod, though.
If your purpose is a dopamine hit rather than genuine interest (exaggerating here), it might tune you out from the matter at hand.
For me, it's the more eclectic crowd, with a host of opinions yet often still respectful, that I like. Most threads give insights that are lacking in more general, less well-moderated places. You get more interesting, in-depth opinions and knowledge sharing, which is what makes HN great to me.
This feels wrong for a few reasons. Generalized knowledge that an AI can express may be useful. But if it makes things up convincingly, someone who follows its line of thought may be worse off for it. With all the shit humans say, it's still their real human experience, formulated through the prism of their mood, intelligence, and other states and characteristics. It's a reflection of a real world somewhere. AI statements, in this sense, are minced realities cooked into something that may only look like a solid one. Maybe for some communities this would be irrelevant, because participants are expected to judge logically and check all facts, but that would require keeping one's awareness up at all times.
By “real human” I don’t mean that they are better (or worse) in a discussion, only that I am a human too, so their real experience is applicable to me in principle and I could encounter it IRL. The applicability of an AI’s “experience” has yet to be proven, if the notion makes sense at all.
I have no suggestion or solution, I'm just trying to wrap my head around those possibilities.
I think HN is optimizing for the former quality aspects and not the latter. So in that sense, if you can't tell if it's written by a bot, does it matter? (cue Westworld https://www.youtube.com/watch?v=kaahx4hMxmw)
An example of the latter: Since March 2020, there have been many, many discussions on HN about work-from-home versus work-at-office. I myself started working from home at the same time, and articles about working from home started to appear in the media around then, too. But my own experience was a sample of one, and many of the media articles seemed to be based on samples not much larger. It was thus difficult to judge which most people preferred, what the effects on overall productivity, family life, and mental health might be, how employers might respond when the pandemic cooled down, etc. The discussions on HN revealed better and more quickly what the range of experiences with WFH was, which types of people preferred it and which types didn’t, the possible advantages and disadvantages from the point of view of employers, etc.
In contrast, discussions that focus only on general principles—freedom of this versus freedom of that, foo rights versus bar obligations, crypto flim versus fiat flam—yield less of interest, at least to me.
That’s my personal experience and/or anecdote.
Yes, absolutely yes. We use a tool because it "does things better"; we consult the Intelligent because "it is a better input"; we strive towards AGI "to get a better insight".
> supervised
We are all inside an interaction of reciprocal learning, Ofrzeta :)
So as far as the spectrum of things moderation needs to deal with goes, AI contribution to discussions doesn't seem to be the worst of problems, and it doesn't seem like it would be completely unmanageable.
But while AI may not be an unmitigated disaster, you are quite correct that unsupervised AI might not be an unmitigated boon yet either.
Currently if one does want to use an AI to help participate in discussions, I'd recommend one keep a very close eye on it to make sure the activity remains constructive. This seems like common courtesy and common sense at this time. (And accounts who act unwisely should be sanctioned.)
Dare I venture back to 4chan and see how my detoxed brain sees it now...
How is this different from folks getting convinced by "media" people that mass shootings didn't happen, that 9/11 was an inside job, or similar?
The value of a community is in the unpredictability and HN has a good percentage of that, and I can choose to ignore the threads that will be predictable (though it can be fun to read them sometimes).
But in general I agree on its predictability.
I am ultimately motivated to read this site to find smart and interesting things. It is quite inefficient, though. This comment is great, but most comments are not what I am looking for.
If you could spend your time talking to Von Neumann about computing, the input from thousands of random people who know far less than Von Neumann would not be interesting at all.
Not that it’s true. Cause I’d know if I was a bot… unless I was programmed not to notice ;-)