I support Hacker News moderating itself however it chooses. However, if we are looking at it as a moderation model for large, open, non-editorial platforms (YouTube, Facebook) -- which I believe should all be covered under public accommodation law -- it clearly fails. And even when we are looking at ostensibly neutral, publicly-oriented sites like newspaper comment boards, it fails.
Hacker News moderation is not appealable, not auditable, does not have bright line rules, and there are no due process rights. It simply does not respect individual rights.
So while this moderation method succeeds for Hacker News, and perhaps should become the model for small private sites, we should not try to scale it to internet-sized companies. Platform companies (Google, Facebook, Twitter) and backbone companies (ISPs, Cloudflare!) need a different set of rules geared towards protecting individual rights and freedoms instead of protecting a community.
What you are asking for would take significantly more resources. I appreciate what you are saying, but all I ask is that a site be consistent. If it is consistently moderated, I as a user can vote with my feet (or my clicks, in this case) based on whether I approve or disapprove of the job that is being done.
Isn't every community (barring a few wild wests) like that? On Usenet, someone with an unpopular opinion would end up in people's kill files. On IRC, someone with an unpopular opinion would end up on people's ignore lists or would get kicked off the channel or k-lined off the server. Except for the kick and k-line, the effect of a kill file or ignore is akin to a global shadow ban.
The solution to the problem you mentioned is that unpopular opinions with merit will eventually become popular enough to be adopted, whereas unpopular opinions without merit end up in the cesspits of the Internet. Because somewhere on the Internet, any person can spout their unpopular opinion. The question is, who reads it? Is it so much different from a shadow ban?
Disclaimer: I have an axe to grind, having had my account with 9k+ karma, dating back to 2009, banned for whatever reason. Despite that, I am trying my best to look at the situation objectively, and I still do not like it.
It is popular to compare reddit with HN, and my take is this: reddit is like a part of the universe where stars are still being born (though maybe it is moving a bit past this), and there is life and dynamism. HN, on the other hand, seems to be inching closer and closer to the https://en.wikipedia.org/wiki/Heat_death_of_the_universe
I have experience with that, given I got my early karma countering many popular people here who had downvote mobs. I had no idea I was doing that, either. I'd never seen their names before; I just saw some misinformation that I'd correct. Dan hints in the article at exactly what worked for me: just delivering the information in a logical way, with sources, in a manner that invited people, often engineers, to work out what was true for themselves. The folks with mobs used rhetoric and argument from authority. I used evidence. Eventually, most of the grey comments went dark again, with some folks in the mob going grey. They mobbed less often, too. I was also told about a pattern where some hot-heads read threads early, with some calmer folks (maybe older or more experienced) coming in later. That happened with most of mine, too.
So, you can't stop the fact that they'll show up. You can diminish their power by simply remaining civil and informative with evidence backing up your claims. Also, I try to use links that maintain the same qualities. If it's a political topic, it won't help to link to a site that's 100% biased in a specific direction. Biased or rhetorical sources likewise get dismissed. Fortunately, most of my arguments were technical. Many good sources for that.
I'll also add that it helps to remember that the mobs on these sites represent cliques on these sites, not people in general. Most people I've met aren't much like folks on HN, especially the aggressive ones. It helps to remind myself that what's going on in these online forums might just be representative of their culture, attachments, traditions, etc. I don't internalize it. I still introspect about it, since there are many times when I can learn something or improve myself. Doing this is a tough skill to develop. I think it's a necessity on the Internet, given how much negativity is out there, even waves of it at once. It will still tax you, but a lot less than before.
"Hacker News moderation is not appealable, not auditable, does not have bright line rules, and there are no due process rights. It simply does not respect individual rights."
We had more of that on Lobsters. That was part of the founder's experiment with the site. It was initially really different, mostly due to the vetting process we did for invites, with strict controls on quality and private messages to people. Eventually, we did a mass invite that brought in all kinds of people. Many of them aren't doing vetting so much as just telling people about a site they like. The result is that the site is now more like Hacker News than it was.
There are differences, for sure. I just think a lot of the problems are inherent to bringing in a lot of people from many different places and perspectives onto a tech site with open-ended discussions. They covered a lot of this in the original article, though, so I won't repeat it. I just think more rules, or even more accountability, won't change it.
If you want the latter, turn on showdead to find that most of what they moderate away is garbage. There's some filter bubble on specific political topics that aren't popular here, but those are a tiny percent of the comments. Based on volume, I'd say moderation here is pretty light-handed in general. I mean, if you look at New repeatedly over a day, you'd question how the heck some control-freak moderators could even keep up with it at all. I stopped looking at New more than once a day since I didn't have the time for it.
Still, as the error keeps coming up, I will repeat myself (I apologize to people trying to read the whole thread, but this is the main point): for the big platforms (YouTube, Facebook, Twitter, Reddit, etc.) and backbone providers (ISPs, Cloudflare), we need the right to appeal, public auditing, bright line rules, and due process rights.
Another example where moderation works well is Stack Exchange. I think a Q&A site needs strict moderation to be able to stay on topic.
I think what Google is doing right now is reprehensible, for example. Ghosting content because it is controversial, and therefore coincidentally bad for advertising, leaves a very bad taste.
Exactly. HN takes the tyrannical approach to moderation: We're right and you're wrong. If you disagree, too bad.
The mob is happy to clean up any wrongthink the moderators happen to miss.
> this moderation method succeeds for Hacker News
I think that's a pretty generous statement. The quality of discussions around here has declined substantially over the past few years. In many ways it's even worse than Reddit.
I'm not sure I'd agree with this. An appeal is as easy as sending them an email. In my experience they're more than willing to hear you out.
Your solution is a bit lacking, I believe. "If it's not popular, it has no merit, because if it had merit it would be popular". There's also no telling whether some unpopular opinion you'd consider without merit today will become popular tomorrow (and, if history teaches us anything, we've been generally bad at predicting which will become popular, which will remain popular and which will fall out of favor), so it's somewhat silly to make hard judgements.
In particular, Congressional action around this could probably survive judicial review if it were to define broad due process rights around speech censorship for explicitly defined "public forums." The only thing that SCOTUS loves more than the 1st Amendment is the 14th.
No way to know until we fight it out.
My reasoning (solution) is a rule of thumb, not a law. Every group of people has a blind spot. There is no perfect solution; however, I believe the solution as presented is the one with the fewest casualties. If you know a better one, I'm all ears.
Dang doesn't allow dissent in the comments, he actively tries to reduce it, and it's a net negative for this community.
Healthy conflict is good, but I've participated in healthy conflict here and was stopped by dang because of it.
The backbone companies point is the more important one you make, imho, and is why we need a discussion about what the modern public square really is, and about how the level of corporate dominance of media and the abuse of the third-party doctrine allows the silencing of dissent. So a private company can censor whoever they want, right? But how far does that go? First it starts with Cloudflare, and eventually it goes down to the ISP level... and that seems like a very dangerous slippery slope to me.
As far as I'm aware, dang and sctb are fairly reasonable if you email them, and you can see what they're doing by just clicking on their account history.
I'm fine with using a system like that for starters but ultimately let benevolent dictators overrule majority decisions. My biggest problem with your previous post was the implication that whatever isn't popular doesn't have merit. The relation between the two is weak, that's what I wanted to express.
First of all, I'm someone who at times reads a lot of the discussions, and I see each and every casualty by default because I browse with showdead on. I also see it as part of being a good netizen to help with moderation (and vouching or flagging is part of that), especially for non-commercial endeavors (which this place arguably is or is not, depending on your viewpoint; for me it is more akin to a .org, as it's not the purely commercial wing of YC). Heck, I even sometimes check out shadowbanned users' posts. I'm weird like that.
Censorship, in my eyes, can only be enacted by a government. There must be some less powerful word which fits the bill.
Every community [website] has its echo chamber, because every group of people has one.
Now that I've put what you wrote into a (IMO) more accurate context, which I felt was necessary, I'm left to ask you: what is your proposed alternative?
> I'm fine with using a system like that for starters but ultimately let benevolent dictators overrule majority decisions.
We've got two who can.
I've been on websites where less intelligent people became the benevolent dictator, people who don't see their blind spots. More moderators isn't necessarily better. Also, ask yourself: were websites with a lot of moderators, such as K5 or Slashdot or Digg or Reddit, necessarily better? Are websites with no moderation whatsoever better?
I actually have quite a bunch of "radical" viewpoints myself, and I do not feel like I cannot express myself here. Yes, at times I get upvoted or downvoted in ways that surprise me, both ways. In the case of the latter, I always try to reflect on what I could've done better. In fact, in every conflict I have, I try to reflect on what my part in the conflict is.
Yep. After eight years on the site I finally had a comment removed, and my response to dang to discuss it was completely ignored. Made me feel dumb for even trying.
In other words: People who think HN moderation is all fine and dandy only believe so because they've never had the audacity to post an unpopular fact or opinion.
Why do you say that? I would think that it could be enacted by any group powerful enough to suppress speech in some way.
It was the Church that made the Index Librorum Prohibitorum. And that was in an era when it was especially weak compared to the absolutist governments of the time.
It's also possible that I saw it and was too exhausted to look back through all the flagged comments in the thread, identify which ones were explaining how AMP works, and see if they had been flagged correctly. That takes a ton of energy, which is not always available when the rest of the site is clamoring for attention and in varying degrees of onfireness. One thing you can do to increase the odds of getting a specific response is to include specific links to the post(s) you're worried about—that makes it an order of magnitude easier. I certainly appreciate your intention to defend fellow users who are being mistreated.
But I think if I had seen your comment I would have replied at least to say that I believe you that your intention wasn't to downvote-bait.
I say this as a person who feels that his political opinions have been treated with disrespect by the site’s community and its moderators. I fervently believe that the moderators are doing their best to be impartial. And I also see people on the opposite side of the political spectrum from me who have the exact same complaints. When I look at it that way, I realize I literally can’t ask for more from the mods.
As far as your making the distinction between platform companies and backbone companies, I think you have it completely right. I detest racist shit on the Internet, but ultimately no good can come from driving it underground. I think it's much better for us to be able to see it and confront it in time. Plus, my political affiliation, which is the same as Abraham Lincoln's, is considered by many of the powers that be to be inherently racist.
When the platform companies start trying to decide who’s wrong and who’s right, they are forced to use extra-constitutional means. Not good.
We only use shadowbanning when accounts are new and show evidence of spamming or trolling, or when there's evidence that the user has been serially creating accounts to abuse HN. It's possible we got it wrong in your case, but again, we can't correct mistakes if people won't tell us about them.
Where did I do that?
That's a complete contradiction of the explanation you gave at the time. And yes, I asked what had happened when I noticed the account was shadowbanned, and your response was:
----------
Hacker News <hn@ycombinator.com> Aug 31, 2018, 10:43 PM to me
Political/ideological flaming; unsubstantive comments; addressing others aggressively. Example: https://news.ycombinator.com/item?id=17358383. That's unacceptable and bannable in its own right—and you did a lot of other things along those lines.
You can turn this around by doing the opposite: (1) become less inflammatory, not more, when posting about a divisive subject; (2) make sure your comments are thoughtful; (3) be extra respectful.
Daniel
-------
(I especially liked that the example comment was from three months earlier. Why didn't I immediately think that far back?!)
Of course, despite my multiple follow-up questions, you never bothered to reply again. I'm sure that now you will find the motivation to give an extensive public explanation, complete with links, of exactly how you really meant that I was "new and spamming or trolling" or "serially creating accounts to abuse HN". It would never work to, say, reply to comments that were 'unacceptable and bannable in its own right' to say that. Or to follow your previous public explanations of moderation policy, such as
> When we’re banning an established account, though, we post a comment saying so. https://drewdevault.com/2017/09/13/Analyzing-HN.html
I can see why you are angry that we shadowbanned you, because that account went on to make other comments that were fine for HN. Indeed, quite a few were vouched for by other users. That's evidence of not trolling, and that sort of account is not the kind we shadowban—it's the kind where we post moderation replies, and tell people if we ban them. But those later comments didn't exist yet when I shadowbanned you.
Here's the information you need if you want to understand why we do things this way. HN gets tons of new accounts that break the site guidelines and in fact are created for that purpose, which is what I mean by trolling. We can't reply to them all, ask them to follow the guidelines and patiently explain where they're going wrong. If we tried, we'd do nothing else all day—or rather would go mad before getting there. Many of these users know perfectly well what the site guidelines are and have no desire to use HN as intended. If we poured moderation resources there, not only would it not work, it would make things worse, and meanwhile those resources would be unavailable for the rest of the site. For accounts like that, we use shadowbanning, and for the most part that approach works well. But it doesn't work in every case.
When a user emails us about such an account, we have to guess whether they're asking questions in good faith and really want to use the site as intended, or whether there's little hope of convincing them to do so. We don't always guess right. It looks like I guessed wrong in your case. The thing is, though, that when I sent you that detailed explanation of what was wrong with your comments and why we'd banned you, you didn't respond with any indication that you'd received the information and wanted to do something with it. Instead you responded aggressively. I get that you were angry that you had been shadowbanned and didn't know it. But that type of response is correlated with users who go on to be abusers of the site and are not people we can convince to do otherwise, no matter how many replies we give them.
All of this is pattern matching and guesswork. Your original comments and your emails matched patterns that are associated with abuse of the site. With hindsight I see that the pattern matching got it wrong, because your new account has gone on to be (mostly) an ok contributor to HN. (I say 'mostly' because, looking through its history, I still see unsubstantive comments and occasionally worse, but not bannably worse.) But I don't see what I could have done differently in any way that would scale. Our resources are meagre; we're constantly in triage. Patient explanation takes a lot of time and energy—it has taken me an hour to write this so far, and there are many more users demanding explanations than I have hours. Had your emails indicated openness to information or willingness to change, I probably would have replied further. But there are many users who fire multiple angry emails on each reply they get, and we've learned that they are not a good investment, when hundreds of other things and people are clamoring for attention and explanation.
Actually, I do think there is one thing we can do differently that is helpful in such situations: get better at handling anger. There are many users whose every interaction with us is angry and only angry. Often it feels like the intensity of their anger exceeds any of the provocations they're complaining about on HN, even if they're correct on those details. It's as if they're really angry about something else—something more important—but they turn that energy instead onto the extraneous outlet of HN and its moderators, maybe just because it's less important and so in a way safer. I find it difficult to be on the receiving end of this anger. At any moment, there are multiple people doing it. They don't know about each other, so they experience our interactions as individual and demand individual attention, while we experience it as a constant bombardment. It's possible to grow in capacity to handle this—it just requires a lot of personal work. You emailed us a year ago, and I've probably gotten better at this in the last year, so maybe the pattern-matching works a bit better now.
And wow, give the pop psychology a rest. "I just don't understand why condescendingly shitting on people makes them angry, so I've decided they are probably taking out their childhood issues on me." Yeah, it couldn't possibly actually be aimed at you.
Edit: out of curiosity, I looked through your history. All the dead comments back to about 2014 were killed by user flags. Before that, there are a bunch of dead comments but I didn't see signs that moderators had killed them; my guess is that you were banned for a while.