But that's selective education. You don't do it for every shadow-moderated comment. The trend is still that shadow moderation more often disadvantages trustful users. Will you acknowledge that harm?
Over 50% of Reddit users have a removed comment in their recent history that they likely were not told about. When shadow moderation is in play, abuse runs rampant among both mods and users. Both find more and more reasons to distrust each other.
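(Aside, for anyone who wants to verify such removals themselves: a mod-removed Reddit comment typically still shows its original text on the author's public profile, while the thread view shows "[removed]". Below is a rough Python sketch of that comparison against Reddit's public JSON listings; the "[removed]" body check is an assumption about current endpoint behaviour, not a tested tool.)

```python
# Rough sketch: find a user's recent comments that read "[removed]" in the
# public thread view even though they still appear on the user's profile.
# The endpoints are Reddit's public JSON listings; the "[removed]" body
# check is an assumption about how removed comments are served.
import requests

HEADERS = {"User-Agent": "removal-check-sketch/0.1"}  # Reddit requires a UA

def recent_comment_ids(user: str, limit: int = 25) -> list[str]:
    """Fullnames (t1_...) of a user's most recent comments, per their profile."""
    url = f"https://www.reddit.com/user/{user}/comments.json"
    resp = requests.get(url, params={"limit": limit}, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return [c["data"]["name"] for c in resp.json()["data"]["children"]]

def shadow_removed(user: str) -> list[str]:
    """Fullnames whose public (non-profile) view shows '[removed]'."""
    ids = recent_comment_ids(user)
    resp = requests.get("https://api.reddit.com/api/info",
                        params={"id": ",".join(ids)}, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return [c["data"]["name"] for c in resp.json()["data"]["children"]
            if c["data"].get("body") == "[removed]"]
```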
How do you think spammers and abusers will exploit those options?
Again: HN works in general, and the historical record strongly confirms this, especially as compared with alternative platforms, Reddit included, which seems to be suffering its own failure modes presently.
A forum should not do things that elbow out trustful people.
That means: don't lie to authors about their actioned content. Forums should show authors the same view that moderators get. If a post has been removed, de-amplified, or otherwise altered in the view presented to other users, then the forum should indicate that to the post's author.
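Concretely, here is a minimal sketch of what "show authors the moderator view" could look like. Every type and name in it is hypothetical, not any real forum's API:

```python
# Minimal sketch of author-transparent rendering. All types and names are
# hypothetical; the point is the invariant, not any real forum's API.
from dataclasses import dataclass
from enum import Enum, auto

class ModAction(Enum):
    NONE = auto()
    REMOVED = auto()
    DEAMPLIFIED = auto()
    ALTERED = auto()

@dataclass
class Post:
    author: str
    body: str
    action: ModAction = ModAction.NONE

def render_for(post: Post, viewer: str, viewer_is_mod: bool) -> dict:
    """Render a post so the author sees the same moderation state a mod does."""
    publicly_visible = post.action != ModAction.REMOVED
    view = {"body": post.body if publicly_visible else "[removed]"}
    # The invariant: authors and moderators get the identical status flag;
    # only unrelated readers see the plain public rendering.
    if viewer_is_mod or viewer == post.author:
        view["moderation_status"] = post.action.name
    return view
```

The design point is the invariant: the author and the moderator read the same status field, so the forum never renders a secretly different world for the author.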
> How do you think spammers and abusers will exploit those options?
Spammers already get around and exploit all of Reddit's secretive measures. Mods regularly post to r/ModSupport about how users have circumvented bans. Now they're asking forums to require ID [1].
Once shadow moderation exists on a forum, spammers can then create their own popular groups that remove truthful content.
Forums that implement shadow moderation are not belling cats. They sharpen cats' claws.
The fact that some spammers overcome some countermeasures in no way demonstrates:
- That all spammers overcome all countermeasures.
- That spam wouldn't be far worse without those countermeasures.[1]
- That removing such blocks and practices would improve overall site quality.
I have long experience online (going on 40 years): I've designed content moderation systems, served in ops roles on multi-million-member social networks, and analysed several extant networks (Google+, Ello, and Hacker News among them), as well as observed what does and doesn't work across many others.
Your quest may be well-intentioned, but it's exceedingly poorly conceived.
________________________________
Notes:
1. This is the eternal conflict of preventive measures and demonstrating efficacy. Proving that adverse circumstances would have occurred in the absence of prophylactic action is of necessity proving a counterfactual. Absent some testing regime (and even then), there's little evidence to provide: the fire that didn't happen, the deaths that didn't occur, the thefts that weren't realised, etc. HN could publish information on total submissions and automated rejections.

There's also the inherent problem of classifying submitters. Even long-lived accounts get banned (search: <https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...>). Content moderation isn't a comic-book superhero saga where the orientation of the good guys and bad guys is obvious. (Great comment on this: <https://news.ycombinator.com/item?id=26619006>.)
Real life is complicated. People are shades of grey, not black or white. They change over time: "Die a hero or live long enough to become a villain." Credentials get co-opted. And for most accounts, courtesy of long-tail distributions, data are exceedingly thin: about half of all HN front-page stories come from accounts with only one submission in the Front Page archive, based on my own analysis of same. They may have a broader submission history, yes, but the same distribution applies there: most submissions come from accounts with painfully thin histories on which to judge them. And that's assuming the tools for doing said judging are developed.
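For the curious, that distribution claim can be checked in a few lines against the public Algolia HN Search API. The sketch below counts, within a sample of front-page stories, how many came from accounts appearing only once; the "front_page" tag is real, but the sample size and pagination here are my own simplification, not the original analysis.

```python
# Sketch of the long-tail check: within a sample of front-page stories,
# what fraction came from accounts with exactly one sampled submission?
# The Algolia HN Search API and its "front_page" tag are real; the sample
# size and pagination here are a simplification.
from collections import Counter

import requests

API = "https://hn.algolia.com/api/v1/search_by_date"

def front_page_authors(pages: int = 10) -> Counter:
    """Count front-page stories per submitter across a sample of pages."""
    authors: Counter = Counter()
    for page in range(pages):
        resp = requests.get(API, params={"tags": "front_page",
                                         "hitsPerPage": 100,
                                         "page": page}, timeout=10)
        resp.raise_for_status()
        for hit in resp.json()["hits"]:
            authors[hit["author"]] += 1
    return authors

if __name__ == "__main__":
    counts = front_page_authors()
    one_off = sum(1 for n in counts.values() if n == 1)
    total = sum(counts.values())
    # Each one-off author contributes exactly one story, so one_off is also
    # the number of sampled stories from single-submission accounts.
    print(f"{one_off}/{total} sampled front-page stories "
          f"({one_off / total:.0%}) came from one-off submitters")
```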
You asked me for an alternative and I gave one.
You yourself have expressed concern over HN silently re-weighting topics [1].
You don't see transparent moderation as a solution to that?
> The fact that some spammers overcome some countermeasures in no way demonstrates...
Once a spammer knows the system, they can generate unlimited amounts of content. When a forum keeps mod actions secret, that secrecy benefits only a handful of people.
We already established that secrecy elbows out trustful people, right? Or do you dispute that? I've answered many of your questions. Please answer this one of mine.
> That removing such blocks and practices would improve overall site quality.
To clarify my own shade of grey, I do not support shadow moderation. I support transparent-to-the-author content moderation. I also support the legal right for forums to implement shadow moderation.
[1] <https://news.ycombinator.com/item?id=36435312>