I don't think that makes sense. The supposed spammers can simply check whether their submissions show up when they're not logged in.
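To make that concrete, here is roughly what such a check could look like, sketched in Python. The submission ID is made up, and it only scans the first page of /newest, so treat it as an illustration of the idea rather than a reliable detector:

    import urllib.request

    def appears_on_newest(item_id: int) -> bool:
        """Fetch the public /newest page with no cookies and look for a link to the item."""
        url = "https://news.ycombinator.com/newest"
        req = urllib.request.Request(url, headers={"User-Agent": "visibility-check"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # A shadow-killed submission would simply not be linked from the public listing.
        return f"item?id={item_id}" in html

    print(appears_on_newest(123456))  # hypothetical submission ID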
In fact, such secrecy benefits spammers. Good-faith users never imagine that platforms would secretly action content. So when you look at overall trends, bots, spammers and trolls are winning while genuine users are being pushed aside.
I argued that secrecy benefits trolls in a blog post, but I don't want to spam links to my posts in the comments.
Even Cory Doctorow made this case in "Como is Infosec" [1].
The only problem with Cory's argument is that he points people to the SC Principles [2]. The SCP contain exceptions that allow not notifying users about "spam, phishing or malware." But anything can be considered spam, and transparency-with-exceptions has always been platforms' position. They've always argued they can secretly remove content when it amounts to "spam." Nobody has challenged them on that point. The reality is that platforms relying on secretive moderation play into spammers' hands.
[1] https://doctorow.medium.com/como-is-infosec-307f87004563
I agree that publishing case (1) causes harm (spammers will just use a different domain if they know you’ve blocked theirs). But case (2) is rather different. I don’t think the same justification for lack of transparency exists in this case. And I think shadow-banning the submission in case (2) is not very user-friendly. It would be better to just display an error, e.g. “submissions from this site are blocked because we do not believe it is suitable for HN” (or whatever). A new user might post stuff like (2) out of misunderstanding what the site is about rather than malevolence, so it’s better to educate them directly than to potentially leave them ignorant. Also, while Breitbart is rather obviously garbage, since we don’t know everything in category (2) on the list, there may be some sites on it whose suitability is more debatable or mixed, and whose inappropriateness is less obvious to someone than Breitbart’s (hopefully) is.
Content curation is necessary, but shadow moderation is not helping. When a forum hides the consequences of moderation, it gives users no chance to learn from their mistakes.
I'll admit, I find HN to be more transparently moderated than Reddit and Twitter, but let's not pretend people have stopped trying to game the system. The more secret the rules (and how they are applied), the more a system serves a handful of people who have learned the secret tricks.
Meanwhile, regular users who are not platform experts trust these systems to be transparent. Trusting users spend their energy innovating elsewhere, and they are the ones disrupted by secret tricks they never expected.
how is that? i can understand it not being useful, but how would it help spammers?
Secret suppression is extremely common [1].
Many of today's content moderators say exceptions for shadowbans are needed [2]. They think lying to users promotes reality. That's baloney.
[1] https://www.removednews.com/p/hate-online-censorship-its-way...
i can't see how shadowbanning makes things worse for good-faith users. and evidently it does work against spammers here on HN (though we don't know if it is the shadow or the banning that makes it effective, but i'll believe dang when he says that it does help)
It's about whose messages are sidelined, not who gets discouraged.
With shadow removals, good-faith users' content is elbowed out without their knowledge. Since they don't know about it, they don't adjust their behavior, and they don't take their contributions elsewhere.
Over 50% of Reddit users have had content removed without their knowledge. Just look at what people say when they find out [1].
> and evidently it does work against spammers here on HN
It doesn't. It benefits people who know how to work the system. The more secret it is, the more special knowledge you need.
I once had the domain 'moronsinahurry' registered, though not with this group in mind...
Yes. And it's really not a close question.
"Regular users" don't have to be platform experts and learn tricks and stuff. They just post normal links and comments and never run into moderation at all.
On the contrary, secret suppression is extremely common. Every social media user has probably been moderated at some point without their knowledge.
Look up a random reddit user. Chances are they have a removed comment in their recent history, e.g. [1].
All comment removals on Reddit are shadow removals. If you use Reddit with any frequency, you'll know that mods almost never go out of their way to notify users about comment removals.
[1] https://www.reveddit.com/y/Sariel007/
archive: https://archive.is/GNudB
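If you want to run that check yourself, the comparison that reveddit-style tools make can be sketched roughly like this. This is my own illustration, not Reveddit's code, and it assumes the public JSON endpoints behave the way they have in my experience (the author's profile listing still returns the original text, while fetching the same comment by ID shows "[removed]" to everyone else):

    import json
    import urllib.request

    HEADERS = {"User-Agent": "removed-comment-check (illustrative script)"}

    def fetch_json(url: str) -> dict:
        req = urllib.request.Request(url, headers=HEADERS)
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.load(resp)

    def likely_removed_comments(username: str, limit: int = 25) -> list[str]:
        # 1. What the author sees: their profile listing still carries the comment text.
        profile = fetch_json(
            f"https://www.reddit.com/user/{username}/comments.json?limit={limit}"
        )
        comments = [c["data"] for c in profile["data"]["children"]]
        if not comments:
            return []
        # 2. What everyone else sees: re-fetch the same comments by fullname (t1_xxx).
        ids = ",".join(c["name"] for c in comments)
        public = fetch_json(f"https://www.reddit.com/api/info.json?id={ids}")
        public_bodies = {
            c["data"]["name"]: c["data"]["body"] for c in public["data"]["children"]
        }
        # 3. Flag comments whose public body comes back as "[removed]".
        return [
            c["permalink"]
            for c in comments
            if public_bodies.get(c["name"]) == "[removed]" and c["body"] != "[removed]"
        ]

    print(likely_removed_comments("Sariel007"))  # the account from [1]

The asymmetry is the whole point: the author's view and the public view disagree, and the author has no way to notice unless they go looking.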
No research has been done about whether shadow moderation is good or bad for discourse. It was simply adopted by the entire internet because it's perceived as "easier." Indeed, for platforms and advertisers, it certainly is an easier way to control messaging. It fools good-faith users all the time. I've shared examples of that elsewhere in this thread.
The internet has run on secrets for 40 years. That doesn't make it right. Now that everyone and their mother is online, it's time to consider the harms that secrets create.
Another commenter argued "Increasing cost of attacks is an effective defense strategy" [1].
I argued it is not, and you said adding a delay can cut out bad stuff. Delays are certainly relevant to the main post, but that's not what I was referring to. And I certainly don't argue against using secrets for personal security! Securitizing public discourse, however, is another matter.
Can you elaborate on GreetPause? Was it to prevent a DDoS? I don't understand why bad requests couldn't just be rejected.
[1] >>37130143
https://www.revsys.com/tidbits/greet_pause-a-new-anti-spam-f...
I get several thousand spam attempts per day; I estimate that this one technique kills a large fraction of them. And look how old the feature is...
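For anyone who hasn't looked at how it works: the server simply holds back its SMTP greeting for a few seconds, and anything that starts talking before the banner arrives (which a lot of spamware does) gets rejected. Here is a toy sketch of the concept in Python, not sendmail's implementation; the port, delay and reply text are made up for the demo:

    import socket

    PAUSE_SECONDS = 5
    LISTEN_ADDR = ("0.0.0.0", 2525)  # unprivileged port for the demo

    def handle(conn: socket.socket) -> None:
        conn.settimeout(PAUSE_SECONDS)
        try:
            early = conn.recv(1024)   # anything received now is pre-greeting traffic
        except socket.timeout:
            early = b""               # the client stayed quiet, as a well-behaved MTA should
        if early:
            conn.sendall(b"554 5.7.1 command rejected: you spoke before the greeting\r\n")
            conn.close()
            return
        conn.settimeout(None)
        conn.sendall(b"220 example.invalid ESMTP (greet_pause demo)\r\n")
        # ... a real server would carry on with the SMTP conversation here ...
        conn.close()

    def main() -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(LISTEN_ADDR)
            srv.listen()
            while True:
                conn, _ = srv.accept()
                try:
                    handle(conn)
                except OSError:
                    pass  # a dropped connection shouldn't kill the demo server

    if __name__ == "__main__":
        main()

In sendmail itself this is the greet_pause FEATURE, with the delay given in milliseconds, if I remember the macro correctly.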
I don't consider GreetPause to be a form of shadow moderation because the sender knows the commands were rejected. The issue with shadow moderation on platforms is that the system shows you one thing while showing others something else.
Legally speaking, I have no problem with shadow moderation. I only argue it's morally wrong and bad for discourse. It discourages trust and encourages the growth of echo chambers and black-and-white thinking.
No such spam folder is provided to the public on social media.
Only if the recipient sent a false response.
If the response were misrepresented then I would object to the technique. But it doesn't sound like that's what happens.