Try submitting a URL from the following domains, and it will be automatically flagged (but you can't see it's flagged unless you log out):
- archive.is
- watcher.guru
- stacker.news
- zerohedge.com
- freebeacon.com
- thefederalist.com
- breitbart.com
Hacker News isn't an open-ended political site for people to post weird propaganda.
Edit: about 67k sites are banned on HN. Here's a random selection of 10 of them:
vodlockertv.com
biggboss.org
infoocode.com
newyorkpersonalinjuryattorneyblog.com
moringajuice.wordpress.com
surrogacymumbai.com
maximizedlivingdrlabrecque.com
radio.com
gossipcare.com
tecteem.com
We probably banned it for submissions because we want original sources at the top level.
Is it censorship that the rules of chess say you can't poke someone's queen off the board? We're trying to play a particular game here.
Perhaps it's one of those things that are hard to define. [1] But that doesn't mean clear cases don't exist.
> Is it censorship that the rules of chess say you can't poke someone's queen off the board? We're trying to play a particular game here.
No, but it is clearly political censorship if you only apply the unwritten and secret "rules" of the game to a particular political faction. Also, banning entire domain names is definitely heavy-handed.
I don't think that makes sense. The supposed spammers can just try looking up whether their submissions show up or not when not logged in.
I remember some words that succinctly express something I often observe. To paraphrase:
> Left-wing and Right-wing are terms which make a lot of people falsely believe that they disagree with each other.
It is worth trying to find common ground with people “on the other side”.
Then why isn't web.archive.org also banned? [1] And what about things which aren't available from the original source anymore?
[1]: >>37130420
I mostly agree. I argued in an article [1] that it's only censorship if the author of the content is not told about the action taken against the content.
These days though, mods and platforms will generally argue that they're being transparent by telling you that it happens. When it happens is another story altogether that is often not shared.
[1] https://www.removednews.com/p/twitters-throttling-of-what-is...
In fact, such secrecy benefits spammers. Good-faith users never imagine that platforms would secretly action content. So when you look at overall trends, bots, spammers and trolls are winning while genuine users are being pushed aside.
I argued that secrecy benefits trolls in a blog post, but I don't want to spam links to my posts in the comments.
It’s not secret, because they’ll be provided an answer if they email the mod team.
It’s not free as in open source, because it isn’t available for anyone to download and study in full.
So, since it’s not secret, is it public, or private? Since it’s not published in full but any query of LIMIT 1 is answered, is that open, closed, or other?
Restrictions to publication don’t necessarily equate to secrecy, but the best I’ve got is “available upon request”, which isn’t quite right either. Suggestions welcome.
I can assure you that is not the case with HN when posting archive.is URLs. Proof?
Look at my comment postings: https://news.ycombinator.com/threads?id=archo
Is it possible you have been shadow-banned for poor compliance with the Guidelines [1] & FAQs [2]?
It's not banned in comments, but it is banned in submissions. @dang (HN's moderator) confirms that here: >>37130177
Even Cory Doctorow made this case in "Como is Infosec" [1].
The only problem with Cory's argument is that he points people to the Santa Clara Principles (SCP) [2]. The SCP contain exceptions for not notifying about "spam, phishing or malware." But anything can be considered spam, and transparency-with-exceptions has always been platforms' position. They've always argued they can secretly remove content when it amounts to "spam." Nobody has challenged them on that point. The reality is that platforms which use secretive moderation lend themselves to spammers.
[1] https://doctorow.medium.com/como-is-infosec-307f87004563
> Please submit the original source. If a post reports on something found on another site, submit the latter.
And explained on numerous occasions by dang.
That said, dailykos.com seems to be banned. Happy now?
The opposite would be to show the author of the content some indicator that it's been removed, and I would call that transparent or disclosed moderation.
Interestingly, your comment first appeared to me as "* * *" with no author [2]. I wonder if that is some kind of ban.
[1] https://www.youtube.com/watch?v=8e6BIkKBZpg
[2] https://i.imgur.com/oGnXc6W.png
Edit: I know you commented again, but it's got that "* * *" thing again:
"This domain is not allowed on HN" as an error message upon submission.
Not exactly cherry-picked, these were from things I submitted myself and noticed that were shadow flagged.
> That said, dailykos.com seems to be banned. Happy now?
No, I'd be happy when archive.is, Federalist and the rest of the non-spammy ones are unbanned. (Also, even if "balanced" censorship was the desired goal, having a single unreliable left-wing source banned vs a ton of right-wing ones doesn't really achieve that.)
Definitely not random, in any case.
> Also, even if "balanced" censorship was the desired goal,
Nobody claimed that. You merely stated that "I don't see a single left-wing new source in there." and I offered a counter-point.
> having one left-wing source vs a ton of right-wing one doesn't achieve that
I didn't do an exhaustive search for "left-wing domains" that are banned to present you a complete list, this was attempt 1 of 1.
Following your model, I could claim that 100% of left-wing domains are banned, but I won't.
Because web.archive.org is generally used for...
... things which aren't available from the original source anymore.
While archive.is is generally used to bypass paywalls. These 2 websites have 2 very distinct missions and use-cases.
Plenty of both left- and right-wing sites are banned and/or downweighted on HN. When a site is primarily about political battle, we either ban it or downweight it. Which of the two we choose depends on how likely the site is to produce the occasional interesting article (in HN's sense of the word "interesting"). That's why The Federalist and World Workers Daily (or whatever it's called) are banned, while National Review and Jacobin are merely downweighted. Both the Guardian and Daily Beast are downweighted, btw, as are most major media sites.
If you or anyone thinks that HN moderation is unfairly ideologically biased, I'm open to the critique, but you guys need to first look at the site as it actually is, and not just look at your own pre-existing perceptions. Every data point becomes a Convincing Proof when you do the latter.
People think that when their team gets moderated, the mods are OMG obviously on the other side. The Other Side feels exactly the same way. This "they're against me" perception is the most reliable phenomenon I've observed on HN. Leftists feel it, rightists feel it, Go programmers feel it, even Rust programmers feel it. Literally the very-most-popular topic on HN at any moment is perceived by someone as Viciously Suppressed because of this perception. Stop and think about that—it's kind of amazing. Someone should write a PhD thesis.
It's basically HN, but you can earn small tips for submissions and comments.
For example, I've linked to my work, but it never occurred to me to use "Show HN".
Maybe this is no big deal? Or perhaps for new signups, it would be good to “soft force” them to read the FAQ?
Re the 'delay' setting see https://news.ycombinator.com/newsfaq.html.
I haven't dug into the logs, but most probably we saw that https://news.ycombinator.com/submitted?id=thebottomline was spamming HN and banned the sites that they were spamming.
Edit: if you (i.e. anyone) click on those links and don't see anything, it's because we killed the posts. You can turn on 'showdead' in your profile to see killed posts. (This is in the FAQ: https://news.ycombinator.com/newsfaq.html.) Just please don't forget that you turned it on, because it's basically signing up to see the worst that the internet has to offer, and sometimes people forget that they turned it on and then email us complaining about what they see on HN.
You're dang right, trying to play a particular [rigged] game here.
Of the 67k sites banned on HN I would guess that fewer than 0.1% are "news sources", left- or right- or any wing. Why would you expect them to show up in a random sample of 10?
* which it is! I've unkilled >>1236054 for the occasion.
I agree that publishing case (1) causes harm (spammers will just use a different domain if they know you’ve blocked theirs.) But case (2) is rather different. I don’t think the same justification for lack of transparency exists in this case. And I think shadow-banning the submission in case (2) is not very user-friendly. It would be better to just display an error, e.g. “submissions from this site are blocked because we do not believe it is suitable for HN” (or whatever). A new user might post stuff like (2) out of misunderstanding what the site is about rather than malevolence, so better to directly educate them than potentially leave them ignorant. Also, while Breitbart is rather obviously garbage, since we don’t know everything in category (2) on the list, maybe there are some sites on it whose suitability is more debatable or mixed, and its inappropriateness may be less obvious to someone than Breitbart’s (hopefully) is.
Content curation is necessary, but shadow moderation is not helping. When a forum removes visible consequences, it does not prepare its users to learn from their mistakes.
I'll admit, I find HN to be more transparently moderated than Reddit and Twitter, but let's not pretend people have stopped trying to game the system. The more secret the rules (and how they are applied), the more a system serves a handful of people who have learned the secret tricks.
Meanwhile, regular users who are not platform experts trust these systems to be transparent. Trustful users spend more time innovating elsewhere, and they are all disrupted by unexpected secretive tricks.
how is that? i can understand it not being useful, but how would it help spammers?
Secret suppression is extremely common [1].
Many of today's content moderators say exceptions for shadowbans are needed [2]. They think lying to users promotes reality. That's baloney.
[1] https://www.removednews.com/p/hate-online-censorship-its-way...
*Guesses it's crypto bullshit*
*Goes to website*
Yep, exactly as expected. Karma alone can mess with incentives; I cannot imagine that adding a monetary incentive does anything but make it worse. Also, from everything I've experienced first-hand or read, crypto has the reverse Midas touch, so adding that into the mix is just another black mark.
If you're going to censor someone, you owe it to them to be honest about what you're doing to them.
i can't see how shadowbanning makes things worse for good-faith users. and evidently it does work against spammers here on HN (though we don't know if it is the shadow or the banning that makes it effective, but i'll believe dang when he says that it does help)
Out of curiosity, what's the rationale for blocking archive.is? Legal reasons I assume?
Since when have moderation actions and relevant data been made available to the lay public here? We cannot look at the site as it actually is. We either have to trust you or pound sand.
> Stop and think about that—it's kind of amazing. Someone should write a PhD thesis about it.
Just because (you think) everyone feels persecuted doesn't mean you're doing a good job keeping things level. It's a common joke to make but it's just a joke. Similarly, if both a rampant nazi and a fierce tankie hate you, that doesn't make you a bastion of democracy. "Fairness" doesn't mean pissing off everyone equally, and that is neither a necessary nor a sufficient condition.
These are just minor notes, don't take them too seriously
It's about whose messages are sidelined, not who gets discouraged.
With shadow removals, good-faith users' content is elbowed out without their knowledge. Since they don't know about it, they don't adjust behavior and do not bring their comments elsewhere.
Over 50% of Reddit users have had content removed that they don't know about. Just look at what people say when they find out [1].
> and evidently it does work against spammers here on HN
It doesn't. It benefits people who know how to work the system. The more secret it is, the more special knowledge you need.
Archive.is shouldn't ever need to be the primary site. Post a link to the original and then a comment to the archive site if there's the possibility of take down or issues with paywalls.
It is likely that people were using archive.is to avoid posting the original domain, masking the content it presented.
(Even when doing the RightThing(TM) would probably be easier...)
And, BTW, I occasionally get blocked by the mechanisms here, even though not doing anything bad, but understand that there is a trade-off.
I once had the domain 'moronsinahurry' registered, though not with this group in mind...
Yes. And it's really not a close question.
"Regular users" don't have to be platform experts and learn tricks and stuff. They just post normal links and comments and never run into moderation at all.
The one that I think makes the most clear sense is "censorship" by a state power. But you must be thinking of something different, because HN is not a state power.
For example, a recent submission (of mine):
"Luis Buñuel: The Master of Film Surrealism"
it had no discussion space because (I guess) it comes from fairobserver.com. Now, I understand that fairobserver.com may have been a hive of dubious publishing historically, but it makes little sense that we cannot discuss Buñuel...
Maybe a rough discriminator (function approximator, Bayesian etc.) could try and decide (based at least on the title) whether a submission from "weak editorial board" sites seems to be material to allow posts or not.
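To sketch what I mean, here is a toy Laplace-smoothed unigram scorer over titles. This is purely illustrative (the training titles are invented, and nothing like this is HN's actual system); a real discriminator would need far more signal than the title alone.

```python
# Toy title discriminator: score a title against word frequencies from
# past "good" and "bad" submissions of a downweighted domain, and let
# the better-scoring class win. All training data below is invented.
import math
from collections import Counter

def train(titles):
    words = Counter(w for t in titles for w in t.lower().split())
    return words, sum(words.values())

def log_prob(title, model, vocab_size):
    words, total = model
    # Laplace-smoothed unigram log-likelihood of the title
    return sum(math.log((words[w] + 1) / (total + vocab_size))
               for w in title.lower().split())

good = train(["the master of film surrealism", "history of the transistor"])
bad = train(["ten shocking celebrity secrets", "you won't believe this trick"])
vocab = len(set(good[0]) | set(bad[0]))

def allow(title):
    """Allow the submission if it looks more like past good titles."""
    return log_prob(title, good, vocab) >= log_prob(title, bad, vocab)
```

With models this tiny the decision is just keyword overlap, but the same shape scales to real frequency tables.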
:D
> Someone should write a PhD thesis about it
From one perspective, it could be related to Multi-Agent Systems (maybe with reference also to Minsky and H. Simon), as a consequence of the narrow view of the single agent, and/or an intrinsic fault of resource optimization.
Unless HN is suddenly the government, what you've mislabeled is moderation, not censorship. Calling it censorship just exaggerates your opinion and makes you look unhinged. It's a private website, not national news.
>>498910
That grew fairly rapidly, it was at 38,719 by 30 Dec 2012:
>>4984095 (a random 50 are listed).
I suspect that overwhelmingly the list continues to reflect the characteristics of its early incarnations.
I really like this take on moderation:
"The essential truth of every social network is that the product is content moderation, and everyone hates the people who decide how content moderation works. Content moderation is what Twitter makes — it is the thing that defines the user experience."
From Nilay Patel in https://www.theverge.com/2022/10/28/23428132/elon-musk-twitt...
Re shadowbanning (i.e. banning a user without telling them), see the past explanations at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... and let me know if you still have questions. The short version is that when an account has an established history, we tell them we're banning them and why. We only shadowban when it's a spammer or a new account that we have reason to guess is a serial abuser.
The parts that don't work especially well, most particularly discussion of difficult-but-important topics (in my view) ... have also been acknowledged by its creator pg (Paul Graham) and mods (publicly, dang, though there are a few others).
In general: if you submit a story and it doesn't go well, drop a note to the moderators: hn@ycombinator.com. They typically reply within a few hours, perhaps a day if things are busy or the issue is complex.
You can verify that a submission did or didn't go through by checking on the link from an unauthenticated (logged-out) session.
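That check could even be scripted. A sketch (the URL pattern and the "[dead]"/"[flagged]" markers are my assumptions about HN's public HTML, not documented behavior):

```python
# Sketch of the logged-out check described above. Assumption (mine):
# a killed post either drops its title from the public item page, or
# shows a "[dead]" / "[flagged]" marker next to it.
from urllib.request import urlopen

def looks_killed(html: str, title: str) -> bool:
    """Heuristic check of a fetched item page."""
    return title not in html or "[dead]" in html or "[flagged]" in html

def submission_killed(item_id: int, title: str) -> bool:
    # No cookies are sent, so this fetches the logged-out view.
    url = f"https://news.ycombinator.com/item?id={item_id}"
    html = urlopen(url).read().decode("utf-8", errors="replace")
    return looks_killed(html, title)
```

A heuristic like this can false-positive (e.g. if a moderator retitles the post), so treat it as a hint, not proof.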
That domain is a borderline case. Sometimes the leopard really changes its spots, i.e. a site goes from offtopic or spam to one that at least occasionally produces good-for-HN articles. In such cases we simply unban it. Other times, the general content is still so bad for HN that we have to rely on users to vouch for the occasional good submission, or to email us and get us to restore it. I can't quite tell where fairobserver.com is on this spectrum because the most recent submission (yours) is good, the previous one (from 7 months ago) is borderline, and before that it was definitely not good. But I've unbanned it now and moved it into the downweighted category, i.e. one notch less penalized.
We don't publish a moderation log for reasons I've explained over the years. If you or anyone wants to know more, see the past explanations at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... and let me know if you still have questions.
Not publishing a mod log doesn't mean that we don't want to be transparent, it means that there's a tradeoff between transparency and other concerns. Our resolution of the tradeoff is to answer questions when we get asked. That's not absolute transparency but it's not nothing. Sometimes people say "well but why should we trust that", but they would say that about a moderation log as well.
Re your second paragraph: I agree! and I don't think I've claimed otherwise. In fact, the lazy centrist argument is a pet peeve (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
It's true that the way I post about these things ("both sides hate us") gets mistaken for the obvious bad argument ("therefore we must be in the happy middle", or as Scott Thompson put it years ago, "we're the porridge that Goldilocks ate!"), but that's because the actual argument is harder to lay out and I'm not sure that anybody cares.
This is a big problem with trying to explain these things - people mean very different things by the same words, and it leads to misunderstanding.
Re archive.is - see >>37130177
As for "why archive.org and not archive.is" - that's a bit of a borderline call, but gouggoug pointed out some of it at >>37130890. The set of articles which (a) are no longer on the web, (b) are not on archive.org, but (c) are on archive.is, isn't that big. Paywall workarounds are a different thing, because the original URLs are still on the web (albeit paywalled). For those, we want the original URL at the top level, because it's important for the domain to appear beside the title.
Otherwise, HN's rule is to "submit the original source": <https://news.ycombinator.com/newsguidelines.html>
I suppose that might be clarified as "most original or canonical", but Because Reasons HN's guidelines are written loosely and interpreted according to HN's Prime Directive: "anything that gratifies one's intellectual curiosity" (>>508153).
On the other hand, no single political or ideological position has a monopoly on intellectual curiosity either—so by the same principle, HN can't be moderated for political or ideological position.
It's tricky because working this way conflicts with how everyone's mind works. When people see a politically charged post X that they don't like, or when they see a politically charged post Y that they do like, but which we've moderated, it's basically irresistible to jump to the conclusion "the mods are biased". This is because what we see in the first place is conditioned by our preferences - we're more likely to notice and to put weight on things we dislike (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). People with opposite preferences notice opposite data points and therefore "see" opposite biases. It's the same mechanism either way.
In reality, we're just trying to solve an optimization problem: how can you operate a public internet forum to maximize intellectual curiosity? That's basically it. It's not so easy to solve though.
Moderation is the removal of content that objectively doesn't belong in context, e.g. spam.
Obviously that definition of moderation is nuanced, because some could argue that Marxist ideas don't belong in the context of a site with a foundation in startups. And indeed Marxist ideas often get flagged here.
I suppose a sufficiently motivated spammer might incorporate that as a submission workflow check.
So far as I'm aware, no, and there are comments from dang and pg going back through the site history which argue strongly against distinguishing groups of profiles in any way.
The one possible exception is that YC founders' handles appeared orange to one another at one point in time (pg discusses this in January 2013: >>5025168). The feature was disabled for performance reasons.
Dang mentions the feature still being active as of a year ago: >>31727636
I seem to recall a pg or dang discussion where showing this publicly created a social tension on the site, as in, one set of people distinguished from another.
dang discusses the (general lack of) secret superpowers here: >>22767204, which reiterates what's in the FAQ:
HN gives three features to YC: job ads (see above) and startup launches get placed on the front page, and YC founder names are displayed to other YC alumni in orange.
<https://news.ycombinator.com/newsfaq.html>
Top-100 karma lands you on the leaderboard: <https://news.ycombinator.com/leaders>. That's currently 41,815+ karma. There are also no special privileges here other than occasionally being contacted by someone. (I've had inquiries about dealing with the head-trip of being on the leaderboard, and a couple of requests to boost submissions, which I forward to the moderation team).
0.02% of 10,000 is 2 - pretty small
0.02% of 1,000,000,000 is 200,000 ... kinda big :)
> Moderation is the normal business activity of ensuring that your customers like using your product. If a customer doesn’t want to receive harassing messages, or to be exposed to disinformation, then a business can provide them the service of a harassment-and-disinformation-free platform.
> Censorship is the abnormal activity of ensuring that people in power approve of the information on your platform, regardless of what your customers want. If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information, that’s censorship.
Censorship is somewhat subjective: something that you might find offensive and want moderated might not be considered so by others. Therefore, Alexander further argues that the simplest mechanism that turns censorship into moderation is a switch that, when enabled, lets you see the banned content, which is exactly what HN does. He also argues that there are kinds of censorship that aren't necessarily bad: by this definition, disallowing pedophiles from sharing child porn with each other is censorship, but it's something that we should still do.
[1] https://astralcodexten.substack.com/p/moderation-is-differen...
I'd run across an instance of this when the Diaspora* pod I was on (the original public node, as it happens) ceased operations. I found myself wanting to archive my own posts, and was caught in something of a dilemma:
- The Internet Archive's Wayback Machine has a highly-scriptable method for submitting sites, in the form of a URL (see below). Once you have a list of pages you want to archive, you can chunk through those using your scripting tool of choice (for me, bash, and curl or wget typically). But it doesn't capture the comments on Diaspora* discussions.... E.g., <https://web.archive.org/web/20220111031247/https://joindiasp...>
- Archive.Today does not have a mass-submission tool, and somewhat aggressively imposes CAPTCHAs at times. So the remaining option is manual submissions, though those can be run off a pre-generated list of URLs which somewhat streamlines the process. And it does capture the comments. E.g., <https://archive.is/9t61g>
So, if you are looking to archive material, Archive Today is useful, if somewhat tedious in bulk.
(Which is probably why the Internet Archive is the far more comprehensive Web archive.)
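The chunked Wayback workflow above can be sketched in a few lines (Python here rather than bash/curl; the web.archive.org/save/ endpoint is the one referred to, but the pacing and error handling are my guesses, so test on a couple of URLs first):

```python
# Sketch of the chunked Wayback submission flow described above.
# Pacing is a politeness guess, not a documented rate limit.
import time
from urllib.request import urlopen

def save_url(url: str) -> str:
    """Build the Wayback Machine capture URL for a page."""
    return f"https://web.archive.org/save/{url}"

def archive_all(urls, pause_seconds=5.0):
    for url in urls:
        urlopen(save_url(url))      # trigger the capture
        time.sleep(pause_seconds)   # space out submissions
```

Feed `archive_all` the pre-generated list of page URLs and let it churn.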
Operators of public sites should NOT have to pay that tax. So you at best are not fully aware of the actual cost, IMHO.
Congrats to HN for striking a reasonable pragmatic balance.
*I had some of the first live (non-academic) Internet connectivity in the UK, and the very very first packets were hacking attempts...
Blame the trolls that prevent us from having nice things.
I would say that it contains chiefly a political part and a cultural part. Some of the pieces in the political part can be well done, informative and interesting, while others are determined to just blurt out partisan views, arguments not included.
Incidentally: such "polarized literature" seems abundant in today's "globalized" world (where, owing to "strong differences", the sieve of acceptability can have very large gaps). It is also occasionally found here in posts on HN (one of the latest instances just a few browsed pages ago): the occasional post that just states "A is B" with no justification, no foundation for the statement, without realizing that were we interested in personal opinions there are ten billion sources available. And if we had to check them all, unranked, an image like Borges' La Biblioteca de Babel could appear: any opinion could be found at some point in the library.
Yes, I have (now) noticed a few contributors (some very prolific) in the Fair Observer are substantially propaganda writers.
But the cultural part, https://www.fairobserver.com/category/culture/ , seems to more consistently contain quality material, with some articles potentially especially interesting. In this area, I have probably seen more bias on some mainstream news outlets.
I think the revolution underway in journalism today includes this magazine: the model of The Economist, of having a strong, prestigious and selective editorial board (hence its traditional anonymity of contributors), is now the exception, so you no longer read the Magazine but the Journalist. The Magazine today will often publish articles from just about anyone; the Reader today has the burden of selecting Journalists and following them.
--
I will write you in a few hours for the repost, thank you.
> You can verify that a submission did or didn't go through by checking on the link from an unauthenticated (logged-out) session.
Trustful users do not think to do this, and it would not be necessary if the system did not keep the mod action secret.
On the contrary, secret suppression is extremely common. Every social media user has probably been moderated at some point without their knowledge.
Look up a random reddit user. Chances are they have a removed comment in their recent history, e.g. [1].
All comment removals on Reddit are shadow removals. If you use Reddit with any frequency, you'll know that mods almost never go out of their way to notify users about comment removals.
[1] https://www.reveddit.com/y/Sariel007/
archive: https://archive.is/GNudB
How can one see the site "as it actually is" when the decisions are kept secret from submitters?
> People think that when their team gets moderated, the mods are OMG obviously on the other side. The Other Side feels exactly the same way.
This will always be a thing. But it's also true that society is more divided now than it was 20 years ago. We find ourselves unable to communicate across ideological divides and we resort to shouting or in some cases violence. Some effort must be made to improve communication, and transparency for content authors is a minimal step towards that.
No research has been done about whether shadow moderation is good or bad for discourse. It was simply adopted by the entire internet because it's perceived as "easier." Indeed, for platforms and advertisers, it certainly is an easier way to control messaging. It fools good-faith users all the time. I've shared examples of that elsewhere in this thread.
(I'll occasionally note an egregiously-behaving account that doesn't seem to have been already banned.)
Those who have been advised to do so, through the Guidelines, FAQ, comments, or moderator notes, do, to their advantage.
(I'd had a submission shadowbanned as it came from the notoriously flameworthy site LinkedIn a month or few back. I noticed this, emailed the mods, and got that post un-banned. Just to note that the process is in place, and does work.)
I've done this on multiple occasions, e.g.: >>36191005
As I commented above, HN operates through indirect and oblique means. Ultimately it is a social site managed through culture. And the way that this culture is expressed and communicated is largely through various communications --- the site FAQ and guidelines, dang's very, very, very many moderation comments. Searching for his comments with "please" is a good way to find those, though you can simply browse his comment history:
- "please" by dang: <https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...>
- dang's comment history: <https://news.ycombinator.com/threads?id=dang>
Yes, it means that people's feelings get hurt. I started off here (a dozen years ago) feeling somewhat the outsider. I've come to understand and appreciate the site. It's maintained both operation and quality for some sixteen years, which is an amazing run. If you go back through history, say, a decade ago, quality and topicality of both posts and discussions are remarkably stable: <https://news.ycombinator.com/front?day=2013-08-14>.
If you do have further concerns, raise them with dang via email: <hn@ycombinator.com> He does respond, he's quite patient, might take a day or two for a more complex issue, but it will happen.
And yes, it's slow, inefficient, and lossy. But, again as the site's history shows, it mostly just works, and changing that would be a glaring case of Chesterton's Fence: <https://hn.algolia.com/?q=chesterton%27s+fence>.
But that's selective education. You don't do it for every shadow moderated comment. The trend is still that shadow moderation more often disadvantages trustful users. Will you acknowledge that harm?
Over 50% of Reddit users have a removed comment in their recent history that they likely were not told about. When shadow moderation is in play, abuse runs rampant among both mods and users. Both find more and more reasons to distrust each other.
[0] https://deer-run.com/users/hal/sysadmin/greet_pause.html
The internet has run on secrets for 40 years. That doesn't make it right. Now that everyone and their mother is online, it's time to consider the harms that secrets create.
Another commenter argued "Increasing cost of attacks is an effective defense strategy."
I argued it is not, and you said adding a delay can cut out bad stuff. Delays are certainly relevant to the main post, but that's not what I was referring to. And I certainly don't argue against using secrets for personal security! Securitizing public discourse, however, is another matter.
Can you elaborate on GreetPause? Was it to prevent a DDOS? I don't understand why bad requests couldn't just be rejected.
[1] >>37130143
https://www.revsys.com/tidbits/greet_pause-a-new-anti-spam-f...
I get several thousand spam attempts per day: I estimate that this one technique kills a large fraction of them. And look how old the feature is...
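For reference, if I remember the sendmail macro correctly, enabling it looks roughly like this in the .mc file (a 5-second pause before the SMTP banner; clients that start talking early get rejected):

```
dnl Pause 5000 ms before sending the SMTP greeting banner.
dnl Clients that send commands before the banner are refused.
FEATURE(`greet_pause', `5000')dnl
```

The trick works because legitimate MTAs wait for the banner, while bulk spamware often fires off its commands immediately.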
I don't consider GreetPause to be a form of shadow moderation because the sender knows the commands were rejected. The issue with shadow moderation on platforms is that the system shows you one thing while showing others something else.
Legally speaking, I have no problem with shadow moderation. I only argue it's morally wrong and bad for discourse. It discourages trust and encourages the growth of echo chambers and black-and-white thinking.
No such spam folder is provided to the public on social media.
It's quite possible the reason the list isn't public is that it would give away information about what thought is allowed and what thought isn't.
Only if the recipient sent a false response.
If the response were misrepresented then I would object to the technique. But it doesn't sound like that's what happens.
This is the murkiest part to me since it's not just a binary flag.
How do you think spammers and abusers will exploit those options?
Again: HN works in general, and the historical record strongly confirms this, especially as compared with alternative platforms, Reddit included, which seems to be suffering its own failure modes presently.
The "certain way" is the experience of moderating HN. Publishing the list would help spammers know how to better circumvent it.
It was an article about Eileen O'Shaughnessy, George Orwell's wife (a subject that could raise interest, possibly also yours).
The text contained needless references to Orwell's most private matters, as if someone had been spying in Mr. Blair's rooms.
And this should tell us that hints ("well, it was published there"), while valuable for establishing a tentative initial ranking, are unfortunately not useful for reliable discrimination.
A forum should not do things that elbow out trustful people.
That means, don't lie to authors about their actioned content. Forums should show authors the same view that moderators get. If a post has been removed, de-amplified, or otherwise altered in the view for other users, then the forum should indicate that to the post's author.
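The principle above (author and moderator see the same removal status, while other users simply don't see the post) can be sketched in a few lines. Field names here are hypothetical, not any forum's actual schema:

```python
def render_for(viewer: str, comment: dict):
    """Transparent-to-the-author rendering.

    If a comment was removed, its author sees the same removal status a
    moderator would see, instead of being silently shown a live-looking
    post. Other users simply don't receive the comment.
    """
    if comment["removed"]:
        if viewer == comment["author"]:
            # The author is told, which is the whole point.
            return "[removed by moderators] " + comment["text"]
        return None  # hidden from everyone else
    return comment["text"]
```

Contrast with shadow moderation, where the `viewer == author` branch would return the bare text with no indicator at all.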
> How do you think spammers and abusers will exploit those options?
Spammers already get around and exploit all of Reddit's secretive measures. Mods regularly post to r/ModSupport about how users have circumvented bans. Now they're asking forums to require ID [1].
Once shadow moderation exists on a forum, spammers can then create their own popular groups that remove truthful content.
Forums that implement shadow moderation are not belling cats. They sharpen cats' claws.
The fact that some spammers overcome some countermeasures in no way demonstrates that:
- All spammers overcome all countermeasures.
- That spam wouldn't be far worse without those countermeasures.[1]
- That removing such blocks and practices would improve overall site quality.
I have long experience online (going on 40 years). I've designed content moderation systems, served in ops roles on multi-million-member social networks, and analyzed several extant networks (Google+, Ello, and Hacker News among them), as well as observed what happens, and what does and doesn't work, across many others.
Your quest may be well-intentioned, but it's exceedingly poorly conceived.
________________________________
Notes:
1. This is the eternal conflict of preventive measures and demonstrating efficacy. Proving that adverse circumstances would have occurred in the absence of prophylactic action is of necessity proving a counterfactual. Absent some testing regime (and even then) there's little evidence to provide. The fire that didn't happen, the deaths that didn't occur, the thefts that weren't realised, etc. HN could publish information on total submissions and automated rejections. There's the inherent problem as well of classifying submitters. Even long-lived accounts get banned (search: <https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...>). Content moderation isn't a comic-book superhero saga where the orientation of the good guys and bad guys is obvious. (Great comment on this: >>26619006)
Real life is complicated. People are shades of grey, not black or white. They change over time: "Die a hero or live long enough to become a villain." Credentials get co-opted. And for most accounts, courtesy of long-tail distributions, data are exceedingly thin: about half of all HN front-page stories come from accounts with only one submission in the front-page archive, based on my own analysis of same. They may have a broader submission history, yes, but the same distribution applies there: many, and almost always most, submissions come from people with painfully thin histories on which to judge them. And that's assuming that the tools for doing said judging are developed.
You asked me for an alternative and I gave one.
You yourself have expressed concern over HN silently re-weighting topics [1].
You don't see transparent moderation as a solution to that?
> The fact that some spammers overcome some countermeasures in no way demonstrates that...
Once spammers know the system, they can create effectively unlimited amounts of content. When a forum keeps mod actions secret, that secrecy benefits only a handful of people.
We already established that secrecy elbows out trustful people, right? Or, do you dispute that? I've answered many of your questions. Please answer this one of mine.
> That removing such blocks and practices would improve overall site quality.
To clarify my own shade of grey, I do not support shadow moderation. I support transparent-to-the-author content moderation. I also support the legal right for forums to implement shadow moderation.
[1] >>36435312
For sites in this category (i.e. not banned, but downweighted) we don't distinguish between political sites, major media sites, sensational bloggy sites and so on. They're all in the same bucket.
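The single-bucket scheme described above amounts to a flat score multiplier keyed on domain. A minimal sketch of that idea (the table, factor, and function names are illustrative assumptions, not HN's actual values or code):

```python
from urllib.parse import urlparse

# One shared bucket: political, major-media, and bloggy sites all get
# the same hypothetical multiplier, per the comment above.
DOWNWEIGHTED = {
    "stacker.news": 0.5,
    "example-megablog.com": 0.5,
}


def adjusted_score(url: str, raw_score: float) -> float:
    """Apply the flat downweight if the submission's domain is listed."""
    domain = urlparse(url).netloc.removeprefix("www.")
    return raw_score * DOWNWEIGHTED.get(domain, 1.0)
```

Note that, unlike an outright ban, a downweighted submission still appears; it just ranks lower than its raw vote count would suggest.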
Doctors smoke it
Nurses smoke it
Judges smoke it
Even lawyer too
https://en.wikipedia.org/wiki/Shadow_banning
> Shadow banning, also called stealth banning, hellbanning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.
This part covers shadow banning of votes and is basically the same as what I wrote in my previous comment, just in different words:
> partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user
And this is the part that contradicts what you wrote in your last comment:
> More recently, the term has come to apply to alternative measures, particularly visibility measures like delisting and downranking.
current 'unique porn domains' = 53,644
current adware, malware, tracking, etc. = 210,425 unique domains
What I mean to say is: I do see the logic of downgrading links to SN, because it is not usually an original source.