Go to Twitter and click on a link to any URL on "NYTimes.com" or "threads.net" and you'll see a roughly 5-second delay before t.co forwards you to the right address.
Twitter won't ban domains they don't like but will waste your time if you visit them.
I've been tracking the NYT delay ever since it was added (8/4, roughly noon Pacific time), and the delay is so consistent it's obviously deliberate.
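For anyone who wants to reproduce the measurement, a minimal tracking sketch might look like this (the t.co link and Firefox user-agent are taken from tests further down the thread; the interval and log file name are arbitrary):
$ while true; do
    printf '%s %s\n' "$(date -u +%FT%TZ)" \
      "$(curl -s -o /dev/null -w '%{time_total}' \
           -A 'Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0' \
           https://t.co/4fs609qwWt)" >> tco-delay.log    # append "timestamp seconds" per sample
    sleep 300    # sample every 5 minutes
  done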
Edit: about 67k sites are banned on HN. Here's a random selection of 10 of them:
vodlockertv.com
biggboss.org
infoocode.com
newyorkpersonalinjuryattorneyblog.com
moringajuice.wordpress.com
surrogacymumbai.com
maximizedlivingdrlabrecque.com
radio.com
gossipcare.com
tecteem.com
We probably banned it for submissions because we want original sources at the top level.
- `time wget https://t.co/4fs609qwWt` -> `0m5.389s`
- `time curl -L https://t.co/4fs609qwWt` -> `0m1.158s`
Perhaps it's one of those things that are hard to define. [1] But that doesn't mean clear cases don't exist.
> Is it censorship that the rules of chess say you can't poke someone's queen off the board? We're trying to play a particular game here.
No, but it is clearly political censorship if you only apply the unwritten and secret "rules" of the game to a particular political faction. Also, banning entire domain names is definitely heavy-handed.
> Selective downtime, where the troll finds that the website is down (or really slow) quite often. Not all of the time, because that would tip them off. Trolls are impatient by nature, so they eventually find a more reliable forum to troll.
https://ask.metafilter.com/117775/What-was-the-first-website...
Then why isn't web.archive.org also banned? [1] And what about things that aren't available from the original source anymore?
[1]: >>37130420
I mostly agree. I argued in an article [1] that it's only censorship if the author of the content is not told about the action taken against the content.
These days though, mods and platforms will generally argue that they're being transparent by telling you that it happens. When it happens is another story altogether that is often not shared.
[1] https://www.removednews.com/p/twitters-throttling-of-what-is...
I can assure you that is not the case with HN for posting archive.is URLs. Proof?
Look at my comment postings : https://news.ycombinator.com/threads?id=archo
Is it possible you have been shadow-banned for poor compliance with the [1] Guidelines & [2] FAQs?
It's not banned in comments, but it is banned in submissions. @dang (HN's moderator) confirms that here: >>37130177
Even Cory Doctorow made this case in "Como is Infosec" [1].
The only problem with Cory's argument is, he points people to the SC Principles [2]. The SCP contain exceptions for not notifying about "spam, phishing or malware." But anything can be considered spam, and transparency-with-exceptions has always been platforms' position. They've always argued they can secretly remove content when it amounts to "spam." Nobody has challenged them on that point. The reality is, platforms that use secretive moderation lend themselves to spammers.
[1] https://doctorow.medium.com/como-is-infosec-307f87004563
The opposite would be to show the author of the content some indicator that it's been removed, and I would call that transparent or disclosed moderation.
Interestingly, your comment first appeared to me as "* * *" with no author [2]. I wonder if that is some kind of ban.
[1] https://www.youtube.com/watch?v=8e6BIkKBZpg
[2] https://i.imgur.com/oGnXc6W.png
Edit: I know you commented again, but it's got that "* * *" thing again:
curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/117.0" -I "https://t.co/4fs609qwWt"
x-response-time: 4521
Re the 'delay' setting see https://news.ycombinator.com/newsfaq.html.
I haven't dug into the logs, but most probably we saw that https://news.ycombinator.com/submitted?id=thebottomline was spamming HN and banned the sites that they were spamming.
Edit: if you (i.e. anyone) click on those links and don't see anything, it's because we killed the posts. You can turn on 'showdead' in your profile to see killed posts. (This is in the FAQ: https://news.ycombinator.com/newsfaq.html.) Just please don't forget that you turned it on, because it's basically signing up to see the worst that the internet has to offer, and sometimes people forget that they turned it on and then email us complaining about what they see on HN.
Of the 67k sites banned on HN I would guess that fewer than 0.1% are "news sources", left- or right- or any wing. Why would you expect them to show up in a random sample of 10?
* which it is! I've unkilled >>1236054 for the occasion.
Secret suppression is extremely common [1].
Many of today's content moderators say exceptions for shadowbans are needed [2]. They think lying to users promotes reality. That's baloney.
[1] https://www.removednews.com/p/hate-online-censorship-its-way...
- `time curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/81.0" -L https://t.co/4fs609qwWt` -> 4.730 total
- `time curl -L https://t.co/4fs609qwWt` -> 1.313 total
Same request, the only difference is user-agent.
nitter.net was historically a little less reliable for me due to rate limiting, which is why I initially switched. They worked around the rate limiting issue now, so that may no longer be the case.
It's about whose messages are sidelined, not who gets discouraged.
With shadow removals, good-faith users' content is elbowed out without their knowledge. Since they don't know about it, they don't adjust behavior and do not bring their comments elsewhere.
Over 50% of Reddit users have had content removed that they don't know about. Just look at what people say when they find out [1].
> and evidently it does work against spammers here on HN
It doesn't. It benefits people who know how to work the system. The more secret it is, the more special knowledge you need.
Archive.is shouldn't ever need to be the primary site. Post a link to the original and then a comment to the archive site if there's the possibility of take down or issues with paywalls.
It is likely that people were using archive.is for trying to avoid posting the original domain and masking the content that it presented.
man curl
-b, --cookie <data|filename>
(HTTP) Pass the data to the HTTP server in the Cookie header. It is supposedly the data previously received from the server in a "Set-Cookie:" line.
---
Add that option to your curl tests.
---
$ time curl -s -b -A "curl/8.2.1" -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.245s
user 0m0.087s
sys 0m0.034s
---
$ time curl -s -b -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.265s
user 0m0.103s
sys 0m0.023s
---
$ time curl -s -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.254s
user 0m0.100s
sys 0m0.018
---
>>498910
That grew fairly rapidly, it was at 38,719 by 30 Dec 2012:
>>4984095 (a random 50 are listed).
I suspect that overwhelmingly the list continues to reflect the characteristics of its early incarnations.
I really like this take on moderation:
"The essential truth of every social network is that the product is content moderation, and everyone hates the people who decide how content moderation works. Content moderation is what Twitter makes — it is the thing that defines the user experience."
From Nilay Patel in https://www.theverge.com/2022/10/28/23428132/elon-musk-twitt...
Re shadowbanning (i.e. banning a user without telling them), see the past explanations at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... and let me know if you still have questions. The short version is that when an account has an established history, we tell them we're banning them and why. We only shadowban when it's a spammer or a new account that we have reason to guess is a serial abuser.
The parts that don't work especially well, most particularly discussion of difficult-but-important topics (in my view) ... have also been acknowledged by its creator pg (Paul Graham) and mods (publicly, dang, though there are a few others).
In general: if you submit a story and it doesn't go well, drop a note to the moderators: hn@ycombinator.com. They typically reply within a few hours, perhaps a day or two if things are busy or the issue is complex.
You can verify that a submission did or didn't go through by checking on the link from an unauthenticated (logged-out) session.
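If you prefer the command line, a quick logged-out check might look like this (the username and title string are placeholders, and HN's markup can change, so treat it as a sketch):
$ curl -s 'https://news.ycombinator.com/submitted?id=YOUR_USERNAME' | grep -c 'Your submission title'
A count of 0 means the submission isn't visible to logged-out readers.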
That domain is a borderline case. Sometimes the leopard really changes its spots, i.e. a site goes from offtopic or spam to one that at least occasionally produces good-for-HN articles. In such cases we simply unban it. Other times, the general content is still so bad for HN that we have to rely on users to vouch for the occasional good submission, or to email us and get us to restore it. I can't quite tell where fairobserver.com is on this spectrum because the most recent submission (yours) is good, the previous one (from 7 months ago) is borderline, and before that it was definitely not good. But I've unbanned it now and moved it into the downweighted category, i.e. one notch less penalized.
1. Open incognito window in Chrome
2. Visit https://t.co/4fs609qwWt -> 5s delay
3. Open a second tab in the same window -> no delay
4. Close window, start a new incognito session
5. Visit https://t.co/4fs609qwWt -> 5s delay returns
We don't publish a moderation log for reasons I've explained over the years - if you or anyone wants to know more, see the past explanations at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... and let me know if you still have questions.
Not publishing a mod log doesn't mean that we don't want to be transparent, it means that there's a tradeoff between transparency and other concerns. Our resolution of the tradeoff is to answer questions when we get asked. That's not absolute transparency but it's not nothing. Sometimes people say "well but why should we trust that", but they would say that about a moderation log as well.
Re your second paragraph: I agree! and I don't think I've claimed otherwise. In fact, the lazy centrist argument is a pet peeve (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
It's true that the way I post about these things ("both sides hate us") gets mistaken for the obvious bad argument ("therefore we must be in the happy middle", or as Scott Thompson put it years ago, "we're the porridge that Goldilocks ate!"), but that's because the actual argument is harder to lay out and I'm not sure that anybody cares.
% curl -gsSIw'foo %{time_total}\n' -- https://t.co/4fs609qwWt https://t.co/iigzas6QBx | grep '^\(HTTP/\)\|\(location: \)\|\(foo \)'
HTTP/2 301
location: https://nyti.ms/453cLzc
foo 0.119295
HTTP/2 301
location: https://www.gov.uk/government/news/uk-acknowledges-acts-of-genocide-committed-by-daesh-against-yazidis
foo 0.037376
Here's a simpler test that I think replicates what I'm indicating in the GP comment, with regard to cookie handling:
Not passing a cookie to the next stage; pure GET request:
$ time curl -s -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > nocookie.html
real 0m4.916s
user 0m0.016s
sys 0m0.018s
Using `-b` to pass the cookies _(same command as above, just adding `-b`)_:
$ time curl -s -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > withcookie.html
real 0m1.995s
user 0m0.083s
sys 0m0.026s
Look at the differences in the resulting files for 'with' and 'no' cookie. One redirect works in a timely manner. The other takes the ~4-5 seconds to redirect.
Re archive.is - see >>37130177
As for "why archive.org and not archive.is" - that's a bit of a borderline call, but gouggoug pointed out some of it at >>37130890 . The set of articles which (a) are no longer on the web, (b) are not on archive.org, but (c) are on archive.is, isn't that big. Paywall workarounds are a different thing, because the original URLs are still on the web (albeit paywalled). For those, we want the original URL at the top level, because it's important for the domain to appear beside the title.
Otherwise, HN's rule is to "submit the original source": <https://news.ycombinator.com/newsguidelines.html>
I suppose that might be clarified as "most original or canonical", but Because Reasons HN's guidelines are written loosely and interpreted according to HN's Prime Directive: "anything that gratifies one's intellectual curiosity" (>>508153).
On the other hand, no single political or ideological position has a monopoly on intellectual curiosity either—so by the same principle, HN can't be moderated for political or ideological position.
It's tricky because working this way conflicts with how everyone's mind works. When people see a politically charged post X that they don't like, or when they see a politically charged post Y that they do like, but which we've moderated, it's basically irresistible to jump to the conclusion "the mods are biased". This is because what we see in the first place is conditioned by our preferences - we're more likely to notice and to put weight on things we dislike (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). People with opposite preferences notice opposite data points and therefore "see" opposite biases. It's the same mechanism either way.
In reality, we're just trying to solve an optimization problem: how can you operate a public internet forum to maximize intellectual curiosity? That's basically it. It's not so easy to solve though.
[Edit:] I'm still seeing it with threads.net:
curl -v -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/DzIiCFp7Ti
I suppose a sufficiently motivated spammer might incorporate that as a submission workflow check.
% curl -gsSIw'foo %{time_total}\n' https://t.co/DzIiCFp7Ti | grep '^\(HTTP/\)\|\(location: \)\|\(foo \)'
HTTP/2 301
location: https://www.threads.net/@chaco_mmm_room
foo 0.123137
Doesn't matter if I do an HTTP/2 HEAD or GET:
% curl -gsSw'%{time_total}\n' https://t.co/DzIiCFp7Ti
0.121503
HTTP/1.1 also shows no delay:
% curl -gsSw'%{time_total}\n' --http1.1 https://t.co/DzIiCFp7Ti
0.120044
I chalk this up to rot at X/twitter that is being fixed now that it was noticed.
So far as I'm aware, no, and there are comments from dang and pg going back through the site history which argue strongly against distinguishing groups of profiles in any way.
The one possible exception is that YC founders' handles appeared orange to one another at one point in time (pg discusses this in January 2013: >>5025168). The feature was disabled for performance reasons.
Dang mentions the feature still being active as of a year ago: >>31727636
I seem to recall a pg or dang discussion where showing this publicly created a social tension on the site, as in, one set of people distinguished from another.
dang discusses the (general lack of) secret superpowers here: >>22767204, which reiterates what's in the FAQ:
HN gives three features to YC: job ads (see above) and startup launches get placed on the front page, and YC founder names are displayed to other YC alumni in orange.
<https://news.ycombinator.com/newsfaq.html>
Top-100 karma lands you on the leaderboard: <https://news.ycombinator.com/leaders>. That's currently 41,815+ karma. There are also no special privileges here other than occasionally being contacted by someone. (I've had inquiries about dealing with the head-trip of being on the leaderboard, and a couple of requests to boost submissions, which I forward to the moderation team).
% curl -vgsSIw'> %{time_total}\n' -b -A "curl/8.2.1" https://t.co/DzIiCFp7Ti 2>&1 | grep '^\(* WARNING: \)\|\(Could not resolve host: \)\|>'
* WARNING: failed to open cookie file "-A"
* Could not resolve host: curl
curl: (6) Could not resolve host: curl
* WARNING: failed to open cookie file "-A"
> HEAD /DzIiCFp7Ti HTTP/2
> Host: t.co
> User-Agent: curl/8.1.2
> Accept: */*
>
> 0.013309
> 0.112494
> Moderation is the normal business activity of ensuring that your customers like using your product. If a customer doesn’t want to receive harassing messages, or to be exposed to disinformation, then a business can provide them the service of a harassment-and-disinformation-free platform.
> Censorship is the abnormal activity of ensuring that people in power approve of the information on your platform, regardless of what your customers want. If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information, that’s censorship.
Censorship is somewhat subjective: something that you might find offensive and want moderated might not be considered so by others. Therefore, Alexander argues that the simplest mechanism that turns censorship into moderation is a switch that, when enabled, lets you see the banned content, which is exactly what HN does. He further argues that there are kinds of censorship that aren't necessarily bad: by this definition, disallowing pedophiles from sharing child porn with each other is censorship, but it's something that we should still do.
[1] https://astralcodexten.substack.com/p/moderation-is-differen...
% curl -vgsSw'< HTTP/size %{size_download}\n' https://t.co/DzIiCFp7Ti 2>&1 | grep '^< \(HTTP/\)\|\(location: \)'
< HTTP/2 301
< location: https://www.threads.net/@chaco_mmm_room
< HTTP/size 0
https://blog.redplanetlabs.com/2023/08/15/how-we-reduced-the...
I'd run across an instance of this when the Diaspora* pod I was on (the original public node, as it happens) ceased operations. I found myself wanting to archive my own posts, and was caught in something of a dilemma:
- The Internet Archive's Wayback Machine has a highly scriptable method for submitting sites, in the form of a URL (see below, and the sketch at the end of this comment). Once you have a list of pages you want to archive, you can chunk through those using your scripting tool of choice (for me, bash, and curl or wget typically). But it doesn't capture the comments on Diaspora* discussions... E.g., <https://web.archive.org/web/20220111031247/https://joindiasp...>
- Archive.Today does not have a mass-submission tool, and somewhat aggressively imposes CAPTCHAs at times. So the remaining option is manual submissions, though those can be run off a pre-generated list of URLs which somewhat streamlines the process. And it does capture the comments. E.g., <https://archive.is/9t61g>
So, if you are looking to archive material, Archive Today is useful, if somewhat tedious in bulk.
(Which is probably why the Internet Archive is the far more comprehensive Web archive.)
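As a concrete sketch of the scripted Wayback Machine submission mentioned above (urls.txt is a hypothetical one-URL-per-line file; the /save/ endpoint is Save Page Now, which rate-limits aggressive clients, hence the sleep):
$ while read -r url; do
    # request an archive capture and print the HTTP status plus the URL that was hit
    curl -s -o /dev/null -w '%{http_code} %{url_effective}\n' "https://web.archive.org/save/$url"
    sleep 10    # be polite; Save Page Now throttles rapid-fire clients
  done < urls.txt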
% curl -gsSw'%{time_total}\n' -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/DzIiCFp7Ti
<head><noscript><META http-equiv="refresh" content="0;URL=https://www.threads.net/@chaco_mmm_room"></noscript><title>https://www.threads.net/@chaco_mmm_room</title></head><script>window.opener = null; location.replace("https:\/\/www.threads.net\/@chaco_mmm_room")</script>4.690000
% curl -gsSIw'%{time_total}\n' -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/DzIiCFp7Ti
HTTP/2 200
...
content-length: 272
...
x-response-time: 4524
...
4.660211
The delay is not there for nyti.ms (anymore), but once you use the Safari UA it's handled as a 200 response:
% curl -gsSIw'foo %{time_total}\n' -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/4fs609qwWt https://t.co/iigzas6QBx | grep '^\(HTTP/\)\|\(location: \)\|\(foo \)'
HTTP/2 200
foo 0.126043
HTTP/2 200
foo 0.037255
It really does seem that twitter is adding a 4.5s delay to some sites from web browsers. Could be malicious, could be rot...
$ time curl -s -b cookies.txt -c cookies.txt -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/DzIiCFp7Ti
[t.co meta refresh page src]
real 0m4.635s
user 0m0.004s
sys 0m0.008s
$ time curl -b cookies.txt -c cookies.txt -A "wget/1.23" -e ";auto" -L https://t.co/DzIiCFp7Ti
curl: (7) Failed to connect to www.threads.net port 443: Connection refused
real 0m4.635s
user 0m0.011s
sys 0m0.005s
$ time curl -b cookies.txt -c cookies.txt -e ";auto" -L https://t.co/DzIiCFp7Ti
curl: (7) Failed to connect to www.threads.net port 443: Connection refused
real 0m0.129s
user 0m0.000s
sys 0m0.013s
The failed-to-connect errors are threads.net likely blocking those user agents, but the timing is still there, which is different from the first UA attempt.
> On Tuesday afternoon, hours after this story was first published, X began reversing the throttling on some of the sites, dropping the delay times back to zero. It was unknown if all the throttled websites had normal service restored.
https://archive.is/2023.08.15-210250/https://www.washingtonp...
I would say that it contains chiefly a political part and a cultural part. Some of the pieces in the political part are apparently well done, informative and interesting, while others are determined to just blurt out partisan views - arguments not included.
Incidentally: such "polarized literature" seems abundant in today's "globalized" world (where, owing to "strong differences", the sieve of acceptability can have very large gaps). It also occasionally appears here in posts on HN (one of the latest instances just a few browsed pages ago): the occasional post that just states "A is B" with no justification, no foundation for the statement, without realizing that were we interested in personal opinions there are ten billion sources available. And if we had to check them all, unranked, an image like Borges' La Biblioteca de Babel could appear: any opinion could be found at some point in the library.
Yes, I have (now) noticed that a few contributors (some very prolific) in Fair Observer are substantially propaganda writers.
But the cultural part, https://www.fairobserver.com/category/culture/ , seems to more consistently contain quality material, with some articles potentially especially interesting. In this area, I have probably seen more bias on some mainstream news outlets.
I think the revolution underway in journalism today includes this magazine: the model of The Economist, with a strong, prestigious and selective editorial board (hence the traditional anonymity of its contributors), is now the exception, so today you do not read the Magazine but the Journalist. The Magazine will now often publish articles from just about anyone; the Reader now bears the burden of selecting Journalists and following them.
--
I will write you in a few hours for the repost, thank you.
On the contrary, secret suppression is extremely common. Every social media user has probably been moderated at some point without their knowledge.
Look up a random reddit user. Chances are they have a removed comment in their recent history, e.g. [1].
All comment removals on Reddit are shadow removals. If you use Reddit with any frequency, you'll know that mods almost never go out of their way to notify users about comment removals.
[1] https://www.reveddit.com/y/Sariel007/
archive: https://archive.is/GNudB
I've done this on multiple occasions, e.g.: >>36191005
As I commented above, HN operates through indirect and oblique means. Ultimately it is a social site managed through culture. And the way that this culture is expressed and communicated is largely through various communications --- the site FAQ and guidelines, dang's very, very, very many moderation comments. Searching for his comments with "please" is a good way to find those, though you can simply browse his comment history:
- "please" by dang: <https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...>
- dang's comment history: <https://news.ycombinator.com/threads?id=dang>
Yes, it means that people's feelings get hurt. I started off here (a dozen years ago) feeling somewhat the outsider. I've come to understand and appreciate the site. It's maintained both operation and quality for some sixteen years, which is an amazing run. If you go back through history, say, a decade ago, quality and topicality of both posts and discussions are remarkably stable: <https://news.ycombinator.com/front?day=2013-08-14>.
If you do have further concerns, raise them with dang via email: <hn@ycombinator.com>. He does respond, and he's quite patient; it might take a day or two for a more complex issue, but it will happen.
And yes, it's slow, inefficient, and lossy. But, again as the site's history shows, it mostly just works, and changing that would be a glaring case of Chesterton's Fence: <https://hn.algolia.com/?q=chesterton%27s+fence>.
X has started reversing the throttling on some of the sites, including NYTimes
Discussion on HN (61 comments, 2023-08-16): >>37141478
Twitter post archive: https://archive.is/PW3eG
[0] https://deer-run.com/users/hal/sysadmin/greet_pause.html
Another commenter argued "Increasing cost of attacks is an effective defense strategy."
I argued it is not, and you said adding a delay can cut out bad stuff. Delays are certainly relevant to the main post, but that's not what I was referring to. And I certainly don't argue against using secrets for personal security! Securitizing public discourse, however, is another matter.
Can you elaborate on GreetPause? Was it to prevent a DDOS? I don't understand why bad requests couldn't just be rejected.
[1] >>37130143
https://www.revsys.com/tidbits/greet_pause-a-new-anti-spam-f...
I get several thousand SPAM attempts per day: I estimate that this one technique kills a large fraction of them. And look how old the feature is...
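For anyone wondering what enabling it looks like, a minimal sketch for an m4-based sendmail setup (file locations and the rebuild step vary by distribution) would be something like:
$ cat >> /etc/mail/sendmail.mc <<'EOF'
dnl Wait 5000 ms (5 s) before sending the SMTP greeting banner.
dnl Clients that start talking before the banner (typical spam-bot behavior) get rejected.
FEATURE(`greet_pause', `5000')dnl
EOF
$ make -C /etc/mail    # or rebuild sendmail.cf by hand with m4, depending on the system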
Not threads.net, cURL User-Agent: 224.3 ms
Not threads.net, Firefox User-Agent: 227.4 ms
threads.net, cURL User-Agent: 223.9 ms
threads.net, Firefox User-Agent: 2743 ms
Is Twitter trying to hide this? (They don't add the delay without a browser User-Agent.)
(Full log: https://gist.github.com/sevenc-nanashi/c77d18df6a5f326b0d292...)
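For reference, a comparison like the one above can be reproduced with something along these lines (the t.co links are the NYT and threads.net ones from elsewhere in the thread; the UA strings are just examples):
$ for ua in 'curl/8.1.2' 'Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0'; do
    for url in https://t.co/4fs609qwWt https://t.co/DzIiCFp7Ti; do
      printf '%-10s %-28s ' "${ua%%/*}" "$url"          # label row with UA family and target link
      curl -s -o /dev/null -w '%{time_total}s\n' -A "$ua" "$url"
    done
  done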
A forum should not do things that elbow out trustful people.
That means, don't lie to authors about their actioned content. Forums should show authors the same view that moderators get. If a post has been removed, de-amplified, or otherwise altered in the view for other users, then the forum should indicate that to the post's author.
> How do you think spammers and abusers will exploit those options?
Spammers already get around and exploit all of Reddit's secretive measures. Mods regularly post to r/ModSupport about how users have circumvented bans. Now they're asking forums to require ID [1].
Once shadow moderation exists on a forum, spammers can then create their own popular groups that remove truthful content.
Forums that implement shadow moderation are not belling cats. They sharpen cats' claws.
The fact that some spammers overcome some countermeasures in no way demonstrates that:
- All spammers overcome all countermeasures.
- That spam wouldn't be far worse without those countermeasures.[1]
- That removing such blocks and practices would improve overall site quality.
I've long experience online (going on 40 years), I've designed content moderation systems, served in ops roles on multi-million-member social networks, and done analysis of several extant networks (Google+, Ello, and Hacker News, amongst them), as well as observed what happens, and does and doesn't work, across many others.
Your quest may be well-intentioned, but it's exceedingly poorly conceived.
________________________________
Notes:
1. This is the eternal conflict of preventive measures and demonstrating efficacy. Proving that adverse circumstances would have occurred in the absence of prophylactic action is of necessity proving a counterfactual. Absent some testing regime (and even then) there's little evidence to provide. The fire that didn't happen, the deaths that didn't occur, the thefts that weren't realised, etc. HN could publish information on total submissions and automated rejections. There's the inherent problem as well of classifying submitters. Even long-lived accounts get banned (search: <https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...>). Content moderation isn't a comic-book superhero saga where orientation of the good guys and bad guys is obvious. (Great comment on this: >>26619006.)
Real life is complicated. People are shades of grey, not black or white. They change over time: "Die a hero or live long enough to become a villain." Credentials get co-opted. And for most accounts, courtesy of long-tail distributions, data are exceedingly thin: about half of all HN front-page stories come from accounts with only one submission in the Front Page archive, based on my own analysis of same. They may have a broader submission history, yes, but the same distribution applies there, where many, and almost always most, submissions come from people with painfully thin history on which to judge them. And that's assuming that the tools for doing said judging are developed.
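For what it's worth, that kind of distribution can be sampled from the public Algolia HN API. A rough sketch (assuming jq is installed; it only samples the 1000 most recent front-page stories, so it's an approximation rather than the full archive analysis):
$ curl -s 'https://hn.algolia.com/api/v1/search_by_date?tags=front_page&hitsPerPage=1000' \
    | jq -r '.hits[].author' | sort | uniq -c \
    | awk '{stories += $1; if ($1 == 1) single++}
           END {printf "%.0f%% of sampled front-page stories came from accounts with a single story in the sample\n", 100*single/stories}'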
You asked me for an alternative and I gave one.
You yourself have expressed concern over HN silently re-weighting topics [1].
You don't see transparent moderation as a solution to that?
> The fact that some spammers overcome some countermeasures in no way demonstrates that...
Once a spammer knows the system he can create infinite amounts of content. When a forum keeps mod actions secret, that benefits a handful of people.
We already established that secrecy elbows out trustful people, right? Or, do you dispute that? I've answered many of your questions. Please answer this one of mine.
> That removing such blocks and practices would improve overall site quality.
To clarify my own shade of grey, I do not support shadow moderation. I support transparent-to-the-author content moderation. I also support the legal right for forums to implement shadow moderation.
[1] >>36435312
https://en.wikipedia.org/wiki/Shadow_banning
> Shadow banning, also called stealth banning, hellbanning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.
This part matches shadow banning voting and is basically the same as what I wrote in my previous comment, just using different words:
> partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user
And this part, which contradicts what you wrote in your last comment:
> More recently, the term has come to apply to alternative measures, particularly visibility measures like delisting and downranking.
current 'unique porn domains' = 53,644
current adware, malware, tracking, etc. = 210,425 unique domains