Go to Twitter and click on a link to any URL on "NYTimes.com" or "threads.net" and you'll see a roughly 5-second delay before t.co forwards you to the right address.
Twitter won't ban domains they don't like but will waste your time if you visit them.
I've been tracking the NYT delay ever since it was added (8/4, roughly noon Pacific time), and the delay is so consistent it's obviously deliberate.
What happened to net neutrality? Could it apply to this case?
Try submitting a URL from the following domains, and it will be automatically flagged (but you can't see it's flagged unless you log out):
- archive.is
- watcher.guru
- stacker.news
- zerohedge.com
- freebeacon.com
- thefederalist.com
- breitbart.com

Hacker News isn't an open-ended political site for people to post weird propaganda.
Edit: about 67k sites are banned on HN. Here's a random selection of 10 of them:
vodlockertv.com
biggboss.org
infoocode.com
newyorkpersonalinjuryattorneyblog.com
moringajuice.wordpress.com
surrogacymumbai.com
maximizedlivingdrlabrecque.com
radio.com
gossipcare.com
tecteem.com

We probably banned it for submissions because we want original sources at the top level.
This is something else - just the ego of one rich guy petulantly satisfying his inner demons.
Is it censorship that the rules of chess say you can't poke someone's queen off the board? We're trying to play a particular game here.
what else could they say that would make you believe them?
you might as well just test it yourself like i did with time wget. it's not like you're going to believe anything anyone writes.
- `time wget https://t.co/4fs609qwWt` -> `0m5.389s`
- `time curl -L https://t.co/4fs609qwWt` -> `0m1.158s`
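For anyone who wants to repeat the measurement, a small helper along these lines may be handy (my own sketch, not from the thread); it wraps curl's `%{time_total}` write-out variable, and the t.co URL in the example is just the one quoted above:

```shell
# time_url URL [UA] -- print curl's total transfer time for URL, following
# redirects, optionally with a custom User-Agent. Output is seconds as a
# decimal, so runs with different UAs can be compared directly.
time_url() {
  url=$1
  ua=${2:-curl/8}
  curl -sS -o /dev/null -L -A "$ua" -w '%{time_total}\n' "$url"
}

# e.g. (network required):
#   time_url https://t.co/4fs609qwWt
#   time_url https://t.co/4fs609qwWt "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0"
```

Running each variant a few times and eyeballing the spread avoids reading too much into a single sample.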
Perhaps it's one of those things that are hard to define. [1] But that doesn't mean clear cases don't exist.
> Is it censorship that the rules of chess say you can't poke someone's queen off the board? We're trying to play a particular game here.
No, but it is clearly political censorship if you only apply the unwritten and secret "rules" of the game to a particular political faction. Also, banning entire domain names is definitely heavy-handed.
I don't think that makes sense. The supposed spammers can just try looking up whether their submissions show up or not when not logged in.
This is precisely why I did believe OP. This is Elon Musk we're talking about.
A five-second delay may be enough to cause a measurable increase in the "stickiness" of Twitter if some people are only willing to wait <5 seconds before clicking or scrolling onwards to something else.
Then they spend more time generating ad-revenue for Twitter than if they had gone off to the New York Times or something and started browsing over there.
I thought it was about increasing short-term revenue.
> Selective downtime, where the troll finds that the website is down (or really slow) quite often. Not all of the time, because that would tip them off. Trolls are impatient by nature, so they eventually find a more reliable forum to troll.
https://ask.metafilter.com/117775/What-was-the-first-website...
I remember some words that succinctly express something I often observe. To paraphrase:
> Left-wing and Right-wing are terms which make a lot of people falsely believe that they disagree with each other.
It is worth trying to find common ground with people “on the other side”.
I also tried 5 NYT links. All had a very consistent 5 second delay through wget.
I could do more, but I don't care to. Everyone knows Elon has gone redpill, so it wouldn't surprise me if he's "owning the libs", but there also could be a dozen other reasons Twitter might do something like this (including plenty that are not nefarious). I just don't care to dig more...
Edit: I suppose I could have given the specific URLs, but I don't know if/how much t.co links leak info, so I'm not keen to do that. But the delay is absolutely on t.co and not the destination sites, at least as far as external users are concerned. It's possible that t.co queries the sites first before redirecting, and if e.g. the NYT is throttling their traffic that's what's delaying things. I don't know how to disambiguate that, but it's definitely a theory worth considering...
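One crude way to probe that theory (an assumption about methodology, not a verdict): curl's `%{time_redirect}` write-out variable reports time spent in redirect steps separately from the total, so if the delay shows up there, it's on the t.co hop rather than the destination fetch.

```shell
# hop_vs_total URL -- follow redirects and report how much time was spent
# in redirect steps vs. the whole transfer.
hop_vs_total() {
  curl -sS -o /dev/null -L \
    -w 'redirect=%{time_redirect}s total=%{time_total}s\n' "$1"
}

# e.g. (network required): hop_vs_total "https://t.co/<short-link>"
```

If `redirect` accounts for nearly all of `total`, the destination site's own behavior is largely ruled out from the client's point of view.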
Then why isn't web.archive.org also banned? [1] And what about things which aren't available from the original source anymore?
[1]: >>37130420
I mostly agree. I argued in an article [1] that it's only censorship if the author of the content is not told about the action taken against the content.
These days though, mods and platforms will generally argue that they're being transparent by telling you that it happens. When it happens is another story altogether that is often not shared.
[1] https://www.removednews.com/p/twitters-throttling-of-what-is...
Incompetence before malice, etc...
In fact, such secrecy benefits spammers. Good-faith users never imagine that platforms would secretly action content. So when you look at overall trends, bots, spammers and trolls are winning while genuine users are being pushed aside.
I argued that secrecy benefits trolls in a blog post, but I don't want to spam links to my posts in the comments.
It’s not secret, because they’ll be provided an answer if they email the mod team.
It’s not free as in open source, because it isn’t available for anyone to download and study in full.
So, since it’s not secret, is it public, or private? Since it’s not published in full but any query of LIMIT 1 is answered, is that open, closed, or other?
Restrictions to publication don’t necessarily equate to secrecy, but the best I’ve got is “available upon request”, which isn’t quite right either. Suggestions welcome.
I can assure you that is not the case with HN when posting archive.is URLs. Proof?
Look at my comment postings : https://news.ycombinator.com/threads?id=archo
Is it possible you have been shadow-banned for poor compliance to the [1]Guidelines & [2]FAQ's?
It's not banned in comments, but it is banned in submissions. @dang (HN's moderator) confirms that here: >>37130177
Even Cory Doctorow made this case in "Como is Infosec" [1].
The only problem with Cory's argument is, he points people to the SC Principles [2]. The SCP contain exceptions for not notifying about "spam, phishing or malware." But anything can be considered spam, and transparency-with-exceptions has always been platforms' position. They've always argued they can secretly remove content when it amounts to "spam." Nobody has challenged them on that point. The reality is, platforms that use secretive moderation lend themselves to spammers.
[1] https://doctorow.medium.com/como-is-infosec-307f87004563
If you use wget, you see that the delay happens during the first hop with t.co
It also happens with threads.net, instagram, facebook, blueskyweb.xyz
>Please submit the original source. If a post reports on something found on another site, submit the latter.
And explained on numerous occasions by dang.
"Never attribute to malice that which is adequately explained by stupidity."
Or even like some junior dev removed an index
That said, dailykos.com seems to be banned. Happy now?
The opposite would be to show the author of the content some indicator that it's been removed, and I would call that transparent or disclosed moderation.
Interestingly, your comment first appeared to me as "* * *" with no author [2]. I wonder if that is some kind of ban.
[1] https://www.youtube.com/watch?v=8e6BIkKBZpg
[2] https://i.imgur.com/oGnXc6W.png
Edit: I know you commented again, but it's got that "* * *" thing again:
"This domain is not allowed on HN" as an error message upon submission.
Leaning towards there's something else going on deep in the DNS/ad servers/cdn/who knows. Not the first I've seen/heard of resolving delays with t.co... maybe it's even just something with legacy non-SSL links being redirected etc
Tell me this: does Twitter have some kind of "play nice" code that slows down inbound click-throughs to a site so it doesn't DDoS other sites? I can easily imagine a scenario where anti-DDoS code would allow small sites to pass through quickly, yet sites under heavy "click through" load are being slightly throttled.
Not exactly cherry-picked, these were from things I submitted myself and noticed that were shadow flagged.
> That said, dailykos.com seems to be banned. Happy now?
No, I'd be happy when archive.is, Federalist and the rest of the non-spammy ones are unbanned. (Also, even if "balanced" censorship was the desired goal, having a single unreliable left-wing source banned vs a ton of right-wing ones doesn't really achieve that.)
although it seems unlikely that it just happens to be the NYT.
Definitely not random, in any case.
> Also, even if "balanced" censorship was the desired goal,
Nobody claimed that. You merely stated that "I don't see a single left-wing new source in there." and I offered a counter-point.
> having one left-wing source vs a ton of right-wing one doesn't achieve that
I didn't do an exhaustive search for "left-wing domains" that are banned to present you a complete list, this was attempt 1 of 1.
Following your model, I could claim that 100% of left-wing domains are banned, but I won't.
Or did you mean failing to resolve some internal service's hostname?
Anything that's exclusively on a facebook page may as well not exist to me.
Because web.archive.org is generally used for...
... things which aren't available from the original source anymore.
While archive.is is generally used to bypass paywalls. These 2 websites have 2 very distinct missions and use-cases.
But, that said, I'm more interested in the discussions about verification, neutrality, and the reasons that people have for still clinging on for grim death that, in a few hours, will likely be pushed down onto a second page by that huge comment thread currently in the middle of this page and above them.
You don't see this with curl/wget because they use user-agent sniffing. If they don't think you're a browser, they _will_ give you a Location header. To see it, capture a request in Firefox developer tools, right-click on the request, and copy as cURL. (You may need to remove the Accept-Encoding header and add -i to see the headers.)
Plenty of both left- and right-wing sites are banned and/or downweighted on HN. When a site is primarily about political battle, we either ban it or downweight it. Which of the two we choose depends on how likely the site is to produce the occasional interesting article (in HN's sense of the word "interesting"). That's why The Federalist and World Workers Daily (or whatever it's called) are banned, while National Review and Jacobin are merely downweighted. Both the Guardian and Daily Beast are downweighted, btw, as are most major media sites.
If you or anyone thinks that HN moderation is unfairly ideologically biased, I'm open to the critique, but you guys need to first look at the site as it actually is, and not just look at your own pre-existing perceptions. Every data point becomes a Convincing Proof when you do the latter.
People think that when their team gets moderated, the mods are OMG obviously on the other side. The Other Side feels exactly the same way. This "they're against me" perception is the most reliable phenomenon I've observed on HN. Leftists feel it, rightists feel it, Go programmers feel it, even Rust programmers feel it. Literally the very-most-popular topic on HN at any moment is perceived by someone as Viciously Suppressed because of this perception. Stop and think about that—it's kind of amazing. Someone should write a PhD thesis.
It's basically HN, but you can earn small tips for submissions and comments.
curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/117.0" -I "https://t.co/4fs609qwWt"
x-response-time: 4521

For example, I've linked to my work, but it never occurred to me to use "Show HN".
Maybe this is no big deal? Or perhaps for new signups, it would be good to “soft force” them to read the FAQ?
Re the 'delay' setting see https://news.ycombinator.com/newsfaq.html.
I haven't dug into the logs, but most probably we saw that https://news.ycombinator.com/submitted?id=thebottomline was spamming HN and banned the sites that they were spamming.
Edit: if you (i.e. anyone) click on those links and don't see anything, it's because we killed the posts. You can turn on 'showdead' in your profile to see killed posts. (This is in the FAQ: https://news.ycombinator.com/newsfaq.html.) Just please don't forget that you turned it on, because it's basically signing up to see the worst that the internet has to offer, and sometimes people forget that they turned it on and then email us complaining about what they see on HN.
You're dang right, trying to play a particular [rigged] game here.
Of the 67k sites banned on HN I would guess that fewer than 0.1% are "news sources", left- or right- or any wing. Why would you expect them to show up in a random sample of 10?
* which it is! I've unkilled >>1236054 for the occasion.
The link you clicked in the NYT bio is not a t.co link - I assume you noticed that but still are using it as counter-proof?
I agree that publishing case (1) causes harm (spammers will just use a different domain if they know you've blocked theirs). But case (2) is rather different. I don't think the same justification for lack of transparency exists in this case. And I think shadow-banning the submission in case (2) is not very user-friendly. It would be better to just display an error, e.g. "submissions from this site are blocked because we do not believe it is suitable for HN" (or whatever). A new user might post stuff like (2) out of misunderstanding what the site is about rather than malevolence, so better to directly educate them than potentially leave them ignorant. Also, while Breitbart is rather obviously garbage, since we don't know everything in category (2) on the list, maybe there are some sites on it whose suitability is more debatable or mixed, and whose inappropriateness may be less obvious to someone than Breitbart's (hopefully) is.
Content curation is necessary, but shadow moderation is not helping. When a forum removes visible consequences, it does not prepare its users to learn from their mistakes.
I'll admit, I find HN to be more transparently moderated than Reddit and Twitter, but let's not pretend people have stopped trying to game the system. The more secret the rules (and how they are applied), the more a system serves a handful of people who have learned the secret tricks.
Meanwhile, regular users who are not platform experts trust these systems to be transparent. Trustful users spend more time innovating elsewhere, and they are all disrupted by unexpected secretive tricks.
Even if it's deliberate, I don't see how people can complain. Google has outright blocked Breitbart for years. They prevent results from that domain from appearing at all unless you specifically force it with site: and apparently HN does the same. Politically motivated censorship and restricting "reach" is just how Silicon Valley rolls. Pre-Musk Twitter did freeze the New York Post's account and many other much worse things. It'd be a shame for Musk to be doing this deliberately, even though it seems unlikely. But that's the problem with creating a culture where that sort of behavior is tolerated, isn't it? One day it might be turned around on you.
Does the value added by sources like the NYT outweigh the negatives of being occasionally biased or outright wrong? Yes.
how is that? i can understand it not being useful, but how would it help spammers?
Secret suppression is extremely common [1].
Many of today's content moderators say exceptions for shadowbans are needed [2]. They think lying to users promotes reality. That's bologna.
[1] https://www.removednews.com/p/hate-online-censorship-its-way...
attempting to penetrate the site requires ALL the tools in the toolbox.
If the redirects were server-side (setting the Location header), a blank referrer remains blank. Client-side redirects will set the referrer value.
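That difference can be checked mechanically: a server-side redirect announces itself with a Location header, while a client-side one leaves the status alone and buries the hop in HTML or JS. A toy classifier over a captured header block (my own helper, nothing t.co-specific):

```shell
# redirect_kind -- read response headers on stdin; print "server" if a
# Location header is present (server-side redirect), else "client".
redirect_kind() {
  if grep -qi '^location:'; then
    echo server
  else
    echo client
  fi
}

# e.g. (network required): curl -sI "https://t.co/<short-link>" | redirect_kind
```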
From Twitter’s POV, there’s value in more fully conveying how much traffic they send to sites, even if it minorly inconveniences users.
No buzzwords there, just suspicion there's something else underlying with various technologies that are in play even on a 'simple' link click
Guesses it's crypto bullshit
goes to website
Yep, exactly as expected. Karma alone can mess with incentives, I cannot imagine that adding monetary incentive does anything but make it worse. Also crypto has the reverse-midas-touch from everything I've experienced first-hand or read so adding that into the mix is just another black mark.
If you're going to censor someone, you owe it to them to be honest about what you're doing to them.
i can't see how shadowbanning makes things worse for good-faith users. and evidently it does work against spammers here on HN (though we don't know if it is the shadow or the banning that makes it effective, but i'll believe dang when he says that it does help)
Out of curiosity, what's the rationale for blocking archive.is? Legal reasons I assume?
Since when have moderation actions and relevant data been made available to the lay public here? We cannot look at the site as it actually is. We either have to trust you or pound sand.
>Stop and think about that—it's kind of amazing. Someone should write a PhD thesis about it.
Just because (you think) everyone feels persecuted doesn't mean you're doing a good job keeping things level. It's a common joke to make, but it's just a joke. Similarly, if both a rampant Nazi and a fierce tankie hate you, that doesn't make you a bastion of democracy. "Fairness" doesn't mean pissing off everyone equally, and that is neither a necessary nor a sufficient condition.
These are just minor notes, don't take them too seriously
- `time curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/81.0" -L https://t.co/4fs609qwWt` -> 4.730 total
- `time curl -L https://t.co/4fs609qwWt` -> 1.313 total
Same request, the only difference is user-agent.
nitter.net was historically a little less reliable for me due to rate limiting, which is why I initially switched. They worked around the rate limiting issue now, so that may no longer be the case.
It's about whose messages are sidelined, not who gets discouraged.
With shadow removals, good-faith users' content is elbowed out without their knowledge. Since they don't know about it, they don't adjust behavior and do not bring their comments elsewhere.
Over 50% of Reddit users have had content removed that they don't know about. Just look at what people say when they find out [1].
> and evidently it does work against spammers here on HN
It doesn't. It benefits people who know how to work the system. The more secret it is, the more special knowledge you need.
Archive.is shouldn't ever need to be the primary site. Post a link to the original and then a comment to the archive site if there's the possibility of take down or issues with paywalls.
It is likely that people were using archive.is to avoid posting the original domain, masking the content that it presented.
The only "values" that matter are the personal whims of whoever happens to own Twitter, or Google or Facebook.
Yeah, something more like that where the internal service is somehow 'sharded' due to some overly complicated distributed database nonsense, and there's a DNS lookup that is failing. Of course that'd mean the DNS lookup wasn't cached, so you're taking that normal latency on every single hit, which would be terrible architecture. The curl-vs-wget performance isn't explained by that though (although that's a bit weird in and of itself, and might suggest that they had to allow that for some internal tool that they didn't want to punish).
> glibc defaults to 5 sec,
The timeout being close to 5 seconds is what made me wonder about it. It's just off, though.
(Even when doing the RightThing(TM) would probably be easier...)
And, BTW, I occasionally get blocked by the mechanisms here, even though not doing anything bad, but understand that there is a trade-off.
I once had the domain 'moronsinahurry' registered, though not with this group in mind...
Yes. And it's really not a close question.
"Regular users" don't have to be platform experts and learn tricks and stuff. They just post normal links and comments and never run into moderation at all.
And I'm feeling HN's position, even though I occasionally trip some of the mechanisms here.
The one that I think makes the most clear sense is "censorship" by a state power. But you must be thinking of something different, because HN is not a state power.
man curl
-b, --cookie <data|filename>
(HTTP) Pass the data to the HTTP server in the Cookie header. It is supposedly the data previously received from the server in a "Set-Cookie:" line.
---

Add that option to your curl tests.
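Worth noting for anyone copying the commands below: `-b` takes an argument (cookie data or a filename), so in an invocation like `-b -A "curl/8.2.1"`, curl treats `-A` as the cookie file. A sketch (my own helper, with assumptions about what's being measured) of carrying cookies across the redirect chain with a proper jar:

```shell
# fetch_with_jar URL -- follow redirects while reading and writing cookies
# through a temporary cookie jar (-b reads it, -c updates it), and print
# curl's total transfer time for the whole chain.
fetch_with_jar() {
  jar=$(mktemp)
  curl -sS -o /dev/null -L -b "$jar" -c "$jar" \
    -w '%{time_total}\n' "$1"
  rm -f "$jar"
}

# e.g. (network required): fetch_with_jar https://t.co/4fs609qwWt
```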
---
$ time curl -s -b -A "curl/8.2.1" -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.245s
user 0m0.087s
sys 0m0.034s
---
$ time curl -s -b -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.265s
user 0m0.103s
sys 0m0.023s
---
$ time curl -s -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.254s
user 0m0.100s
sys 0m0.018
---

For example, a recent submission (of mine):
"Luis Buñuel: The Master of Film Surrealism"
it had no discussion space because (I guess) it comes from fairobserver.com. Now, I understand that fairobserver.com may have been a hive of dubious publishing historically, but it makes little sense that we cannot discuss Buñuel...
Maybe a rough discriminator (function approximator, Bayesian etc.) could try and decide (based at least on the title) whether a submission from "weak editorial board" sites seems to be material to allow posts or not.
NYT may have more reach and definitely isn't neutral, but it's a far cry from the nonsense that Breitbart publishes. Breitbart is nakedly partisan.
:D
> Someone should write a PhD thesis about it
In a perspective it could be related to Multi-Agent Systems (maybe with reference also to Minsky and H. Simon), as a consequence of the narrow view of the single agent, and/or an intrinsic fault of resource optimization.
Unless HN is suddenly the government, what you've mislabeled is moderation, not censorship. Calling it censorship just exaggerates your opinion and makes you look unhinged. It's a private website, not national news.
It's like a microcosm of capitalism. The users don't realize they hold all of the power, I guess.
Incorporated in Delaware hardly ever affects anything except corporate law.
I really wish the term hadn't been polluted this way.
>>498910
That grew fairly rapidly, it was at 38,719 by 30 Dec 2012:
>>4984095 (a random 50 are listed).
I suspect that overwhelmingly the list continues to reflect the characteristics of its early incarnations.
I really like this take on moderation:
"The essential truth of every social network is that the product is content moderation, and everyone hates the people who decide how content moderation works. Content moderation is what Twitter makes — it is the thing that defines the user experience."
From Nilay Patel in https://www.theverge.com/2022/10/28/23428132/elon-musk-twitt...
Re shadowbanning (i.e. banning a user without telling them), see the past explanations at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... and let me know if you still have questions. The short version is that when an account has an established history, we tell them we're banning them and why. We only shadowban when it's a spammer or a new account that we have reason to guess is a serial abuser.
The parts that don't work especially well, most particularly discussion of difficult-but-important topics (in my view) ... have also been acknowledged by its creator pg (Paul Graham) and mods (publicly, dang, though there are a few others).
In general: if you submit a story and it doesn't go well, drop a note to the moderators: hn@ycombinator.com. They typically reply within a few hours, perhaps a day if things are busy or the issue is complex.
You can verify that a submission did or didn't go through by checking on the link from an unauthenticated (logged-out) session.
That domain is a borderline case. Sometimes the leopard really changes its spots, i.e. a site goes from offtopic or spam to one that at least occasionally produces good-for-HN articles. In such cases we simply unban it. Other times, the general content is still so bad for HN that we have to rely on users to vouch for the occasional good submission, or to email us and get us to restore it. I can't quite tell where fairobserver.com is on this spectrum because the most recent submission (yours) is good, the previous one (from 7 months ago) is borderline, and before that it was definitely not good. But I've unbanned it now and moved it into the downweighted category, i.e. one notch less penalized.
1. Open incognito window in Chrome
2. Visit https://t.co/4fs609qwWt -> 5s delay
3. Open a second tab in the same window -> no delay
4. Close window, start a new incognito session
5. Visit https://t.co/4fs609qwWt -> 5s delay returns

We don't publish a moderation log for reasons I've explained over the years - if you or anyone wants to know more, see the past explanations at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... and let me know if you still have questions.
Not publishing a mod log doesn't mean that we don't want to be transparent, it means that there's a tradeoff between transparency and other concerns. Our resolution of the tradeoff is to answer questions when we get asked. That's not absolute transparency but it's not nothing. Sometimes people say "well but why should we trust that", but they would say that about a moderation log as well.
Re your second paragraph: I agree! and I don't think I've claimed otherwise. In fact, the lazy centrist argument is a pet peeve (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
It's true that the way I post about these things ("both sides hate us") gets mistaken for the obvious bad argument ("therefore we must be in the happy middle", or as Scott Thompson put it years ago, "we're the porridge that Goldilocks ate!"), but that's because the actual argument is harder to lay out and I'm not sure that anybody cares.
Your humble anonymous tipster would appreciate if you do a little legwork.
% curl -gsSIw'foo %{time_total}\n' -- https://t.co/4fs609qwWt https://t.co/iigzas6QBx | grep '^\(HTTP/\)\|\(location: \)\|\(foo \)'
HTTP/2 301
location: https://nyti.ms/453cLzc
foo 0.119295
HTTP/2 301
location: https://www.gov.uk/government/news/uk-acknowledges-acts-of-genocide-committed-by-daesh-against-yazidis
foo 0.037376

It seems we've become a society that rewards bad practices with attention, which is all any company on the web is trying to get: your attention.
Here's a simpler test I think replicates what I am indicating in GP comment, with regards to cookie handling:
Not passing a cookie to the next stage; pure GET request:
$ time curl -s -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > nocookie.html
real 0m4.916s
user 0m0.016s
sys 0m0.018s
Using `-b` to pass the cookies _(same command as above, just adding `-b`)_:

$ time curl -s -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > withcookie.html
real 0m1.995s
user 0m0.083s
sys 0m0.026s
Look at the differences in the resulting files for 'with' and 'no' cookie. One redirect works in a timely manner. The other takes the ~4-5 seconds to redirect.

This is a big problem with trying to explain these things - people mean very different things by the same words, and it leads to misunderstanding.
Re archive.is - see >>37130177
As for "why archive.org and not archive.is" - that's a bit of a borderline call, but gouggoug pointed out some of it at >>37130890 . The set of articles which (a) are no longer on the web, (b) are not on archive.org, but (c) are on archive.is, isn't that big. Paywall workarounds are a different thing, because the original URLs are still on the web (albeit paywalled). For those, we want the original URL at the top level, because it's important for the domain to appear beside the title.
Otherwise, HN's rule is to "submit the original source": <https://news.ycombinator.com/newsguidelines.html>
I suppose that might be clarified as "most original or canonical", but Because Reasons HN's guidelines are written loosely and interpreted according to HN's Prime Directive: "anything that gratifies one's intellectual curiosity" (>>508153).
> Yuri Orlov: [Narrating] Every faction in Africa calls themselves by these noble names - Liberation this, Patriotic that, Democratic Republic of something-or-other... I guess they can't own up to what they usually are: the Federation of Worse Oppressors Than the Last Bunch of Oppressors. Often, the most barbaric atrocities occur when both combatants proclaim themselves Freedom Fighters.
On the other hand, no single political or ideological position has a monopoly on intellectual curiosity either—so by the same principle, HN can't be moderated for political or ideological position.
It's tricky because working this way conflicts with how everyone's mind works. When people see a politically charged post X that they don't like, or when they see a politically charged post Y that they do like, but which we've moderated, it's basically irresistible to jump to the conclusion "the mods are biased". This is because what we see in the first place is conditioned by our preferences - we're more likely to notice and to put weight on things we dislike (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). People with opposite preferences notice opposite data points and therefore "see" opposite biases. It's the same mechanism either way.
In reality, we're just trying to solve an optimization problem: how can you operate a public internet forum to maximize intellectual curiosity? That's basically it. It's not so easy to solve though.
[Edit:] I'm still seeing it with threads.net:
curl -v -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/DzIiCFp7Ti

Moderation is the removal of content that objectively doesn't belong in context, e.g. spam
Obviously that moderation definition is nuanced bc some could argue that Marxist ideas don’t belong in the context of a site with a foundation in startups. And indeed Marxist ideas often get flagged here
You have to open in the app.
I suppose a sufficiently motivated spammer might incorporate that as a submission workflow check.
% curl -gsSIw'foo %{time_total}\n' https://t.co/DzIiCFp7Ti | grep '^\(HTTP/\)\|\(location: \)\|\(foo \)'
HTTP/2 301
location: https://www.threads.net/@chaco_mmm_room
foo 0.123137
Doesn't matter if I do a HTTP/2 HEAD or GET:

% curl -gsSw'%{time_total}\n' https://t.co/DzIiCFp7Ti
0.121503
HTTP/1.1 also shows no delay:

% curl -gsSw'%{time_total}\n' --http1.1 https://t.co/DzIiCFp7Ti
0.120044
I chalk this up to rot at X/twitter that is being fixed now that it was noticed.

That's because you're not spoofing the User-Agent to be a browser rather than curl.
So far as I'm aware, no, and there are comments from dang and pg going back through the site history which argue strongly against distinguishing groups of profiles in any way.
The one possible exception is that YC founders' handles appeared orange to one another at one point in time (pg discusses this in January 2013: >>5025168). The feature was disabled for performance reasons.
Dang mentions the feature still being active as of a year ago: >>31727636
I seem to recall a pg or dang discussion where showing this publicly created a social tension on the site, as in, one set of people distinguished from another.
dang discusses the (general lack of) secret superpowers here: >>22767204, which reiterates what's in the FAQ:
HN gives three features to YC: job ads (see above) and startup launches get placed on the front page, and YC founder names are displayed to other YC alumni in orange.
<https://news.ycombinator.com/newsfaq.html>
Top-100 karma lands you on the leaderboard: <https://news.ycombinator.com/leaders>. That's currently 41,815+ karma. There are also no special privileges here other than occasionally being contacted by someone. (I've had inquiries about dealing with the head-trip of being on the leaderboard, and a couple of requests to boost submissions, which I forward to the moderation team).
% curl -vgsSIw'> %{time_total}\n' -b -A "curl/8.2.1" https://t.co/DzIiCFp7Ti 2>&1 | grep '^\(* WARNING: \)\|\(Could not resolve host: \)\|>'
* WARNING: failed to open cookie file "-A"
* Could not resolve host: curl
curl: (6) Could not resolve host: curl
* WARNING: failed to open cookie file "-A"
> HEAD /DzIiCFp7Ti HTTP/2
> Host: t.co
> User-Agent: curl/8.1.2
> Accept: */*
>
> 0.013309
> 0.112494

0.02% of 10,000 is 2 - pretty small
0.02% of 1,000,000,000 is 200,000 ... kinda big :)
> Moderation is the normal business activity of ensuring that your customers like using your product. If a customer doesn’t want to receive harassing messages, or to be exposed to disinformation, then a business can provide them the service of a harassment-and-disinformation-free platform.
> Censorship is the abnormal activity of ensuring that people in power approve of the information on your platform, regardless of what your customers want. If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information, that’s censorship.
Censorship is somewhat subjective: something that you might find offensive and want moderated might not be considered so by others. Therefore, Alexander argues that the simplest mechanism that turns censorship into moderation is a switch that, when enabled, lets you see the banned content, which is exactly what HN does. He further argues that there are kinds of censorship that aren't necessarily bad by this definition: disallowing pedophiles from sharing child porn with each other is censorship, but it's something that we should still do.
[1] https://astralcodexten.substack.com/p/moderation-is-differen...
% curl -vgsSw'< HTTP/size %{size_download}\n' https://t.co/DzIiCFp7Ti 2>&1 | grep '^< \(HTTP/\)\|\(location: \)'
< HTTP/2 301
< location: https://www.threads.net/@chaco_mmm_room
< HTTP/size 0

https://blog.redplanetlabs.com/2023/08/15/how-we-reduced-the...
<head><noscript><META http-equiv="refresh" content="0;URL=https://www.threads.net/@chaco_mmm_room"></noscript><title>https://www.threads.net/@chaco_mmm_room</title></head><script>window.opener = null; location.replace("https:\/\/www.threads.net\/@chaco_mmm_room")</script>

I'd run across an instance of this when the Diaspora* pod I was on (the original public node, as it happens) ceased operations. I found myself wanting to archive my own posts, and was caught in something of a dilemma:
- The Internet Archive's Wayback Machine has a highly-scriptable method for submitting sites, in the form of a URL (see below). Once you have a list of pages you want to archive, you can chunk through those using your scripting tool of choice (for me, bash, and curl or wget typically). But it doesn't capture the comments on Diaspora* discussions.... E.g., <https://web.archive.org/web/20220111031247/https://joindiasp...>
- Archive.Today does not have a mass-submission tool, and somewhat aggressively imposes CAPTCHAs at times. So the remaining option is manual submissions, though those can be run off a pre-generated list of URLs which somewhat streamlines the process. And it does capture the comments. E.g., <https://archive.is/9t61g>
So, if you are looking to archive material, Archive Today is useful, if somewhat tedious at bulk.
(Which is probably why the Internet Archive is the far more comprehensive Web archive.)
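The Wayback Machine side of that workflow can be sketched as a small shell loop. This is a sketch, not anyone's actual script: the `https://web.archive.org/save/<url>` submission form is real, but `save_url` and the inline URL list are hypothetical, and it dry-runs (prints the commands) rather than hammering the archive.

```shell
# Sketch: bulk-submit a list of URLs to the Wayback Machine.
# save_url builds the Save Page Now submission URL for a given page.
save_url() {
  printf 'https://web.archive.org/save/%s\n' "$1"
}

# Dry run: print the curl commands that would be issued. Drop the leading
# "echo" to actually submit, and add a sleep between requests to be polite.
while read -r u; do
  echo curl -s "$(save_url "$u")"
done <<'EOF'
https://joindiaspora.com/posts/1
https://joindiaspora.com/posts/2
EOF
```

Feeding it a pre-generated list of your own post URLs is then just a matter of swapping the here-doc for `< urls.txt`.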
% curl -gsSw'%{time_total}\n' -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/DzIiCFp7Ti
<head><noscript><META http-equiv="refresh" content="0;URL=https://www.threads.net/@chaco_mmm_room"></noscript><title>https://www.threads.net/@chaco_mmm_room</title></head><script>window.opener = null; location.replace("https:\/\/www.threads.net\/@chaco_mmm_room")</script>
4.690000
% curl -gsSIw'%{time_total}\n' -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/DzIiCFp7Ti
HTTP/2 200
...
content-length: 272
...
x-response-time: 4524
...
4.660211
The delay is not there for nyti.ms (anymore), but with the Safari UA it's handled as a 200 response:
% curl -gsSIw'foo %{time_total}\n' -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/4fs609qwWt https://t.co/iigzas6QBx | grep '^\(HTTP/\)\|\(location: \)\|\(foo \)'
HTTP/2 200
foo 0.126043
HTTP/2 200
foo 0.037255
It really does seem that twitter is adding a 4.5s delay to some sites from web browsers. Could be malicious, could be rot...

As that rich guy happens to be the CEO, how is this not the prime example of "prioritising internal politics above what end users want"?
$ time curl -s -b cookies.txt -c cookies.txt -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/DzIiCFp7Ti
[t.co meta refresh page src]
real 0m4.635s
user 0m0.004s
sys 0m0.008s
$ time curl -b cookies.txt -c cookies.txt -A "wget/1.23" -e ";auto" -L https://t.co/DzIiCFp7Ti
curl: (7) Failed to connect to www.threads.net port 443: Connection refused
real 0m4.635s
user 0m0.011s
sys 0m0.005s
$ time curl -b cookies.txt -c cookies.txt -e ";auto" -L https://t.co/DzIiCFp7Ti
curl: (7) Failed to connect to www.threads.net port 443: Connection refused
real 0m0.129s
user 0m0.000s
sys 0m0.013s
The "failed to connect" errors are likely threads.net blocking those user agents, but the delay timing is still there for the wget UA, unlike the attempt with no custom UA.

Operators of public sites should NOT have to pay that tax. So you at best are not fully aware of the actual cost, IMHO.
Congrats to HN for striking a reasonable pragmatic balance.
*I had some of the first live (non-academic) Internet connectivity in the UK, and the very very first packets were hacking attempts...
> On Tuesday afternoon, hours after this story was first published, X began reversing the throttling on some of the sites, dropping the delay times back to zero. It was unknown if all the throttled websites had normal service restored.
https://archive.is/2023.08.15-210250/https://www.washingtonp...
Blame the trolls that prevent us from having nice things.
I have a very different way of looking at this. It's not us that gives attention. It is them that take it via exploiting our evolved inflexible cognitive systems for attention/reward/desire/anger/lust. We are moths to a flame. The moth's free will isn't to blame for its inability to avoid it. Our cognitive systems are fixed, we can't just turn them off. If a sufficiently powerful dopamine-inducing technology is made, you can't just "opt out". It is not as simple as that. Any individual variation in the ability to opt out likely comes down to variation in genetics or other extraneous factors not inside one's immediate control.
This is where regulation needs to come in. Once you accept the reality that opting out is a comforting yet false illusion, you can then do something about it.
I would say that it contains chiefly a political part and a cultural part. Some of the pieces in the political part can be apparently well done, informative and interesting, while others are intent on just blurting out partisan views - arguments not included.
Incidentally: such "polarized literature" seems abundant in today's "globalized" world (where, owing to "strong differences", the sieve of acceptability can have very large gaps). It is also occasionally found here in posts on HN (one of the latest instances just a few browsed pages ago): the occasional post that just states "A is B" with no justification, no foundation for the statement, without realizing that were we interested in personal opinions there are ten billion sources available. And if we had to check them, unranked in filing, an image like Borges' La Biblioteca de Babel could appear: any opinion could be found in some point of the library.
Yes, I have (now) noticed a few contributors (some very prolific) in the Fair Observer are substantially propaganda writers.
But the cultural part, https://www.fairobserver.com/category/culture/ , seems to more consistently contain quality material, with some articles potentially especially interesting. In this area, I have probably seen more bias on some mainstream news outlets.
I think the revolution underway in journalism today includes this magazine: the model of The Economist, with its strong, prestigious and selective editorial board (hence its traditional anonymity of contributors), is now the exception, so you no longer read the Magazine but the Journalist. The Magazine today will often publish articles from just about anyone; the Reader today carries the burden of selecting Journalists and following them.
--
I will write you in a few hours for the repost, thank you.
> You can verify that a submission did or didn't go through by checking on the link from an unauthenticated (logged-out) session.
Trustful users do not think to do this, and it would not be necessary if the system did not keep the mod action secret.
On the contrary, secret suppression is extremely common. Every social media user has probably been moderated at some point without their knowledge.
Look up a random reddit user. Chances are they have a removed comment in their recent history, e.g. [1].
All comment removals on Reddit are shadow removals. If you use Reddit with any frequency, you'll know that mods almost never go out of their way to notify users about comment removals.
[1] https://www.reveddit.com/y/Sariel007/
archive: https://archive.is/GNudB
How can one see the site "as it actually is" when the decisions are kept secret from submitters?
> People think that when their team gets moderated, the mods are OMG obviously on the other side. The Other Side feels exactly the same way.
This will always be a thing. But it's also true that society is more divided now than it was 20 years ago. We find ourselves unable to communicate across ideological divides and we resort to shouting or in some cases violence. Some effort must be made to improve communication, and transparency for content authors is a minimal step towards that.
No research has been done about whether shadow moderation is good or bad for discourse. It was simply adopted by the entire internet because it's perceived as "easier." Indeed, for platforms and advertisers, it certainly is an easier way to control messaging. It fools good-faith users all the time. I've shared examples of that elsewhere in this thread.
(I'll occasionally note an egregiously-behaving account that doesn't seem to have been already banned.)
Those who have been advised to do so, through the Guidelines, FAQ, comments, or moderator notes, do, to their advantage.
(I'd had a submission shadowbanned as it came from the notoriously flameworthy site LinkedIn a month or few back. I noticed this, emailed the mods, and got that post un-banned. Just to note that the process is in place, and does work.)
I've done this on multiple occasions, e.g.: >>36191005
As I commented above, HN operates through indirect and oblique means. Ultimately it is a social site managed through culture. And the way that this culture is expressed and communicated is largely through various communications --- the site FAQ and guidelines, and dang's very, very, very many moderation comments. Searching for his comments with "please" is a good way to find those, though you can simply browse his comment history:
- "please" by dang: <https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...>
- dang's comment history: <https://news.ycombinator.com/threads?id=dang>
Yes, it means that people's feelings get hurt. I started off here (a dozen years ago) feeling somewhat the outsider. I've come to understand and appreciate the site. It's maintained both operation and quality for some sixteen years, which is an amazing run. If you go back through history, say, a decade ago, quality and topicality of both posts and discussions are remarkably stable: <https://news.ycombinator.com/front?day=2013-08-14>.
If you do have further concerns, raise them with dang via email: <hn@ycombinator.com> He does respond, he's quite patient, might take a day or two for a more complex issue, but it will happen.
And yes, it's slow, inefficient, and lossy. But, again as the site's history shows, it mostly just works, and changing that would be a glaring case of Chesterton's Fence: <https://hn.algolia.com/?q=chesterton%27s+fence>.
> You don't see this with curl/wget because they use user agent sniffing. If they don't think you're a browser they _will_ give you a Location header. To see it, capture a request in Firefox developer tools, right click on the request, copy as CURL.
But that's selective education. You don't do it for every shadow moderated comment. The trend is still that shadow moderation more often disadvantages trustful users. Will you acknowledge that harm?
Over 50% of Reddit users have a removed comment in their recent history that they likely were not told about. When shadow moderation is in play, abuse runs rampant among both mods and users. Both find more and more reasons to distrust each other.
A location header is nearly unnoticeable, a meta refresh page gives you a flash of a blank interstitial screen.
(Not that I had the same annoyance, just explaining the difference to the end user of the two approaches)
X has started reversing the throttling on some of the sites, including NYTimes
Discussions on HN: (61-comments - 2023-08-16) : >>37141478
Twitter post archive: https://archive.is/PW3eG
But at least I can hold them responsible for violating their own stated values. The former Twitter leadership just hid content that didn't fit theirs or third parties sensitivities and told me they are doing me a favor.
Restricting speech is always in the interests of those that have the power to shape discussions, so limiting speech is always counter productive.
[0] https://deer-run.com/users/hal/sysadmin/greet_pause.html
The internet has run on secrets for 40 years. That doesn't make it right. Now that everyone and their mother is online, it's time to consider the harms that secrets create.
Another commenter argued "Increasing cost of attacks is an effective defense strategy."
I argued it is not, and you said adding a delay can cut out bad stuff. Delays are certainly relevant to the main post, but that's not what I was referring to. And I certainly don't argue against using secrets for personal security! Securitizing public discourse, however, is another matter.
Can you elaborate on GreetPause? Was it to prevent a DDOS? I don't understand why bad requests couldn't just be rejected.
[1] >>37130143
Those two are enormously different, though. I'd consider myself an advocate, just as anyone who believes in a fair and free democracy should. But I am very far from being an absolutist — and I have a secret suspicion that nobody actually is. Musk certainly isn't.
I glossed past this on first read because "some url shortener has shitty behavior" wasn't interesting to me. Hearing about twitter's throttling someplace else made me come back here, because I was surprised not to have heard about it here first.
*: stalwart radical who doesn't use twitter
https://www.revsys.com/tidbits/greet_pause-a-new-anti-spam-f...
I get several thousand SPAM attempts per day: I estimate that this one technique kills a large fraction of them. And look how old the feature is...
I don't consider GreetPause to be a form of shadow moderation because the sender knows the commands were rejected. The issue with shadow moderation on platforms is that the system shows you one thing while showing others something else.
Legally speaking, I have no problem with shadow moderation. I only argue it's morally wrong and bad for discourse. It discourages trust and encourages the growth of echo chambers and black-and-white thinking.
Next is misinformation, and tomorrow you wonder why you cannot state your opinion anymore. A cycle that has been repeated ad nauseam. It just isn't a smart solution and causes more problems than it solves.
No such spam folder is provided to the public on social media.
Maybe the biggest challenge is defining what constitutes "spam." While some cases seem clear-cut (e.g., repeated identical messages from bots, malware, phishing), others are quite subjective. Subtle marketing? Aggressive marketing? Repetitive but sincere advocacy for a cause? Repetitive but insincere trolling? Repetitive but sincere trolling?
All this seems rather obvious, so I was kind of surprised to see how many people bought into Elon's vision for Twitter, it was never workable.
That said, I agree the government probably shouldn't be involved here for the most part (slippery slope, government is a blunt tool, etc.). As long as your "speech" isn't actually harming someone (harassment, revenge porn, incitement, etc.)
As long as we're defending scoundrels it's worth remembering we already lack so many protections for non-scoundrels. In a lot of states you can be fired if your boss hears a whiff of collective bargaining. But I digress.
This is not true. Restricting hate speech is an obvious counterexample.
It's quite possible the reason the list isn't public is because it would give away information about what thought is allowed and what thought isn't.
Only if the recipient sent a false response.
If the response were misrepresented then I would object to the technique. But it doesn't sound like that's what happens.
This is the murkiest part to me since it's not just a binary flag.
That there are limited worker protections in countries is a different problem, but is certainly not inhibited by too much speech, quite the contrary it would worsen the situation further. Civil liberties never suffered because too much speech was allowed, so the perspective to err on the side of freedom is only logical.
> there is simply no debate to be had about the basic humanity of certain classes of people
That is just an invalid generalization.
It is a bad idea and damaging and there is ample empirical evidence for that.
Not threads.net, cURL User-Agent: 224.3 ms
Not threads.net, Firefox User-Agent: 227.4 ms
threads.net, cURL User-Agent: 223.9 ms
threads.net, Firefox User-Agent: 2743 ms
Is Twitter trying to hide this fact? (They don't add the delay without a browser User-Agent.)
(Full log: https://gist.github.com/sevenc-nanashi/c77d18df6a5f326b0d292...)
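A minimal way to reproduce measurements like these (a sketch, not the gist's actual script; the t.co link below is the one tested elsewhere in this thread, and `timed_fetch` is a hypothetical helper):

```shell
# Sketch: compare t.co redirect latency across User-Agent strings.
# timed_fetch prints curl's total transfer time for a URL with a given UA.
timed_fetch() {
  url=$1
  ua=$2
  curl -gsSo /dev/null -w '%{time_total}\n' -A "$ua" "$url"
}

# Live runs need network, so they are opt-in via RUN_LIVE=1.
if [ "${RUN_LIVE:-0}" = 1 ]; then
  timed_fetch 'https://t.co/DzIiCFp7Ti' 'curl/8.1.2'
  timed_fetch 'https://t.co/DzIiCFp7Ti' 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15'
fi
```

If the throttling is in effect, the browser UA run should print a time several seconds larger than the curl UA run.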
How do you think spammers and abusers will exploit those options?
Again: HN works in general, and the historical record strongly confirms this, especially as compared with alternative platforms, Reddit included, which seems to be suffering its own failure modes presently.
The "certain way" is the experience of moderating HN. Publishing the list would help spammers know how to better circumvent it.
No, because it’s not an HTTP redirect. It’s an HTML page that redirects you using a meta tag, something that the browser doesn’t cache.
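For anyone who wants to see what such a meta-refresh page points at, a one-line sed filter is enough. A sketch: `extract_refresh_url` is a hypothetical helper, and the HTML literal is the t.co interstitial quoted elsewhere in this thread.

```shell
# Sketch: pull the redirect target out of a t.co meta-refresh page.
extract_refresh_url() {
  # Capture the URL inside the META refresh tag's content attribute.
  sed -n 's/.*content="0;URL=\([^"]*\)".*/\1/p'
}

html='<head><noscript><META http-equiv="refresh" content="0;URL=https://www.threads.net/@chaco_mmm_room"></noscript></head>'
echo "$html" | extract_refresh_url
# prints https://www.threads.net/@chaco_mmm_room
```

Piping a real response through it (`curl -gsS -A '<browser UA>' https://t.co/... | extract_refresh_url`) would show the same target a browser ends up at, without waiting on the client-side refresh.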
The former Twitter leadership was very clear about what sort of content would be hidden. And it was based entirely on the type of content, ahead of time. Critiquing this sort of content policy is like saying that newspapers should not be allowed to have clear standards for what is publishable in classified ads.
All claims of "I'm being oppressed" by Twitter policies have been absolutely ridiculous, and discrediting to supposed free speech advocate/absolutist positions.
Similarly discrediting is the silence on Musk's attacks on the free web and attempts at censorship of specific dispreferred news outlets.
We all see what gets fought against and what is not fought against, and the answer is clear: the right to attack and intimidate groups with threatening behavior is defended, but actual censorship of reasonable discourse is tolerated.
It was an article about Eileen O’Shaughnessy - George Orwell's wife (I suppose this could raise interest, possibly also yours).
I have seen in that text unneeded references to Orwell's most private matters - as if spying in Mr. Blair's rooms.
And this should tell us how hints ("Well, it was published there"), while valuable to have at least some tentative initial ranking, are unfortunately not useful for reliable discrimination.
It probably goes without saying that this would be an extremely unpleasant place, but there would be nowhere else to go once the last platform won.
What we have today is a number of smaller social networks, each with a different strategy to shape the conversation. It may very well be true that the creators of a platform choose editorial methods and goals that resonate with them personally, but what’s important to the dynamic of the platforms and free speech is that until we are all on that one terrible platform, that methods used to moderate your speech are nothing more than a company’s efforts to differentiate their product from others.
Restricting speech is in the interest of product differentiation. This, of course, is in the interest of the owner of the product, but it is always also in the interest of the consumer who wants a rich speech market to choose from, and who loathes the idea of a global 4chan style megasite to the exclusion of all other social media. This is why failure to limit speech in the context of a coherent speech product is always counterproductive.
A forum should not do things that elbow out trustful people.
That means, don't lie to authors about their actioned content. Forums should show authors the same view that moderators get. If a post has been removed, de-amplified, or otherwise altered in the view for other users, then the forum should indicate that to the post's author.
> How do you think spammers and abusers will exploit those options?
Spammers already get around and exploit all of Reddit's secretive measures. Mods regularly post to r/ModSupport about how users have circumvented bans. Now they're asking forums to require ID [1].
Once shadow moderation exists on a forum, spammers can then create their own popular groups that remove truthful content.
Forums that implement shadow moderation are not belling cats. They sharpen cats' claws.
The fact that some spammers overcome some countermeasures in no way demonstrates that:
- All spammers overcome all countermeasures.
- That spam wouldn't be far worse without those countermeasures.[1]
- That removing such blocks and practices would improve overall site quality.
I've long experience online (going on 40 years), I've designed content moderation systems, served in ops roles on multi-million-member social networks, and done analysis of several extant networks (Google+, Ello, and Hacker News, amongst them), as well as observed what happens, and does and doesn't work, across many others.
Your quest may be well-intentioned, but it's exceedingly poorly conceived.
________________________________
Notes:
1. This is the eternal conflict of preventive measures and demonstrating efficacy. Proving that adverse circumstances would have occurred in the absence of prophylactic action is of necessity proving a counterfactual. Absent some testing regime (and even then) there's little evidence to provide. The fire that didn't happen, the deaths that didn't occur, the thefts that weren't realised, etc. HN could publish information on total submissions and automated rejections. There's the inherent problem as well of classifying submitters. Even long-lived accounts get banned (search: <https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...>). Content moderation isn't a comic-book superhero saga where orientation of the good guys and bad guys is obvious. (Great comment on this: >>26619006.)
Real life is complicated. People are shades of grey, not black or white. They change over time: "Die a hero or live long enough to become a villain." Credentials get co-opted. And for most accounts, courtesy of long-tail distributions, data are exceedingly thin: about half of all HN front-page stories come from accounts with only one submission in the Front Page archive, based on my own analysis of same. They may have a broader submission history, yes, but the same distribution applies there, where many, and almost always most, submissions come from people with painfully thin history on which to judge them. And that's assuming that the tools for doing said judging are developed.
You asked me for an alternative and I gave one.
You yourself have expressed concern over HN silently re-weighting topics [1].
You don't see transparent moderation as a solution to that?
> The fact that some spammers overcome some countermeasures in no way demonstrates that...
Once a spammer knows the system he can create infinite amounts of content. When a forum keeps mod actions secret, that benefits a handful of people.
We already established that secrecy elbows out trustful people, right? Or, do you dispute that? I've answered many of your questions. Please answer this one of mine.
> That removing such blocks and practices would improve overall site quality.
To clarify my own shade of grey, I do not support shadow moderation. I support transparent-to-the-author content moderation. I also support the legal right for forums to implement shadow moderation.
[1] >>36435312
What gets a website censored, in the modern corporation-dominated Internet, is going against the interests and preferences of Big Tech owners - and nothing else. Nobody with any power is bound to look out for the public interest, however defined; ICANN is perhaps the only exception that comes to mind.
We can waste our time and attention debating over which targets were more or less deserving of censorship, based on our personal ideas of public interest. But as long as Big Tech is allowed to exist in its current form, we're like powerless peasants arguing about the decisions of kings.
For sites in this category (i.e. not banned, but downweighted) we don't distinguish between political sites, major media sites, sensational bloggy sites and so on. They're all in the same bucket.
If the whole purpose of it is to have browsers send a Referer header, I don't think it's that bad. Even from a privacy perspective, you can configure browsers to not send that header anyway.
Yes, and this is irrelevant to your previous comment: caching the HTML doesn’t cache the redirect itself.
Doctors smoke it
Nurses smoke it
Judges smoke it
Even lawyer too
If the past is any indication, Nitter will be back again eventually, but every time Nitter breaks I drift further and further from caring about Twitter/X at all.
https://en.wikipedia.org/wiki/Shadow_banning
> Shadow banning, also called stealth banning, hellbanning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.
This part matches shadow banning voting and is basically the same what I wrote in my previous comment just using different words:
> partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user
And this part, which contradicts what you wrote in your last comment:
> More recently, the term has come to apply to alternative measures, particularly visibility measures like delisting and downranking.
current 'unique porn domains' = 53,644
current adware, malware, tracking, etc. = 210,425 unique domains

What I mean to say is: I do see the logic of downgrading links to SN, because it is not usually an original source.