EDIT: Thank you for your response, dang. Hacker News is a special place, which is why we have responded so strongly to today's events - I apologize if the tone above came off as less than civil. I (and, it seems, many others) look forward to hearing more about the 'dupe' article others have linked to below. It was only upon seeing that article marked as a dupe, right after seeing the previous one flagged out of existence, that this began to feel like more than a user-initiated action, so I am sure further information on the mod-initiated actions will put these fears to rest.
Here's one tip for you guys, from years-long, world-weary experience: if you're coming up with sensational explanations in breathless excitement, they're almost certainly untrue.
Edit: ok, here's what happened. Users flagged https://news.ycombinator.com/item?id=27394925. When you see [flagged] on a submission, you should assume users flagged it because with rare exceptions, that's always why.
A moderator saw that, but didn't look very closely and thought "yeah that's probably garden-variety controversy/drama" and left the flags on. No moderator saw any of the other posts until I woke up, turned on HN, and—surprise!—saw the latest $outrage.
Software marked https://news.ycombinator.com/item?id=27395028 a dupe for the rather esoteric reasons explained here: https://news.ycombinator.com/item?id=27397622. After that, the current post got upvoted to the front page, where it remains.
In other words, nothing was co-ordinated and the dots weren't connected. This was just the usual stochastic churn that generates HN. Most days it generates the HN you're used to and some days (quite a few days actually) it generates the next outlier, but that's how stochastics work, yes? If you're a boat on a choppy sea, sometimes some waves slosh into the boat. If you're a wiggly graph, sometimes the graph goes above a line.
If I put myself in suspicious shoes, I can come up with objections to the above, but I can also answer them pretty simply: this entire thing was a combo of two data points, one borderline human error [1] and one software false positive. We don't know how to make software that doesn't do false positives and we don't know how to make humans that don't do errors. And we don't know how to make those things not happen at the same time sometimes. This is what imperfect systems do, so it's not clear to me what needs changing. If you think something needs changing, I'm happy to hear it, but please make it obvious how you're not asking for a perfect system, because I'm afraid that's not an option.
[1] I will stick up for my teammate and say that this point is arguable; I might well have made the same call and it's far from obvious that it was the wrong call at the time. But we don't need that for this particular answer, so I'll let that bit go.
The reason for distrust is valid. We live in an age of rapidly increasing censorship and of the CCP's growing reach into American discourse. Skepticism is becoming the default for very real reasons.
You don't have to believe me, of course, but if you decide not to, consider these two simple observations.
First, lying would be stupid, because the good faith of the community is literally the only thing that makes this site valuable. So, sheer self-interest plus not-being-an-idiot should be enough to tip your priors. I may be an idiot about most things, but I hope I'm not incompetent at the most important part of my job. The value of a place like HN can easily disappear in one false step. Therefore the only policy which has ever made any sense is (1) tell the truth; (2) try never to do anything that isn't defensible to the community; and (3) acknowledge when we fuck up and fix it.
Second, if you're going to draw dramatic conclusions about sinister operations, it's good for mental health to have at least one really solid piece of information you can check them against. Otherwise you end up in the wilderness of mirrors. What you see on internet forums—or rather, what you think you see on internet forums, which then somehow becomes what you see because that's how the brain does it—is simply not solid information. Remember what von Neumann said about fitting an elephant? (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...) He asked for a mere five degrees of freedom. Nebulous internet spaces give you hundreds at least. That's way beyond enough to justify anything—even dipping in a ladle and getting one ladle's worth is enough to justify anything.
(Edit: people have been asking what Angela Lansbury has to do with this. If you don't mind spoilers, Angela will explain it for you here: https://www.youtube.com/watch?v=p3ZnaRMhD_A.)
My question is: does HN actively try to prevent government actors from influencing the site? I think it's been proven that China, among other countries, employs people to try to influence social media sites. Not necessarily by influencing staff, but by creating user accounts that do things like downvote unfavorable comments or flag stories they don't like.
This seems like it would be a prime target for that behavior.
I guess there is also a "flagging brigade" detector. [If not, I upgrade this comment to a feature request.]
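For what it's worth, here is a toy sketch of one way such a detector could work: group flag events by item and look for pairs of accounts that repeatedly flag the same items within a short time window. Everything below (names, data shapes, thresholds) is made up for illustration and says nothing about how HN's software actually works.

    from collections import defaultdict
    from datetime import timedelta
    from itertools import combinations

    # Hypothetical input: an iterable of (user_id, item_id, timestamp)
    # flag events. The field names and thresholds are invented.

    def suspicious_pairs(flag_events, window=timedelta(minutes=30),
                         min_shared=3):
        """Return user pairs that flagged at least `min_shared` of the
        same items within `window` of each other, as {pair: count}."""
        # Bucket flag events by the item that was flagged.
        by_item = defaultdict(list)
        for user, item, ts in flag_events:
            by_item[item].append((user, ts))

        # Count how often each pair of users co-flagged an item
        # within the time window.
        co_flags = defaultdict(int)
        for flags in by_item.values():
            for (u1, t1), (u2, t2) in combinations(flags, 2):
                if u1 != u2 and abs(t1 - t2) <= window:
                    co_flags[tuple(sorted((u1, u2)))] += 1

        return {pair: n for pair, n in co_flags.items() if n >= min_shared}

A real system would also have to control for base rates (heavy flaggers overlap by chance) and for genuinely flag-worthy posts that many unrelated users hit at once, so pair counts like these could only be a starting signal for human review, not a verdict.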
In most cases it is the politics or the perceived unfairness of the coverage that leads users to flag a story, as with, say, lab-leak stories; but this story being flagged so easily was interesting, because it is about a tech platform intentionally or mistakenly censoring things we would count as free speech.