EDIT: Thank you for your response, dang. Hacker News is a special place, which is why we have responded so strongly to today's events - I apologize if the tone above came off as less than civil. I (and it seems, many others) look forward to hearing more about the 'dupe' article others have linked to below. It was only upon seeing that article marked as a dupe, after the previous one was flagged out of existence, that this began to feel like more than a user-initiated action, so I am sure further information on the mod-initiated actions will put these fears to rest.
Here's one tip for you guys, from years-long, world-weary experience: if you're coming up with sensational explanations in breathless excitement, they're almost certainly untrue.
Edit: ok, here's what happened. Users flagged https://news.ycombinator.com/item?id=27394925. When you see [flagged] on a submission, you should assume users flagged it, because with rare exceptions that's why it's there.
A moderator saw that, but didn't look very closely and thought "yeah that's probably garden-variety controversy/drama" and left the flags on. No moderator saw any of the other posts until I woke up, turned on HN, and—surprise!—saw the latest $outrage.
Software marked https://news.ycombinator.com/item?id=27395028 a dupe for the rather esoteric reasons explained here: https://news.ycombinator.com/item?id=27397622. After that, the current post got upvoted to the front page, where it remains.
In other words, nothing was co-ordinated and the dots weren't connected. This was just the usual stochastic churn that generates HN. Most days it generates the HN you're used to and some days (quite a few days actually) it generates the next outlier, but that's how stochastics work, yes? If you're a boat on a choppy sea, sometimes some waves slosh into the boat. If you're a wiggly graph, sometimes the graph goes above a line.
If I put myself in suspicious shoes, I can come up with objections to the above, but I can also answer them pretty simply: this entire thing was a combo of two data points, one borderline human error [1] and one software false positive. We don't know how to make software that doesn't do false positives and we don't know how to make humans that don't do errors. And we don't know how to make those things not happen at the same time sometimes. This is what imperfect systems do, so it's not clear to me what needs changing. If you think something needs changing, I'm happy to hear it, but please make it obvious how you're not asking for a perfect system, because I'm afraid that's not an option.
[1] I will stick up for my teammate and say that this point is arguable; I might well have made the same call and it's far from obvious that it was the wrong call at the time. But we don't need that for this particular answer, so I'll let that bit go.
Edit: oh - I think that one was actually marked a [dupe] by software. I'd need to double check this, but if so, it's because it interpreted the link to the other thread as a signal of dupiness.
Edit 2: yes, that's what happened. When a submission is heavily flagged and there is a single comment pointing to a different HN thread, the software interprets that as a strong signal of dupiness and puts dupe on the submission. It actually works super well most of the time. In this case it backfired because the comment was arguing the opposite.
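To make that concrete, here is a minimal sketch of that kind of heuristic. HN itself is written in Arc and the real thresholds and internals aren't public, so the Python below, the flag threshold, and the data shapes are all assumptions for illustration, not the actual implementation.

    import re

    # Assumed threshold: what counts as "heavily flagged" is not public.
    FLAG_THRESHOLD = 5

    # Matches links to other HN items, e.g. https://news.ycombinator.com/item?id=27394925
    HN_ITEM_LINK = re.compile(r"news\.ycombinator\.com/item\?id=(\d+)")

    def looks_like_dupe(submission):
        """Dupe signal as described above: heavily flagged, and exactly one
        comment, which links to a different HN thread."""
        if submission["flags"] < FLAG_THRESHOLD:
            return False
        comments = submission["comments"]
        if len(comments) != 1:
            return False
        match = HN_ITEM_LINK.search(comments[0]["text"])
        # The heuristic only sees that the lone comment points at another
        # thread; it cannot tell "dupe of X" apart from "this is NOT a dupe
        # of X", which is how it backfired in this case.
        return match is not None and int(match.group(1)) != submission["id"]

    # Example: a heavily flagged post whose only comment links to another thread.
    post = {
        "id": 27395028,
        "flags": 12,
        "comments": [{"text": "Earlier discussion: https://news.ycombinator.com/item?id=27394925"}],
    }
    print(looks_like_dupe(post))  # True -> software would put [dupe] on it

The design makes sense most of the time, since a lone comment under a heavily flagged post that links to another thread is usually someone pointing out the earlier submission; it just has no way to read the intent of that comment.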
You're right that most such software tricks, especially anti-abuse measures, need to be secret in order to stay working.
How easy would it be for bad actors brigading with freshly created accounts, or not-so-freshly created accounts with a history of brigading, to abuse this feature to censor stories?