Shine on you crazy diamonds!
The point of debate here is how to divide moderation from censorship.
I'd argue that size and power matter most. How you moderate is a technicality. It makes the difference between good and bad moderation, but it doesn't make the difference between moderation and censorship. This article's tips might make your moderation better. They will not make censorship into moderation.
HN's moderation is moderation because HN isn't a medium monopoly like Meta, Twitter, or Alphabet. If HN's moderation, intentionally or incidentally, suppresses negative opinions about TensorFlow... that's still not censorship. It might be biased moderation, but the web is big and local biases are OK.
It's OK to have a newspaper, web forum, or whatnot that supports the Christian Democrats and ridicules socialists. It's not OK if all the newspapers must do this. That's Twitter's problem. "Moderation" applies to the medium as a whole.
Anarchy does not mean "no rules"; it means "no rulers."
I agree that moderation is necessary. That does not mean that "moderation" on youtube is not censorship. Both can be true. Maybe we can't have free speech, medium monopolies and a pleasant user experience. One has to give.
Moderation is about how things are said; censorship is about what is said. The two are not mutually exclusive.
HN works because it is a tech forum and can ban religion/politics as it sees fit. We get lots of signal and filter out what we'd otherwise consider noise.
The issue is that this doesn't work in generalist situations. Where my signal is your noise, or vice versa, people tend to do one of two things: filter out the other side's noise, or amplify their own signal.
And thus it goes back to the problem of giants. The sides in these noise battles will use every tool available (legal, political, or even illegal) to attempt to win. This is why splitting up the giants into smaller control zones with varied views tends to help with moderation.
Which is why I’ve taken the view that the actual solution is in the antitrust space, and not the moderation regulation space.
The problem isn’t that Twitter, Facebook, etc. moderate in a way that’s biased; it’s that no entity should be so powerful that its biased moderation becomes a problem for society as a whole.
The hypothetical is too reductive to be helpful in making that decision. There are other data points and social framing that would be needed to answer your question. As it stands, it's like having one equation with ten unknowns and asking for the solution: it depends.
There have been a handful (fewer than 10) extreme cases over the years where we temporarily blocked someone from posting, but that almost never happens—it's an emergency measure.
For anyone wondering why we allow certain banned accounts to keep posting, even though what they post is so dreadful: if we didn't, they would just create a new account, and that new account would start off unbanned, which would be a step backwards.