So let's look at what happened in reality. Almost immediately, subreddits popped up that were at the very least attempting to skirt the law, and often directly breaching it: popular topics on Reddit included creative interpretations of the age of consent, for example, or indeed of the requirement for consent at all. And because anyone can create one of these communities, the site turns into whack-a-mole.
The second thing that happened was that communities popped up pretty much for the sole purpose of harassing other communities. By enabling this sort of marketplace of moderation, you are providing a mechanism for a group of people to organize attacks on your own platform. So now you have to step back in, and we're back to censorship.
I also think that this article completely mischaracterizes what the free speech side of the debate wants.
It's the threat of law enforcement that leads people who run websites to remove illegal content.
Generally (to, say, please advertisers) there is an expectation that sites are going to be proactive about removing offensive (or illegal) material. Simply responding on a "whack-a-mole" basis is not good enough. I ran a site that had something like 1-in-10,000 offensive (not illegal... but images of dead nazis, people with terrible tumors on their genitals, etc.) images, and that was not clean enough for Adsense. From the viewpoint of quality control, particularly the Deming viewpoint of statistical quality control, it is an absolute bear of a problem to find offensive images at that level; consider how many papers describe an A.I. program that gets 70% accuracy as state of the art.
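A rough back-of-the-envelope sketch shows why that base rate is so brutal. The numbers here are hypothetical: I'm treating "70% accuracy" as a classifier with 70% sensitivity and 70% specificity, applied to a corpus where 1 in 10,000 images is actually offensive.

```python
# Hypothetical illustration of the base-rate problem: a "70% accurate"
# classifier (assumed 70% sensitivity, 70% specificity) run over a corpus
# where only 1 in 10,000 images is actually offensive.

total_images = 1_000_000
prevalence = 1 / 10_000          # 1-in-10,000 offensive rate
sensitivity = 0.70               # fraction of offensive images correctly flagged
false_positive_rate = 0.30       # fraction of clean images wrongly flagged

offensive = total_images * prevalence
clean = total_images - offensive

true_positives = offensive * sensitivity        # 70 images
false_positives = clean * false_positive_rate   # ~300,000 images
precision = true_positives / (true_positives + false_positives)

print(f"Images flagged for review: {true_positives + false_positives:,.0f}")
print(f"Fraction of flags that are real: {precision:.3%}")      # ~0.02%
print(f"Offensive images missed: {offensive - true_positives:.0f}")  # 30
```

With those assumed numbers, you'd be hand-reviewing roughly 300,000 flagged images to catch 70 real ones, and still letting 30 through. That's the kind of gap between "state of the art accuracy" and "clean enough for Adsense."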