My current stance is to be a moderation centrist rather than an extremist. There are sharks in the waters of both unmoderated and overmoderated content. I'll optimize for what best supports the overall health of the platform, and in my eyes things like hate speech and science or Holocaust denial aren't healthy for a platform. I'll almost certainly make mistakes along the way, as every platform does. While I believe my financial model is new and innovative, I don't have anything similarly novel for moderation: I'll be using much the same tools as everyone else, so I'd value any input anyone has here.