Mostly only illegal content should be removed.
And that's how you end up with a Nazi bar. Sure, a swastika tattoo isn't illegal; nor is saying "death to Jews". But most people don't want to drink in that environment, so if you don't kick (censor) the Nazis out, your business will not get non-Nazi patronage (and may go under). The "free market" has determined that censorship is desired.
Did you read the article? Removing illegal content (like instructions for making a bomb) is censorship, because sender and receiver are both happy with it. We need to bring in arguments like "the good of society" to justify removing illegal content.
But no one wants to receive harassment, threats, and the like. The sender may be happy to send them while the receiver is not happy to receive them. We don't need to resort to the good of society to explain why these should be banned.
I don’t think Twitter is wrong, and it is not really different from Apple not letting pornography into the App Store, but it is still deeply troubling to me at some level. And it is neither moderation as discussed in the article nor censorship as discussed in the article.
> preferably by offering moderation too cheap to meter
This is not happening. Moderation, at least good moderation, is inherently labor-intensive. It requires judgment, especially once people start specifically probing for your corner cases, which happens basically as soon as you start trying to scale. So good moderation will always be expensive (for someone, if not directly for the bulk of users; thanks, HN mods).
Power to the people.
I don't think it actually would make them happy, or happier.
This group has a problem with the content existing at all. Self-moderation tools have been suggested and implemented in other contexts, and to a limited degree on Twitter today (mute and block), and that is not seen as "good enough".
The group that opposes free speech does not just want to be in a self-guided safe space; no, they want to ensure no one can say things they have deemed hate speech or misinformation. Many in this group also want to go further and punish people outside the platform on which they spoke incorrectly.
So to suggest that self-moderation tools are the solution completely misunderstands the goals of the "avoid-harassment side", which is to control and narrow the Overton window.
Idk where you're from, but here in Russia it's not only socially unacceptable but also a literal crime, falling under incitement of violence and rehabilitation of Nazi ideology.
In the digital space, filters work just fine. Your messages will live in the same database as the Nazis', but you can be completely isolated from seeing them if you set up your filters accordingly.
And that's the precise point at which they cease to be a "middleman" and become a publisher.
Deleting a comment because it insults a person is moderation. Deleting a comment because you don't like it, because it doesn't conform to your views, or because you find it outrageous, silly, inflammatory, false, or fake is censorship.
The blockade was so effective that I thought it was a hoax until recently.
Second: is this about freedom of speech? If it is, say so, because neither moderation nor censorship exclusively defines that. Muddying the debate by giving some weird definition of two concepts isn't going to help.
These groups really like to spread lies about people and events. They love to manipulate third parties or institutions into attacking you too.
Your comment brings this out because some subset of "outrageous, silly, inflammatory, false, fake" is right on the cusp, and making those calls (to moderate or not to moderate?) puts tremendous pressure on one's own feelings and beliefs. What helps one do it neutrally is self-awareness, but that is the scarcest thing there is. It takes a decade of hard work to distill a bit more. (Edit: and I'm not claiming to have much; just that it's needed.)
I'm uncomfortable with the "false" / "fake" end of your spectrum because we don't have a truth meter. Who am I to decide what's true or false or "mis" or "dis" for anybody else? I'm not taking on that karma.
"Inflammatory" is easier to work with because it's about predictable community effects and one can moderate for community cohesion. Moderating that way ends up excluding truths that the community can't withstand, but such truths probably exist for any community. Groups may even be defined by what they exclude. We can try to stretch those limits but there's only so much elasticity available.
I blanched when I read "Moderation is the normal business activity of ensuring that your customers like using your product" in the OP, but actually that's basically saying moderation is about community cohesion and I can't disagree. But the secret agenda, on HN at least, is to stretch it.
So you want a moderator to moderate, but then you also want tools to see what has been moderated away and unlock it? Right? So: moderation, yes, but also unmoderation by the users.
Power to the people!
...Is it? I mean, compared to Twitter, sure, but for a private forum I think of it as relatively large.
The article's distinction between moderation and censorship feels like the difference between a freedom fighter and a terrorist - i.e. if you are sympathetic to their cause you use the more positive euphemism, but there really isn't an objective difference.
At most, the distinction the article seems to be making is that moderation should be optional and censorship forced - you should be able to choose to see the dead comments if you want (never mind that that is hardly the norm for "moderation" on the internet).
All I can think is that under that distinction, McCarthyism in the US would probably be considered "moderation", not "censorship", despite being one of the most egregious examples of censorship in the USA in the modern era. So I have trouble accepting that definition.
HN feels mid-size in a good way. Most forums are a lot smaller, and then there are the few famous ones that are much much larger. There aren't that many in HN's order of magnitude. The mid range is a nice place to hang because although the problems are impossible, they're not utterly impossible. You can work with them around the margins.
So we get lots of local definitions of censorship and moderation depending on the flavour of views the writer wants to present. They all tend to be reasonable in context but mean everyone is talking past one another.
Essentially everyone is trying to argue over the ground of what moderation should be so it doesn’t get lumped into the “evil” censorship. But because this is largely just opinion everyone tries to make theirs look more official and factual.
Don't trivialize it as some personal preference around moods. It's much more than that.
Stuff like death threats, doxxing, child porn, harassment are not just "moods you don't like".
I highly doubt it.
I’m pretty sure typical harassment comes in the form of many similar messages by many different users joining a bandwagon. Moderation wouldn’t really be fast enough to stop that; indeed, Twitter’s current moderation scheme isn’t fast enough to stop it. But the current scheme is capable of penalizing people after the fact, particularly the organizer(s) of the bandwagon, and that creates some level of deterrence. An opt-out moderation scheme would be less effective as a deterrent, since the type of political influencers that tend to be involved in these things could likely easily convince their followers to opt out.
That may be a cost worth paying for the sake of free speech. But don’t expect it to make the anti-harassment side happy.
That said, it’s not like that side can only tolerate (what this post terms as) censorship. On the contrary, they seem to like Mastodon and its federated model. I do suspect that approach would not work as well at higher scale - not in a technical sense, but in terms of the ability to set and enforce norms across servers. But that’s total speculation, and I haven’t even used Mastodon myself…
Even though it was easy to get around the firewall, and perhaps even legal at the time, barely anyone did. People weren't using proxies to learn about Tank Man when China was arguably only "moderating". People called it censorship.
I can't find the original article, maybe Wired?
[edit] Might be this one - https://www.theatlantic.com/magazine/archive/2008/03/the-con...
This is just to point out the naivety of arguing that "it's possible, just not the default" is not censorship when talking about China. Viewing shadow-banned posts is an okay middle ground on HN; the obvious hole is that you have to have a login. I don't want gore on TikTok, but I do want guns. It's very complex.
Your speech is effectively censored by the moderators because you cannot use that tool as intended, to reach audiences with the same ease other types of speech can.
It's like requiring a newspaper to "moderate" all articles by a certain author by publishing them in the form of a short notice: "If you are interested in the writings of Mr. Smith, who may or may not have published an article for today's issue, then please send a self-addressed envelope to PO Box ....".
Platforms will always be judged by journalists based on their lowest common denominator of users. Journalists will purposefully turn off all filters, find the worst comments, and claim platform X is all like that.
This behaviour is then happily used by politicians to push platform owners in certain directions, possibly with the threat of regulation.
It also might mean coordinated leaving of advertisers, killing the platform.
So while in theory this article is right, there are external power games that really play into the free speech issue.
Shine on you crazy diamonds!
Well, in very small doses numbers help. It is easier for a small group to watch the blind spots of each member. As the numbers scale up to serious group sizes things seem to fall apart again as a hive-mind forms.
Which means that the sensible thing to do is to form a committee of intelligent people with good incentives, then go trustingly with what they suggest. Which is, coincidentally, a successful model that governments use. All the politics is generally a distraction from the real work being done by committees.
I agree, but that's not self-awareness—that's seeing other people's blind spots, which is much easier, in fact it happens automatically.
You're right that it falls apart at scale. Somehow mass blind spots take over. Can that be mitigated? That is the question.
Group dynamics seem to change qualitatively at each order of magnitude. Maybe the problem of "social media", i.e. internet group dynamics, is that we're dealing with orders of magnitude we've never seen before. That doesn't get worked out in just 10 or 20 years.
Freedom of speech is a right that concerns you, a citizen, and public authorities. Censorship, in turn, is when that right is interfered with by [the public authority] blocking your speech.
Moderation is when a [private entity] is blocking your speech. There is no public right that is interfered with in that case. You have the right to say what you want without public authorities interfering, but you don't have the right to say what you want in my house.
(Note: this definition is different from the one used in the article)
Whether or not Twitter is infrastructure, and therefore whether its moderation equals censorship, would be a different debate.
e.g. https://news.ycombinator.com/item?id=32416424
So as a result they are practically the same thing.
Moderation is far more than that. Moderation depends on context - for example, deleting a comment like "$political-party is better than their opponents" from r/programming is "against freedom of speech", but is an example of good moderation, because political discourse is off topic on that forum.
Moderation is about setting the tone and scope of discussion. For many kinds of forums, this includes deleting comments that the moderators find outrageous, silly, inflammatory, or off-topic. Removing things that don't conform to their own views, however, is a faux pas: moderators are supposed to be neutral in on-topic discussions, as their name suggests. False/fake is a more complex discussion, as there is of course no universal source of truth.
Now, for a completely open forum, such as Twitter or Facebook, moderation doesn't really make sense, since no discussion or tone is off-topic a priori (except of course for removing illegal speech).
Your link is actually great for his point: people in power (OnlyFans and Meta) blocked something that both sides (their competitors, and their competitors' users) wanted to hear - otherwise they wouldn't have needed to block it. (For the sake of argument, I'm assuming that the lawsuit's claims are true - I have no idea if they are but it's not pertinent to this point.)
But freedom of speech is not neatly defined by (the negation of) the definition of censorship, or moderation, either the one from the dictionary or the one from the blog post. It's a term that (in the USA) is defined by law and jurisprudence and is open to some debate, and in other places it is simply missing and used loosely in debates about reform.
If the author wants to use his/her definitions to state a position in one of those debates, fine, but say so.
(Me personally, I think it should not be allowed for individuals to own the wealth of small nations, much less command it. But that's a different topic)
How many people like getting criticism of any form? Should we ban all criticism?
In reality, freedom of speech is a principle that is more or less well-defined, and that is more or less codified into law in certain countries (the First Amendment to the US Constitution being the most famous example, but many countries have similar, though more limited, rights).
Viewed as a principle, it not only applies to the relationship between the individual and the government, it can be applied to all human groups. We can say that WeChat is worse for freedom of speech than Facebook, even though both are private enterprises and are not within the scope of any freedom of speech laws in most jurisdictions.
The reason freedom of speech is viewed as a virtue, at least in European-inspired thought, is not exclusively related to the relationship between the citizen and the state. It is about ensuring good ideas are heard even if that means bad ones are heard as well, ensuring that unpopular bad ideas of powerful people (within some group) can be challenged by the majority of the group, and ensuring that minorities who are harmed by some decision get a chance to let everyone else in the group know about the harm.
These apply just as much to you vs the state as they apply to you vs your local church, your village, your tribe, your gaming clan, your company etc. For various reasons, each of these groups may decide that these reasons are not as important as others, while still wishing for some amount of freedom of speech (for example, a church will often not tolerate obvious blasphemy, but may still tolerate criticism of the church leaders, or vigorous discussion of the implications of scripture).
So, I am well within my rights to complain that my state or my company or my church or HN doesn't encourage freedom of speech enough, even though none of these institutions is bound by the freedom of speech clause of the US constitution. Also, I can even claim that the US constitution itself, or the SC interpretation of it, doesn't respect freedom of speech enough if I were to disagree with any decision on the matter - the principle of freedom of speech is separate from the US law.
I can't help but think this comes down to: it's censorship if I disagree, moderation if I agree.
If someone posts something like "Elon Musk is an idiot", would deleting it be censorship or moderation? Musk is a person after all, but I suspect that many people would say deleting such a thing is censorship.
To give an example more realistic for Hacker News: if there were a story about some company reselling modified GPL software for profit without following the GPL license, I would probably call them "bad" people. This is clearly an insult. I still think it would be a reasonable comment to make (hopefully a bit more fleshed out than just calling them evil, of course).
There needs to be some way of dealing with this that respects the rights of the person who is being talked about, and that has to involve some censorship.
I have no idea if this community would be improved with less moderation because I'm blind to most of its results.
Our concepts of free speech, censorship and moderation are simply outdated on modern social media - when you have systems designed to encourage and spread low-effort, novel, emotional and manipulative content (e.g. twitter), no amount of "tweaks" to such systems can fix the problem.
Instead of trying to fix systems originally designed for marketing, why not actually design systems meant for disseminating and checking information from the ground up? I bet that would look way different compared to twitter or facebook.
It doesn't have to involve moderation or censorship - it could just mean giving a disproportionately more powerful voice to experts willing to explain disinformation, for example (rather than having their voice drown in the retweet popularity contest).
Nobody argues that owners of physical premises should choose between accepting full responsibility for every action on their premises or being utterly powerless to eject any person for any reason.
There are many situations where a post being visible to anyone is harmful to someone else. We can rationally weigh the value of freedom of speech versus expected harm to that individual and come up on both ends, but we can't ignore this simple fact when discussing this issue.
The only example mentioned in the post is child pornography, but there are others: revenge porn, doxxing, and smear campaigns, to name a few. If my ex wants to send an explicit video of me, and someone else wants to view it, I am still harmed by the fact that the platform makes this possible, even if my moderation filters don't show it to me. Similarly, if someone is spreading my address or saying I eat children, the fact that I can choose not to see this doesn't protect me from the consequences of others reading these messages.
Again, not saying that I believe it obvious that such messages must be removed. My only point is that the "solution" that Scott proposes for avoiding harassment is a partial solution at best, but more realistically, entirely useless. It basically only helps for very low level harassment, such as not wanting to see someone cuss.
Free speech, moderation, editing, censorship, propaganda, and such do not have clear definitions. The terms have a history. Social media is new, and most of the nuance needs to be invented/debated. There aren't a priori definitions.
This article is defining censorship as X and moderation as Y... Actually, it provides 2 unrelated definitions.
Definition 1 seems to be that moderation is "normal business activity" and censorship is "abnormal, people-in-power activity" on behalf of "3rd parties," mostly governments.
Definition 2, the article's "moderation MVP" implies that opt-out filters represent "moderation" while outright content removal is, presumably, censorship.
IMO this is completely ridiculous, especially the China example. China's censorship already works like this article's "moderation MVP". Internet users can, with some additional effort, view "banned content" by using a VPN. In practice, most people use the default, firewalled internet most of the time.
YouTube's censorship is, similarly, built of the same stuff. Content can be age-gated, demonetized, or buried. Sure, there is some space between banned and penalized... but no one is going to see such content, and posting it is bad for your YouTuber career. This discourages most of it.
IMO, the difference between censorship and moderation is power, and power alone. A small web forum can do whatever it likes and it's moderation. If a government, medium monopoly, cartel, cabal, or whatnot does it... it is censorship. If a book is banned from a book stall, that's moderation. If it is banned from Amazon... that's censorship.
Whether Amazon has a settings toggle where you can unhide banned books does not change anything that matters. A book that Amazon won't sell is a book that probably won't be printed in the first place. That's how censorship actually works. It's not just about filtering bad content; it's about disincentivizing its existence entirely. Toggles work just fine for that.
Evil is more nuanced and resourceful than that.
Posts on Slashdot have a score between -1 and +5. They can be modded up or down by randomly selected average users, who get 5, 10, or 15 'karma' points to apply. Karma is applied along with a negative or positive reason (flamebait, troll, insightful, informative).

You can tune the slider to show posts rated in any range between -1 and +5, and apply special modifiers to certain reasons (e.g. make 'troll' mods drag things down more, or either surface all the funny posts or push them down) to customize this further. All posts are visible if you browse at -1.

Logged-in users' posts default to a score of 1, or 2 if they have a history of positive contributions and elect to check the box that adds 1 to their score. Anonymous users (who are now required to be logged in as well, but who select 'post anonymously') always start at 0. It's possible for your account to fall hard enough that your posts start at -1 as well.
I believe they said they'd only ever actually removed a handful of posts due to some lawsuit or another (might've been that 09 F9 11 02 key? I don't remember any more).
There's also meta-moderation, where anyone can vote on whether the mods that were applied are fair or not.
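To make that concrete, here's a minimal sketch of that kind of opt-out threshold filtering (my own toy reconstruction in Python, not Slashdot's actual code; the field names and the shape of the reason modifiers are invented):

    def effective_score(post, reason_modifiers):
        # reason_modifiers, e.g. {"troll": -1, "funny": +1}, is a per-user
        # adjustment applied on top of the community-assigned score.
        adjusted = post["score"] + reason_modifiers.get(post["reason"], 0)
        return max(-1, min(5, adjusted))  # clamp to Slashdot's [-1, +5] range

    def visible_posts(posts, threshold, reason_modifiers):
        # Every post stays in the database; the threshold only filters the view.
        return [p for p in posts if effective_score(p, reason_modifiers) >= threshold]

    posts = [
        {"body": "insightful take", "score": 4,  "reason": "insightful"},
        {"body": "flamebait",       "score": 0,  "reason": "flamebait"},
        {"body": "obvious troll",   "score": -1, "reason": "troll"},
    ]
    print(visible_posts(posts, threshold=1, reason_modifiers={"troll": -1}))
    print(visible_posts(posts, threshold=-1, reason_modifiers={}))  # browse at -1: see all

The point of the design is that nothing is deleted; browsing at -1 always shows everything, and each reader's threshold and modifiers only shape their own view.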
I personally opt in to see all the flagged/dead comments. I would say 1-5% of them are mysteries as to why they are dead (meaning they have been automatically killed, not killed due to other users flagging).
But then the ones that deserve to be killed/banned are so, so egregious.
It’s just unfortunate that there will always be innocent people that get roped in with automatic moderation.
Russians looking top-down on Caucasians (from Caucasus, not in the American sense of the word) or Central Asians is a thing, but there are nuances, e.g. the relationship with Chechens.
Of course, the real problem is that advertisers would probably bail if Twitter had showdead, but maybe Twitter can solve that problem with the new $8 thing.
Content A is filtered-by-default, demonetized, or otherwise discouraged and distribution-suppressed. Content B is not. That's effectively how all propaganda works. Even a totalitarian state can't really prevent access to content. They make it inconvenient to access, and unwise to produce. That's enough.
In fact, modern propagandists intentionally leave a "steam valve." China isn't that worried about shutting down VPNs or whatnot. Firewalled-by-default is enough and overdoing it can be counterproductive.
I think the solution is awful. It doesn't work at all for the most important cases. If someone is being abused, libeled, and harassed by a belligerent ex... hiding revenge porn behind an "adult content" filter isn't good enough.
If a platform is filtering political content, the fact that it can, technically, be accessed by enabling "harmful content" does not make it less censorious.
Some of these are people who have been shadowbanned. You can check by looking at all their comments in their profile: if they're shadowbanned, (almost) all of them will be dead.
>There needs to be some way of dealing with this that respects the rights of the person who is being talked about, and that has to involve some censorship.
IANAL, but isn't exchanging CSAM, non-consensual adult porn, and snuff videos in fact a criminal act in most jurisdictions?
If that's true, legal action can be taken against those involved.
As for libel (let's call it defamation to make it more inclusive), there are legal avenues (civil litigation and, in very rare cases, criminal charges) which can be pursued there too.
Are those avenues insufficient in your view? If so, what would you suggest, other than the current legal regime, in such situations?
Quite seriously: I am not saying Russia has Aryan supremacists. But it has strong enough and big enough groups that are so similar in everything except who gets to be on top that "white supremacy" is a perfectly good descriptor on an English-speaking forum. And "neonazi" fits their group behavior perfectly, down to similar symbolism and preferred music.
I'm not a US citizen, and it applies where I live as well, so there's that.
Within your framework, I agree with your views and conclusion, though. My post was intentionally written from a legalistic point of view, but I agree that this can be generalized.
But you can still stand outside your home and say whatever you want. You can print up and distribute flyers. You can set up your own web site too.
While I despise the business models and actions of pretty much all the "social" media actors, they are not required to provide you with a platform.
You can still say whatever it is you want to say, but those private actors have no responsibility to act as a megaphone for you.
Hilariously enough, this is what people said about the invention of both writing and the printing press.
"A computer can understand the content as well as any person."
As this will be true in ~3 years. In that light, things are both better and more scary.
With classical media agencies you have a bigger chance at guessing their political views and intentions. Whether you like them or not is up to you. Most of these media agencies also take responsibility for the release of a piece of news.
This responsibility is missing with "individuals" who post on social media, they vouch for nothing and can only lose their account.
The right to post to an unlimited number of people should be earned, in my opinion. Right now the algorithms of the social media platforms even encourage controversial or aggressive posts, which is the worst that can happen.
I agree with this, but only because I have so little faith in people.
>> Give me the tools that the moderators have
Whatever tools a site like twitter or youtube gives you, (A) most people will never use them and (B) they still control how the tools work. These two are enough to achieve any censorship goal you might have, and enough to make censorship inevitable.
I don't think we get power to the people while Alphabet/Elon/Whatnot own the platform. It's a shame that FOSS failed on these fronts. But, the internet has produced powerful proofs of concept. The WWW itself, for the first 20 years. Wikipedia. Linux/gnu. Those really did give power to the people, and you can see how much better they were at dealing with censorship, disinformation and other 2020s infopolitics.
That is the wrong view on a global communication platform. It's like saying "a certain tone sets the mood for the entire telephone system".
These things should be seen more as silos, subcultures or whatever.
Unless you expose yourself to the firehose of globally popular content.
YouTube is in fact a great example of this being a real problem - they infamously chose to reduce the visibility of non-mainstream news channels, drastically cutting their viewership while not removing a single video from their platform. They also often demonetize videos for even mentioning certain words or subjects, greatly disincentivizing anyone from discussing them (e.g. rape can't be discussed on YouTube if you want to make any money from your video).
Twitter (let’s face it we’re talking about Twitter) is not the world. It’s certainly a popular place for people to yell at each other and increase the general level of aggravation in the world. But it isn’t the world. If someone is moderated off twitter, their ability to speak is impacted, but only to one audience and in one way. Their ability to speak to me is unaffected entirely because I think Twitter is a giant waste of everyone’s time and energy. They can speak elsewhere, other platforms can serve their needs, and if they are popular enough then they’ll take the users and the attention from Twitter. Regulation here would only entrench the platform.
Twitter is not the public square; it’s some private company’s arguing arena.
The point of debate here is how to divide moderation from censorship.
I'd argue that size and power matter most. How you moderate is a technicality. It makes the difference between good and bad moderation, but it doesn't make the difference between moderation and censorship. This article's tips might make your moderation better. They will not make censorship into moderation.
HN's moderation is moderation because HN isn't a medium monopoly like meta, twitter or alphabet. If HN's moderation, intentionally or incidentally, suppresses negative opinions about tensorflow... that's still not censorship. It might be biased moderation, but the web is big and local biases are OK.
It's OK to have a newspaper, web forum, or whatnot that supports the Christian democrats and ridicules socialists. It's not OK if all the newspapers must do this. That's Twitter's problem. "Moderation" applies to the medium as a whole.
Anarchy does not want "no rules" it wants "no rulers."
I agree that moderation is necessary. That does not mean that "moderation" on youtube is not censorship. Both can be true. Maybe we can't have free speech, medium monopolies and a pleasant user experience. One has to give.
When you talk about censorship, you must also talk about freedom of speech, which is usually a well-defined legal term in countries that support it. On the other hand, moderation is a form of management. Depending on why and how the action is carried out, moderation could be better or worse than censorship.
And,
> If the Chinese government couldn’t censor - only moderate
This is exactly how most censorship is implemented in China, through moderation. See the funny part there?
The point is, I don't think moderation is inherently better than censorship; it all depends on the whys and hows.
Take a protest (for whatever issue you want). Is that criticism or harassment? Or both?
As a former die-hard member of [R]age Board for the Elites, I remember the use of the N-word being so prolific that a moderator changed it to display "ice cream", with no workaround. Seeing a post suddenly pop up calling somebody a stupid ice cream was hilarious.
I was a teenager in the wild days and Chan doesn’t disturb me even, I just don’t go there. Borderland Beat is real enough.
As an adult I feel that the ban hammer is an absolute necessity.
You know it’s a weak article from this strawman. The author could have addressed the ‘digital town square’ that was directly listed as the reason for Elon intervening in Twitter but has deliberately chosen not to.
If there's no distinction to be drawn because you're asserting that moderation means that something different in China, I think that is you rather than the author that's using terms in non-standard ways.
Moderation is about how things are said.
They are not mutually exclusive.
We demand reasonable levels of due diligence from owners of private businesses where criminal activity is concerned. If you run a business that sells stolen goods, someone runs a drug ring out of your restaurant, or you serve alcohol to minors, you have a big problem.
This is so because law enforcement can only ever act after the fact and would of course be completely overburdened if every private actor were willfully ignorant of what goes on in their establishments. Not to mention that this is also to our benefit, because without that level of civic involvement as a first line of defense, the logical conclusion is a police/legal state involved in every transaction. Which is literally what you see in countries with weak civil societies but big tech firms. If neither the people nor business owners take responsibility, who is left?
So let's look at what happened in reality. Almost immediately, subreddits popped up that were at the very least attempting to skirt the law, and often directly breaching it - popular topics on Reddit included creative interpretations of the age of consent, for example, or indeed of the requirement for consent at all. Oh, and because anyone can create one of these communities, the site turns into whack-a-mole.
The second thing that happened was that communities popped up pretty much for the sole purpose of harassing other communities. By enabling this sort of marketplace of moderation, you are providing a mechanism for groups of people to organize attacks on your own platform. So now you have to step back in, and we're back to censorship.
I also think that this article completely mischaracterizes what the free speech side of the debate wants.
Meh. It's full of loaded terms. "Abnormal": if you engage in it regularly, then it's normal, not abnormal. "People in power": you mean like moderators?
Moderation is a form of censorship. Is moderation good? Well, it's a question of degree. Some moderators become gatekeeping Nazis, just as some posters become raving lunatics. So it's a question of finding a BALANCE between freedom of expression and letting the clowns run amok.
Anyway, analogies are imperfect, please look in the direction where I am gesturing, not at my exact words.
The point here (and of the entire conversation) is that you shouldn't judge a medium by its worst imaginable actors as long as you're given the right tools that allow you to use that medium undisturbed, effectively putting them into a different silo. Today twitter allows a very crude, imperfect approximation of this by following people that post decent content and setting the homepage to "latest posts" instead of "top tweets". Ideally we'd have better tools than that.
If, hypothetically, every metaphorical YouTube were to close for business because governments shut down ad funding, or if YouTube started charging money, would my prerogative of speech be in peril?
And if AT&T and all the other phone companies converge on the position that I must pay big bucks to talk to people, is that censorship? It’s not like I can easily find a free version of AT&T.
If I am so enormously underpowered that I cannot bid for speaking time on TV, is that censorship? I’m basically an incompetent David bidding against Goliaths.
...and just like in a government censorship context, ambiguity and fear do a lot of the work. Rules are not clearly defined or consistently enforced. You don't necessarily even know that you are being disciplined. It's best to just stay far away from controversial material entirely.
On youtube, it's had the curious side-effect of specialization.
It's not worth occasionally discussing political content, social issues or controversial content. A youtuber risks harming their income/success by taking a wrong step. Meanwhile, you kind of need to be specialized in order to know where the lines are.
For example, the Ukraine/war content youtube allows, bans or demonetizes currently is not the same as it was 3 or 8 months ago. The rules aren't written, and you need to be immersed and current to even guess what they are.
Same for sexual violence, Trump or any other highly contested moderation issue. You really need to be a specialist to (a) stay within moderation lines and (b) be worth the risk.
I disagree wholeheartedly. These concepts are now more important than ever in human history.
(the "cancel" message was hilarious since when invented it was unauthenticated, i.e. anyone could delete any post on any group in USENET! This had to be fixed:
https://www.templetons.com/usenet-format/cancel.html )
Also found https://www.gdargaud.net/Hack/NoSpam.html which is a great little time capsule site..
No because it's not related to your specific content.
Even if you were to make the argument that X content doesn't make money and costs too much, if someone pulls the trigger without giving you recourse to resolve the issue, then it is a violation of free speech.
In your example, if AT&T tells you that you must pay more money to discuss certain topics, then it's a violation.
If it costs money, it costs money, nothing wrong with that. The issue is intent.
Social media is new. The "right" to broadcast was almost theoretical before the internet. It wasn't what free speech was about.
IMO, we don't have free speech at all on FB/YouTube/etc. currently. They can close your account and take away your right. They don't need a court, and it's all up to them. You have individual speech on those sites.
Contextual filters/scanners would score each piece of content, giving it a "score" for whatever categorizations are being filtered (NSFW, non-inclusive language, slurs, disinfo, etc.).
Then both the creator and the consumer should be able to see the score in a transparent manner, with the consumer being able to set a threshold that filters out any post scoring higher than what they choose. A free speech absolutist could set the cutoff to the maximum (hide nothing), the default could hide anything scoring above 50, and go from there (a rough sketch follows below).
Mods exist and can ban/lock/block people and content, but users can see everything that was banned, removed, or locked, as well as the reason why: what policy did the user violate?
I think the only exception would be actually illegal content; that should be removed entirely, but maybe with a note from the mods in its place stating "illegal content".
That way users can actually scrutinise what the mods do, and one doesn't wonder whether the mods removed a post because they are biased or for legit reasons. Opinions are not entirely removed, as they are still readable, but you can't respond to them.
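Here's a rough sketch of the scoring-plus-cutoff idea above (entirely hypothetical: the category names, the 0-100 scale, and the default cutoffs are mine, not any real platform's API):

    # Each post gets per-category scores from some classifier; each consumer
    # sets their own cutoffs, and anything scoring above a cutoff is hidden
    # from that user. Filtering happens at view time; nothing is deleted.

    DEFAULT_CUTOFFS = {"nsfw": 50, "slurs": 50, "disinfo": 50}

    def visible(post_scores, user_cutoffs=None):
        # Merge the user's overrides onto the platform defaults.
        cutoffs = {**DEFAULT_CUTOFFS, **(user_cutoffs or {})}
        return all(post_scores.get(cat, 0) <= cutoff
                   for cat, cutoff in cutoffs.items())

    post = {"nsfw": 80, "slurs": 10, "disinfo": 0}
    print(visible(post))                 # False: nsfw=80 exceeds the default 50
    print(visible(post, {"nsfw": 100}))  # True: this user opted in to NSFW

    absolutist = {cat: 100 for cat in DEFAULT_CUTOFFS}
    print(visible(post, absolutist))     # True: cutoffs at max hide nothing

The key property is that the consumer's cutoffs, not the platform's, decide what gets hidden, so legal-but-objectionable content is filtered per reader rather than removed for everyone.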
Twitter is popular with journalists, politicians and such. Hence all the attention. For most people, facebook and youtube are the important part.
IMO, youtube is the most important medium today. It's effectively the free-to-air TV of the internet. It has a terrible, clunky, disrespectful and illiberal approach to content moderation. In fact, it's pretty similar to state censorship methods... ambiguous rules, selective enforcement, whipping boys. Makes Twitter look good.
While I am sure that is true in some circumstances, I believe that is less common than my original statement
>People who don't like moderation on Twitter can go off to gab, gettr, telegram, Truth, 4chan or a tonne of other venues.
All of those sites have various moderation rules; the key difference here is that the people who control the moderation are likely of a different political leaning to you, so you view those sites as "unmoderated" because you do not like the content that is allowed there.
Gab, as an example, started out as a "free speech" platform but now has pretty intensive moderation rules, especially around adult content. This cost them a lot of goodwill from free speech absolutists and libertarians.
>>The person shouting slurs at AOC on twitter isn't satisfied with calling AOC names if they don't think she will see it.
AOC is somewhat of a different case, without addressing your red herring of slurs. AOC is an elected official, and as such the bar should be set higher for elected officials, in that they have an ethical obligation to hear from the people they represent.
The reason why moderation exists on sites like Twitter/Facebook comes down to:
1. Laws (e.g. child porn, abuse, harassment, illegal speech like Tiananmen Square in China)
2. Advertisers (the real customers)
3. Public opinion
In that order, with a pretty big gap between #2 and #3. Don't comply with laws? Out of business tomorrow. Don't do what advertisers want? Out of business this year. Don't do what the general public wants? Maybe out of business in a few years, maybe not, it depends.
The methods proposed in this post are great for dealing with issues around public opinion, but do very little to appease governments or advertisers.
Moderators were initially tasked with keeping threads on topic, enforcing predefined community standards, and parsing irrelevant detractors.
Dark patterns are now mostly just used to manipulate people and to attenuate conduct into line with groupthink biases. This policy drives up engagement, traffic, and profits.
Most truly smart people I've met were rather prickly characters, more concerned with data than with being popular. ;)
I'll take your word for it that none of your opinions or ideas were controversial enough to upset your publisher, but do you feel the editorial process would have been completely uninterested in removing opinions or other ideas before publication if your draft contained frequent asides praising the Third Reich or suggesting that the practice of software development would be greatly improved by only allowing men to participate in it?
It’s not censorship unless the government itself is doing the censorship and making people face criminal consequences for disobeying.
If a private entity is doing it, even at the request of a government, it’s not censorship or a violation of free speech unless they were going to face legal consequences for ignoring the government’s request.
That is a rather naive slogan to be repeating in 2022.
Once your servers become the de facto public square, we absolutely get to complain. Not even talking about how your server is running on top of a huge amount of infrastructure that was created by our society, enabled by principles and laws that have been discovered and refined across generations. Your server does not exist in a vacuum.
Democracy requires a healthy public square to survive and thrive, and that is more important than some overly simplistic notion of private property.
Moderation is a special case/form of censorship. In many cases it's a desired or willful filter, as the article suggests, but it is still censorship of information.
Censorship doesn't have to be forced; it can be agreed to, but it's still censorship. Rebranding things to look fuzzy and give a positive perception doesn't change the underlying principle.
Manipulation of information, be it omission, selective picking, burying in piles of noise, etc., comprises tactics most of which serve the spirit/intent of censorship. It happens in restricted environments like China but also in less restrictive environments like the US; the method of approach simply changes around what's legal and possible. One could argue censorship approaches in free-speech environments are the most resilient, because they rely less on the difficult, tight controls on information flow that nation states like China leverage.
You are free to say what you want without going to jail, physical punishment, or fines (unless your speech is part of committing some other criminal behavior—such as fraud—or civil tort).
But nobody is obligated to provide you the means of distributing your speech.
That’s not to say that there aren’t asymmetric means of disseminating ideas or messages over third-party distribution channels. But you’ve got to be savvy enough to do that, or separately powerful enough to buy your own distribution.
Even owning distribution is pointless if you can’t communicate your ideas in a way that attracts listeners. “Right to Speak” doesn’t equal “Right to be heard”.
Yes, they're zealots. It's similar to how religious zealots refused to simply "change the channel" when they found something objectionable.
There's a solution for this, based on prediction markets. Essentially, experts make "bets" on various things and are rewarded for correct predictions. The more correct predictions they make, the more "points" they have to get their viewpoints broadcast. And conversely, quacks and charlatans who cannot model the world scientifically make few accurate predictions and get drowned out.
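A toy version of that scheme (my own construction, not an actual prediction-market design) might track forecasting accuracy with a Brier-style score and derive broadcast weight from it:

    class Expert:
        def __init__(self, name):
            self.name = name
            self.predictions = []  # list of (probability_assigned, outcome) pairs

        def predict(self, probability, outcome):
            self.predictions.append((probability, outcome))

        def brier_score(self):
            # Mean squared error between stated probability and what happened
            # (0.0 is perfect; 0.25 is chance level for 50/50 questions).
            return sum((p - o) ** 2 for p, o in self.predictions) / len(self.predictions)

        def broadcast_weight(self):
            # Better (lower) Brier score -> louder voice; quacks trend toward 0.
            return max(0.0, 1.0 - 2 * self.brier_score())

    careful = Expert("careful-forecaster")
    careful.predict(0.9, 1)
    careful.predict(0.2, 0)

    quack = Expert("confident-quack")
    quack.predict(0.9, 0)
    quack.predict(0.9, 0)

    print(careful.broadcast_weight(), quack.broadcast_weight())  # ~0.95 vs 0.0

A real market would also need stakes, question adjudication, and resistance to gaming, but the core incentive is the same: amplification is earned by verified predictive accuracy rather than popularity.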
Instead, I first thought of HN’s “showdead”.
The discussion here is about censorship. Infringing the first amendment is censorship, but the converse isn't true. Plenty of things are censorship without infringing on the first amendment.
As an example, Bezos hypothetically preventing his newspaper from publishing negative stories about him is censorship. He is censoring his editors in this hypothetical.
As another example, if Bezos says "everyone who calls me a stupid-head will get kicked off AWS", that would be an attack on free speech.
Technique. Silence them using any tool available.
Justification. Whatever lie I can spin so it doesn’t seem like censorship.
Eventual goal. Censorship so complete that realistic justification is not needed.
The USA has privatized its public commons, with the exception of the library and city hall.
Twitter, Facebook, etc. are the 21st-century US public commons. It's where the people are. It's where the local politicians are.
The downside: it's owned by corporate privateers who extract wealth from dissent.
Have a wonderful day, and here is an up-vote... lol... =)
(This is for anything with a political slant to it, I still find it useful for niche subjects, say mycology)
My apologies.
> So you want a moderator to moderate.
I don't care whether they continue to moderate centrally but it would suit those who do.
> but then you also want to have tools
Yes.
> to see what has been moderated away and unlock those?
Yes.
If an app you download has settings but they are either:
a) only available to the developers or company
b) the defaults always override your settings
would you be happy? Why, you might ask, do you not get access to the settings, to set them as you wish?
They tried getting rid of that in Voat, and it was such a cesspool that nobody sane used it, and the owner couldn't keep it up and shut it down. /r/TheDonald at one point tried to migrate after whining about Reddit's moderation and came crawling back because they couldn't stomach it.
Yeah, Reddit's moderation system is far from ideal, but we've seen experimentally that it's definitely better than not having it.
I just don't think it's a reasonable position, no one has an ethical obligation to make themselves endure racist and misogynist abuse. And you might call it a red herring, but there's overwhelming evidence that that's what AOC is exposed to under the system you advocate.
Free speech is a right to speak, not a right to insist other people listen.
Yes it's popular and yes there's a lot of people on it and using it, but that doesn't make it a public commons, its ownership does.
Precisely. It's like the author never understood the original definitions, but thinks their interpretation of the world creates them anew. It's a dictionary, not the Bible.
Moderation as "we modulate other people's behaviors for you and your feelings" is justifying the act of censorship in other terms. These rationalists aren't half as smart as they think they are, or they wouldn't need so many words and novel interpretations.
Scores across a range of measures would be best, in my view.
Almost all other arguments over moderation v. censorship are a derivative of the most fundamental freedom: freedom to choose, control over one’s own life. That natural human right simply does not exist in America (and most western places) anymore; effectively all the “civil rights amendments” have done the opposite of their stated objectives: they have institutionalized federal government enslavement, total domination of federal government control over your life. You no longer get to choose who you associate with; your slave master federal government decides who you are allowed to associate with. You are as free as you are permitted; an inherent contradiction.
All this other debate of moderation and censorship is meaningless noise, merely beating around the bush to discuss rearranging the deck chairs on the titanic.
And for all our foreign friends, whether they are in America acting as if they are American or outside of America, all of these matters related to the US Constitution are very relevant to you too, whether you understand it or not, because all the freedom you have and think you have is a direct derivative of the founders of the USA creating the Constitution and declaring themselves free of the slavery of monarchy. Most people have just taken things so for granted, or it is all so abstract, that they do not understand any of it, because not even a person of European background can be American without understanding these things properly, let alone someone without a European ethnic, historical, and cultural background.
That may offend people hearing it, but it does not make any of it less true. In fact it is the “moderation” that is inherent censorship, which even prevents the system from self-correcting, i.e., moderating, because it is really just perversion, i.e., distortion, being called moderation.
There would be nothing to discuss about this topic if the world-dominating tyrannical evil of forced association did not exist. So I propose we address that instead, unless the point is just making noises as the Titanic sinks and we head back towards what is effectively neo-monarchy.
You didn't explain why Americans don't have the right of free association
This censorship is what forced conservatives to build new platforms. In so doing, they discovered far greater censorship. Google, Apple, and Amazon all responding within a day to deplatform Parler? The only entity large enough to make those three jump within a day is the US government.
The "hell", "nightmare", or "disaster" everyone is complaining about is that the US government has been censoring speech. The reason there's a large delay in unbanning obvious accounts like Babylon Bee is that Twitter/Elon can't do it.
If I run a sci-fi bookstore, and I choose not to stock your book about political philosophy, is that censorship?
If I write an article that reviews your book (wherein, necessarily, I pick and choose what parts of your book I talk about, and also paraphrase [is that the same as “manipulating information”?]), is that censorship?
When there is simply too much information for any person to consume, and even too much to be able to _evaluate whether to consume_, what does _not having censorship_ look like?
The curtailments on freedom of association are very narrow and focused on specific constraints.
But this country was founded on an understanding that some fundamental freedoms of choice have to give way if you want a functioning society working together. And some of the curtailments the founders believed acceptable, we fought a bloody war to remove. The freedom to choose is inalienable, but (much as we still have both a right to liberty and jails at the same time) inalienable rights can be curtailed in the name of having a functioning society.
This is actually exactly how big media/big politics operates.
Fresh ideas are always welcome, but the people who are trying to maintain working forums have been at the process for a long time now and can draw on experience all the way back to the BBS days.
I'm more sympathetic to censorship than the author--I've seen what e.g. vaccine misinformation can do to radicalize the average person.
One valuable tool the author doesn't spend much time on is defaults. We can have a heavily moderated, uncensored platform which still prevents mis/disinformation for the vast majority of users. E.g. HN hides all sorts of nonsense by default, but still allows you to reveal it with showdead. The same folks who are prone to disinformation are often too unsophisticated to dig into their settings, and having only a single "showdead" filter means you not only see your favorite Lizard People posts, you also get inundated with all the other nonsense.
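A sketch of that difference (hypothetical categories, not HN's actual implementation): a single global toggle is all-or-nothing, while granular per-category flags, all defaulting to off, let a user reveal one kind of nonsense without being inundated by the rest.

    HIDDEN_CATEGORIES = ["spam", "flamewar", "conspiracy"]

    def visible_global(post, showdead=False):
        # Single toggle: opting in to any hidden content reveals all of it.
        return showdead or post["category"] not in HIDDEN_CATEGORIES

    def visible_granular(post, overrides=None):
        # Per-category opt-in; everything in a hidden category stays
        # hidden unless this specific user flipped that specific flag.
        overrides = overrides or {}
        if post["category"] in HIDDEN_CATEGORIES:
            return overrides.get(post["category"], False)
        return True

    post = {"category": "conspiracy"}
    print(visible_global(post))                          # False: hidden by default
    print(visible_global(post, showdead=True))           # True, but so is everything else
    print(visible_granular(post, {"conspiracy": True}))  # True: only this category revealed

Either way, the defaults do most of the work: the unsophisticated majority never changes them, so the platform stays clean for them without deleting anything.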
Yes. You're being censored in this case by the will of markets, or the perception of the will of markets (consumers at large), less so by the store owner, due to systemic constraints they must operate within. Markets indirectly represent the will of mass consumers. There's a reason we have minority protections in government and chose a republic over pure democracy: to prevent oppression of the voices of the few by the masses.
>If I run a sci-fi bookstore, and I choose not to stock your book about political philosophy, is that censorship?
If you intentionally chose to omit the book, and it wasn't a chance omission, perhaps because you hate the author, then yes, it's censorship.
>If it write an article that reviews your book (wherein, necessarily, I pick and choose what parts of your book I talk about, and also paraphrase [is that the same as “manipulating information”]), is that censorship?
It depends on how you choose that information and present it. Is it a representative sample of the book, or are you intentionally cherry-picking pieces of information, especially out of context, to support a preconceived opinion you want to portray rather than an actual summary? If so, then yes, it's censorship. If not, then no, it's not censorship.
I agree there are logistical constraints that make reduction a requirement. The key difference in all of these cases is intent. It's difficult to prove, but the question isn't whether you had to reduce information for logistical purposes, but how and why you chose what to reduce. Did you reduce information to your advantage? Then chances are, it's censorship.
This kind of comment always runs afoul of reality. Moderated discussion is the rule and not the exception because it works. It results in more desirable content that attracts more users. Parler and Gab and Truth[1] couldn't beat Twitter; 4chan couldn't beat reddit, 8kun couldn't even beat 4chan. Going back farther, USENET was fundamentally unmoderatable by design, and it drowned itself in a torrent of spam.
The less moderation, the less utility. Everywhere.
I’ve also seen a ton of cases where people expressed disagreement or contrarian positions but did so in a respectful and fact-aware manner and had positive interactions because they were respectful of the community.
Twitter? It's far from a public square, even in the US (outside of the US it barely exists).
Also, if an online public square is a prerequisite for democracy, it should be a public utility, not something owned by a company whose incentives are diametrically opposed to the interests of the users.
It's a great concept, though it's worth pointing out that there's considerable overlap of moderators between subreddits (a.k.a. powermods).
In effect, you end up with a single system applied across hundreds of subreddits where it may or may not be appropriate, and if you happen to earn the ire of a powermod you find yourself banned from all the subreddits they moderate.
On second thought.. I suppose that's Tiktok.
https://www.reddit.com/r/OutOfTheLoop/comments/c5urdn/what_i...
Apparently /r/TheDonald was very used to being in a safe space. Voat didn't cater to that, and TheDonald couldn't take that so eventually they returned to Reddit.
This was before their separate website.
I'm sorry but you do not know what you are talking about.
In addition, what about crap floods? If I submit half a billion posts, do you really want that handled by moderation?
Being a server operator, I've seen how bad the internet actually is; this may be something the user base doesn't have to experience directly. When 99.9% of incoming attempts are dropped or ban-listed, you start to learn how big the problem can be.
To simplify it for you, you inherently cannot have the right of free association if there is no means to freely associate, because the ability to do so has been taken from you by force of perverse law.
I find it quite curious to live in a world where people do not understand that they are slaves, probably due to the fact that they have been conditioned to understand slavery as only being possible when there are chains and “black” skin involved, i.e., conditioning. I have never met a single other person who really understands that slavery is a mostly mental conditioning, chains and related iconography are merely just that, icons or a symbolic representation that abstracts away what slavery really is.
Most slaves, even on the deepest South American jungle plantation, never wore chains. Chains are unnecessary once you have properly trained your slaves to their condition. That applies to the western slaves who make up the majority of western nations' populations, likely including you too, as well as the slaves all over the world producing things in farms and mines so that higher-level slaves (you?) can, e.g., feel virtuous and privileged by being obedient and rewarded and, e.g., by driving electric vehicles.
Slavery never ended, folks, it just pivoted the business model and if you are reading this here, you are just a more privileged slave, maybe even a slave master.
70k users migrating to Mastodon in 1 week is interesting. However, I do respect your opinions.
Have a wonderful day, and here is an up-vote... =)
Absolutely.
And as I understand it, many of those social media companies do a piss poor job in policing the kinds of criminal activity mentioned by GP.
That might be an area where targeted regulation could be useful.
But the larger discourse around moderation tends to be focused on political actors (both legitimate and otherwise -- I'm not going to get into a political discussion here, as it's tangential to my point and not likely to spark worthwhile interactions) and the slights they claim are disadvantaging them.
In my view, that's the wrong discussion. We should be much more focused on the very real criminal and tortious conduct that pretty much runs rampant on those platforms.
I voted with my feet a long time ago and don't give my attention to those sites, but that only helps me and doesn't address the larger issues.
As I mentioned in another (tangentially related) discussion[0]:
> The best-case scenario in my mind would be more decentralization of discussion forums. That gives us both the best and worst of both worlds: Folks can express themselves freely in forums that are accepting of those types of expression, while limiting the impact of mis/dis-information to those who actively seek it out.
Which may well be a good idea in this domain as well. Smaller, more focused and decentralized forums are more likely to have decent moderation regimes (as those involved actually have some interest in the topic(s) at hand), and those that cater to criminal activity are isolated from the majority of folks (and both more difficult to find and more vulnerable to being taken down). It's not a good solution, but it's becoming clear that moderation of huge forums like Facebook/Instagram/Twitter/etc. isn't really practical.
If you accept that premise, what options (other than decentralization) could address these issues effectively?
Both moderation and censorship have the outcome of reducing what information parties are allowed to communicate, and a system is its inputs and outputs, thus they are the same thing.
This isn’t to say that it’s bad or good. The rhetoric of moderation is perceived better than censorship, but this is like the cops calling your interrogation an interview; it doesn’t actually change what’s happening.
Social media keep using this excuse for not trying. We can moderate spam in email with a simple naive Bayes classifier; why don't we just do that with comments? It could easily classify comments that are part of a bandwagon and flag them automatically, hiding them or queuing them for human review.
We are able to moderate email, but the concepts we use to do so are never applied to comments. I don't know why; this seems like a solved problem.
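For what it's worth, the email-style approach really is simple to prototype. Here's a minimal sketch of what pointing it at comments might look like, assuming scikit-learn; the training data and the review threshold are entirely made up for illustration, and a real deployment would need labeled comments and a human-review queue:

    # Sketch only: naive Bayes classifier in the style of classic email
    # spam filtering, applied to comments. Training data is invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    comments = [
        "Buy followers now, click this link!!!",       # bandwagon/spam
        "Same low-effort talking point, copy-pasted",  # bandwagon/spam
        "The benchmark ignores cache warmup; see the appendix",
        "This patch fixed the crash for me on 5.19",
    ]
    labels = [1, 1, 0, 0]  # 1 = flag, 0 = leave alone

    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(comments), labels)

    def flag_score(comment: str) -> float:
        """Model's confidence that a comment should be flagged."""
        return clf.predict_proba(vec.transform([comment]))[0][1]

    # Flag for human review rather than auto-removing, since false
    # positives are the main objection raised elsewhere in this thread.
    if flag_score("click this link for free followers") > 0.9:
        print("hidden pending human review")

Whether the trick transfers from email to comments is exactly what the replies below dispute.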
The real threat, and conflation of moderation and censorship, is when centralized sites like Reddit or Facebook put a standardized layer that is enforced across topics and domains. When taken to the point of infrastructure removing a site's ability to exist (such as Stormfront having their domain suspended), then we've clearly veered into censorship. People can complain about moderation on a site or forum (I mean, I have), but when the moderation is not contained to the forum, and the site seeks to supplant the distributed world wide web itself, then the line is an arbitrary one.
Wikipedia has a model for user-generated content. It's much more resilient, open, unbiased and successful than social media. This isn't because they have some super-nuanced, single-use distinction between moderation and censorship. They never really needed to split that hair.
They have a model for collaboratively editing an encyclopedia, including lots of details and special cases that deal with disagreement, discontent and ideological battlegrounds.
They also have a different organisational and power structure. Wikipedia doesn't exist to sell ads, or some other purpose above the creation of an encyclopedia. Users/editors have a lot of power. Things happen in the open.
Between those two, they've done much better than Alphabet/FB/Twitter/etc. Wikipedia is the ultimate prize for censorship, narrative wars, disinformation, campaigning, activism and such. Despite this, and despite far fewer resources, it outperforms the commercial platforms. I don't think that's a coincidence.
Big claim...
Look at the stats: how can it be a prerequisite for democracy and a public square when not even 10% of your country's population is on it? (And probably 30% of those are bots, and another 30% are inactive.)
It's an online bubble of polarised people looking for attention, not a public square and not representative of anything
Positive interactions are certainly possible and do happen, but the site is heavily heavily tilted towards groupthink. Fighting it is an uphill battle.
Historically the r/RedditRequest process only considered whether the moderator was completely inactive from Reddit as a whole. There could be a dead subreddit that hadn't been touched in years, or a flourishing subreddit whose top mod was completely MIA; either way, there was nothing you could do if the top mod was still active somewhere on Reddit, even if you could prove they were just squatting.
Not unlike domain squatting.
This experience, as well as the rather low level of discussion on Reddit, made me quit using it. It's hard to find a replacement, however; I like to use Stack Exchange, as a very dry form of communication that focuses on merit.
No, it really isn't.
Differences:
1) Reddit is super ban-happy, and there is no way to view banned content. Ban reasons include slurs and political opinions, as well as no reason at all.
2) Subreddits are not filters over the same content, they have (mostly) different content.
3) There is a fractal abundance of user-moderated subreddits; yes, there is some bad culture in some of them. This is not what ACX is proposing. He is proposing 2-20 filters, run by the company, not by volunteers, with a specific purpose and clearly defined.
I really don't see how ACX's proposal can cause illegal behavior or harassment that is not already there.
You're making a false equivalence with reddit, then pointing out reddit has negative emergent properties.
If moderation must be done then let me do it for myself. Give me the tools.
A central moderating authority cannot be trusted at all.
But still, if the Hunter Biden laptop story were removed from the Linux Kernel Mailing List, from StackOverflow and from LWN.net (entire platforms), I wouldn't accuse these particular platforms of censorship.
We aren’t even having a candid, good faith public conversation about all this.
Nothing in content moderation would necessitate banning public figures with heterodox views from your platform. Hell, that’s not even a requirement for censorship.
Banning is for trolls and bots and bad-faith actors, which is definitionally never someone with >1 million followers.
That choice is about publicly punishing someone and most importantly, distancing yourself personally from that person and whatever they said so you don’t get confronted at a cocktail party.
So yea, when you open up a site to all legal content, you immediately are flooded with people at the very edge or just over every law.
Similarly, when you ban Alex Jones, you pretty soon end up banning everyone who you disagree with.
Most of what is going on right now in social media isn’t moderation or censorship. It’s just being lame and awful and lacking principles and self awareness.
HN works because it is a tech forum and can ban religion/politics as it sees fit. We get lots of signal and filter out what we'd otherwise consider noise.
The issue is this doesn't work in generalist situations. Where my signal is your noise, or vice versa, people tend to do one of two things. Filter your noise, or increase their signal.
And thus goes back to the problem of giants. The noise battles we see will use every tool available to attempt to win, legal, political, or illegal. This is where splitting up the giants into smaller control zones with varied views tends to help with moderation.
My point is that unless something violates the first amendment, I’m ok with it. If Bezos kicks people he doesn’t like off of AWS, I’m ok with it. It’s not a public space. The owner of the private space makes the rules. Just like I can kick people out of my house for similar reasons.
Don’t like it? Host it yourself. If the government tries to censor your self hosted content, then I’ll get up in arms.
It all comes down to some guy telling me how to talk. I don't like it. Anybody who likes it has rocks in his head.
I don't disagree with your point, there's quite a bit of knowledge around building communities and moderation that's been around and honed for at least a generation. And we should take that knowledge and build on and around it.
That said, folks have been going on about "Eternal September" for decades. Granted, people are born all the time, but they've grown up in the age of the Internet.
As such, it seems to me that at some point (if not now, when?) we need to get away from that particular excuse.
Anyone born before the Internet (myself included) has had a long time to figure things out, and anyone born in the Internet's wake is immersed in it from a fairly young age.
So why do we continue to use "Eternal September" as a foil?
It's entirely possible I'm missing something important, and if I am, please do enlighten me. Thanks!
Just let individuals ban whoever they want from THEIR view.
If you want to be super-fancy, you could then see if some account X is banned by many individual users from appearing on their feeds, and give individuals an option to have such accounts automatically banned from their own feeds after some threshold percentage.
So, if X is a jerk/spammer and many individual discussion group users have banned them (from their own view), give users the option to automagically have X banned from their own feed too once, say, 10% of other members have banned them.
This offloads banning a little, and as long as individual users can check who those "auto-banned" accounts are and e.g. exempt them from being auto-banned, it still maintains freedom.
In HN with showdead etc, I've never seen any "dead" comments that I couldn't just have as regular comments and just ignore on my own...
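A sketch of how that threshold scheme could work, using the commenter's hypothetical 10% cutoff (all names here are invented; this is not any existing platform's API):

    # Sketch of the opt-in "auto-ban past a threshold" idea above.
    from dataclasses import dataclass, field

    @dataclass
    class Member:
        name: str
        banned: set = field(default_factory=set)      # manual per-user bans
        exceptions: set = field(default_factory=set)  # never auto-ban these
        threshold: float = 0.10                       # opt-in cutoff

    def hidden_from(viewer, author_name, members):
        """True if the author should be hidden from this viewer's feed."""
        if author_name in viewer.exceptions:
            return False
        if author_name in viewer.banned:
            return True
        # Fraction of other members who manually banned this author.
        others = [m for m in members if m.name != viewer.name]
        if not others:
            return False
        fraction = sum(author_name in m.banned for m in others) / len(others)
        return fraction >= viewer.threshold

The key design point is that everything stays per-viewer: the author isn't removed from the platform, only from feeds whose owners opted into the threshold.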
Look... IMO, these tend to go the wrong way from the first sentence. Almost any polemic on this topic starts by assuming or implying that Freedom of Speech means X or that censorship means Y.
The reality is that Freedom of Reach means something totally different than it did 25 years ago. Freedom of the Press and Freedom of Speech didn't use to be the same thing.
We can't keep going to the past and pretending that early republican politicians, early liberal philosophers or early modern lawyers have the answer for everything rights-related. It's ridiculous to extrapolate what free speech means in the era of Twitter and YouTube from the early modern era's thoughts on pony mail and leafleting.
What Freedoms we have, or should have, now that technology enables them, is a question for people of now to decide.
Users rarely deviate from the established upvote/downvote patterns. In fact, I'd go as far as saying many users don't even read the comments before voting.
When two users are having a heated argument, it's common for a third person to respond to the 'right' person with an innocuous comment and be heavily downvoted for it.
More than likely shadowbanned accounts. Some green accounts also seem to be automatically [dead] until they get vouched for, but I'm not sure of the exact situations around that.
https://en.wikipedia.org/wiki/Brandolini%27s_law demands moderation. Community and society demand moderation. Hell, I'd even go as far as to say physics demands it. The internet breaks our ideas of social norms around moderation by taking distance and anonymity and shoving them into the same place all at once. And much like if you take groups of conflicting fundamentalist religious groups and put them together, the inevitable outbreak of violence affects everyone around.
Twitter is already a whack-a-mole, but for a range of content that's much broader than just illegal content. A change like this would reduce their moderation burden.
> The second thing that happened was communities popped up pretty much for the sole purpose of harassing other communities. By enabling this sort of marketplace of moderation, you are providing a mechanism for a group of people to organize a way to attack your own platform. So now you have to step back in, and we're back to censorship.
You can ban harassing behaviour without banning open discussions.
Finally, I don't think the ACX proposal is exactly like Reddit. Reddit still has moderation imposed by a third party; in ACX's proposal, the moderation configuration is under your own control.
What you want is someone else's audience, and I'm not exactly sure what makes you think you have the right to that?
Back in the day we might have a forum or a number of forums on a topic. Let's say it's Nascar Forums or whatever. I might not like the opinions of the moderators and that'd be that, I'd leave the site. I'd recognize that it's not reflective of the wider world.
Somehow Twitter and Reddit and various other social networks don't feel like that. I feel quite often that some subset people around me take opinions from Twitter etc. as being reflective of mainstream thought. When really it's still just a tiny microcosm of humanity.
I don't really use them any more, but I still have the sense that all sorts of social movements and bizarre (from my perspective) opinions and value frameworks are being born and spread there.
Wondering if the account is real or fake? Check the person's website, wikipedia page, look if there are more accounts with this name.
People are willfully disinformed. Do you trust information just because it's being liked many times? Well, being popular doesn't mean it's right or healthy.
In Russia many people don't think their government is doing anything wrong. Why? TV says so. Their coworkers do. But you have YouTube and Telegram. Most media whose websites are blocked have YT channels, or their articles can be read on Telegram with quick view. No VPN needed.
From my point of view, many moderation/censorship arguments boil down to the desire to be walled off from bad info vs. being able to make individual choices. The Russian government is doing the former by blocking independent media.
I think if you look at real-world examples with an actual history like reddit... you find that reality is complicated. All those problematic reddit dynamics that you describe exist. But, there were also some advantages/successes to their "moderation" approach.
Above all, these approaches aren't just good/bad or successful/failed. There's a ton of texture. The moderation approach dictates a lot about the platform's character, and that isn't captured by binaries or spectrums.
>If you intentionally chose to omit the book, and it wasn't a chance omission, perhaps because you hate the author, then yes, it's censorship.
They already told you: the reason the book is not stocked is that it is off-topic. The bookstore sells sci-fi books, and they choose not to sell other genres.
If that is censorship then this definition of censorship isn't useful for any discussion we're having right now.
Moderators will delete comments or ban people for insults against people they like/support, but then cheer on more aggressive insults against people they deem bad (it's suddenly become very trendy to lash out at Elon Musk on Reddit, for example - you'll score upvotes rather than subreddit bans for doing that)
>Yes it's popular and yes there's a lot of people on it and using it, but that doesn't make it a public commons, its ownership does.
Exactly. Folks who complain about the (lack of) moderation on some corporation's platform are, for the most part, certainly welcome to do so.
However, those corporate platforms (unlike public platforms) have no responsibility to host anything they don't want to host.
They are not your government. They are not your friends. They are not a public square. They are businesses whose goal is profit. And that goal isn't necessarily a bad goal either.
However, the business models of those corporate platforms are dependent on showing ads to those who use those platforms. That creates a variety of perverse incentives, including (but not limited to) boosting engagement by pushing outrage and fear buttons to keep folks on the platform, watching the ads.
And so I ask, does the above sound like a public square? It certainly doesn't to me. Rather, it sounds like a bunch of corporate actors taking whatever steps (regardless of impact on discourse) to maximize profit.
Again, that's not inherently a bad thing. But it doesn't (and never will) fit the bill for a "public square."
In SMTP servers I've managed for clients we typically block anywhere from 80 to 99.999% (yes 10000 blocked to one success) messages. I'd call that MegaModeration if there was such a term.
And if you think email spam is solved then I don't believe you read HN often as there is a common complaint of "Gmail is blocking anything I send, I'm a low volume non-commercial sender"
In addition email filtering is extremely slow to react to new methods, generally taking hours depending on the reporting system.
Lastly, you've not thought about the problem much. How are you going to rapidly tell the difference between a fun meme that spreads virally and an attack against an individual? Far more often you're going to be blocking something that's not a bad thing.
It's exhausting to wade through all of those.
There has been a general coarsening of the culture which has gotten worse since the 2010s, Donald Trump was certainly a part of it.
I was talking about it with my wife this morning and she thinks that people have been getting more concerned about the homeless colony in a nearby city because the people who live there have been getting angrier and nastier. Other people down our road have put up signs that say "SLOW THE FUCK DOWN!"
There are the nihilistic forms of protest such as the people who are attacking paintings in museums to protest climate change. (Why don't they blow up a gas station?)
And of course there are the people on the right and left who believe they can "create their own reality" whether it is about the 2020 election or vaccines or about gender.
While the XKCD explains that your “United States 1st amendment rights” are not violated by non-governmental censorship, it doesn't change the fact that it is censorship. Everyone is in favor of censorship (like Randall) so long as only things they disagree with are censored.
Fair enough, not sure that is the point of the XKCD comic though. I believe the point of that comic was "banning racists from twitter is not a first-amendment issue".
I believe the public discourse is affected by much more than the government. To keep that public discourse free enough is important for democracy to function. Hence I fear more than just government trying to repress certain forms of speech.
That doesn't mean I want to legally ban all such repression. I also don't believe democracy would be better off if actual neo-Nazis were unbanned on Twitter. Instead, I think it's important we keep track of repression of free speech, discuss what people consider acceptable, and reach some rough consensus. Based on this, we can then develop either alternative platforms or some well-thought-out new laws.
Outside the US this problem typically gets even worse. For one, why is your country depending on a (generally) US company for its freedom of speech? And two, outside the US freedom of speech laws are typically significantly different than the US model.
There are occasional repeat harassers. But the usual situation is "somebody posts about one of my friends to their circle and suddenly a gazillion hate messages arrive from a gazillion different people." The only option to prevent this would be "see zero dms or comments on your posts by people you haven't explicitly allowlisted," which works badly if communicating with an ever-shifting professional network is a part of your job.
I find that people who write things like this are expressing a distaste for politics taking place on Twitter. They tend to be expressing how they think things should be, and I can even sympathize, but that has no bearing on how things are.
How things actually are: the vast majority of politicians of any importance in the west (and probably not only there) have staff dedicated to maintaining their Twitter presence. Most institutions use Twitter as an important communication channel, including the various institutions that make up the EU, the USA and the UN. You can check this for yourself. If you actually talk with journalists, you will understand that Twitter is now central to everything they do, and that trickles down to everyone else.
10% is actually a huge number. The majority of people do not have a public voice nor an interest in actively participating in politics. Politics is made of "polarised people looking for attention". Always has been.
You have to be living under a rock to not be aware of all the major geopolitical incidents that take place on Twitter. It was the main communication channel for Donald Trump. We just recently witnessed the incident between Musk and Zelenskyy. A lot of interactions happened there during Brexit. I could go on, and on, and on.
Yes, because it has no value, and because spam is basically financially motivated harassment, and harassment shouldn't be allowed.
Also, how many times have markets been severely manipulated?
1. Blocking people is reactive. It means that everybody still sees the first time somebody DMs them calling them a slur. If you instead take the approach of "block everybody that the ML system thinks is alt-right" or "block every post that the ML system thinks is spam" then you are right back at the fun problems of false positives and defaults.
2. People aren't just concerned about their personal experiences on these services. Advertisers are concerned about their ads showing right next to posts calling jewish people evil. Citizens are concerned about the radicalization effect such that even if I don't see conspiracy posts about liberals eating babies, those vortexes still lead to social harm.
https://hn.algolia.com/settings
dang often references past discussions with search links, so here's a good starting point: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://hn.algolia.com/?dateRange=all&page=7&prefix=true&que...
It's not a public square of constructive discussion, it's a public square in which everyone, most of them being absolutely incompetent/uneducated in the subject, has a megaphone and screams their version of reality
What do you mean? It's not in any way illegal to discuss such topics.
At the same time, the hive mind is quite often a protective defense against insurgency in forums.
This seems to be a problem with the comments in this entire post. We're treating a community as individuals doing individual things, and in small forums this is commonly true. But when the group grows larger and money is on the line, that assumption should be discarded. In astroturfing, for example, a seemingly large group of 'users' will direct communication on your forums via somewhat 'rational' communication, but possibly disliked by a lot of members. Then you'll notice a group that seemingly counters the astroturf to a level of absurdity that turns more 'hearts and minds' towards the astroturfers (guess what: the counter-turfers were also the astroturfers).
You typically end up in one of two situations. The forum either takes on the ideas of the astroturf group, and they become encoded in its ideals, or it fends them off, but in doing so embraces some of the extremism implanted by the astroturfing group in the first place.
Also, what happens to any group when 4chan decides to raid you for the lulz?
This is like saying "no moderation is _essentially_ the same as moderation because you can just choose not to read posts." I suppose it's simplistically true if you squint hard enough and actively ignore the issues people care about, but in that case you're not left with a particularly useful statement.
Let's look at the proposal vs. how Reddit currently works. Say you have a sub called /r/soda; there's a rule that you can't "promote sodas," and they'll ban you for rule violations if you say "Coke is my favorite" but not if you say "Pepsi is my favorite" (selective enforcement of rules, even by site administrators, is common on Reddit). 45% of the users love Coke, 30% love Pepsi, but 100% of the posts about what soda people love are about Pepsi.
So with the proposal you make a post about how much you love Coke, notice that the post is deleted, then choose to ignore moderation and see all the other posts by other Coke users of the sub that have had a similar journey. You continue to discuss things with many of the people on the sub like you did before.
With the current way Reddit works, you get banned and then start your own sub. But no one knows about your sub, the vast majority of new subs die, and even the ones that are moderately successful take years of work to gain a community. No one in /r/soda might even realize that "Coke is my favorite" posts are banned if they hadn't made such posts themselves, since there's no way to see what's banned and what isn't. The users there are kept completely ignorant of the need to create another sub.
So now you spend hours trying to promote your sub in various places and creating enough content for it that people who visit will actually use it and not just see a dead sub and move on. If you're lucky, and with a lot of work, in a year you might be able to reach a small fraction of the audience that was in /r/soda, and tell that small group of people "Coke is my favorite."
And even then, Reddit admins can look at you askance and decide to shut down your sub. I've seen multiple subs say "We can't even have a friendly discussion about [particular_topic] because Reddit admins have said they'll shut us down if we do." Even things that other subs are allowed to talk about (again, the rules are applied rather arbitrarily).
I can't see how the proposal is like Reddit in any meaningful way.
This is the natural state anyway because most people are idiots about most things.
Yes, there is some knowledge for some internet savvy types who grew up with the internet, but a lot of people are casual users. Many people still feel anonymity gives them carte blanche to be a jerk, or worse.
The amount of effort to be online is zero, but the amount of effort of people to behave is sometimes also zero (or low), of course depending on context. HN is a lot more civilized, but if it stopped being moderated it would in time be a nasty place as well.
I don't think that definition works.
Mostly quit Reddit when I realized about 5% of my posts were shadow deleted for holding the wrong opinion.
Make a post about celebrities, politics, or religion (that's not tech related) and see how long before it's flagged out of existence.
So as somebody who noticed this bit of drama, and looked into it, I can explain. It's actually all very simple. Here goes:
It's a stunt!
Yup, they say that much. They tried protesting, they tried blocking roads, but were making page 10 of the newspaper. So they came out with some dramatic, outrageous plan that they knew wouldn't do harm (they planned this well in advance, and glued themselves to glass, not to the actual painting) but would be weird enough for people to talk about it. Plus there's a degree of symbolism in it.
> (Why don't they blow up a gas station?)
Because you can't protest oil infrastructure in any effective way. Blow up something? That's terrorism. Glue yourself to a gas pump? You'll get insulted and probably dragged off, plus gas stations are kind of meaningless and replaceable and often not anywhere very interesting. Protest at oil infrastructure? It's typically remotely located, and secured. You won't be noticed before you're removed. Block Shell's HQ? Good luck blocking a huge building with multiple entrances and security.
Point being there's nothing oil related I can think of where you could cause some sort of disturbance, quickly get attention, have the press get to you before you got forcefully removed from there, and have the story be interesting enough to have a prominent place in the news.
No, there really is not, because she and many others have the habit of calling all criticism "harassment", and then posting a couple of examples, many of which are not even harassment.
>no one has an ethical obligation to make themselves endure racist and misogynist abuse.
Sure, but we would first need to settle on a definition of what is "racist" and "misogynist", because if you use AOC's definition of those words, I can assure you we do not agree on what would be considered "racist" and "misogynist": AOC thinks someone saying "We need strong border security" is racist.
Indeed. But people like those things, and use them as anchors for their own political views. Nineteenth-century views of freedom of speech excluded huge areas of material as "obscenity", much of which simply isn't obscene in the west now, such as "information about contraception".
The people who think the Jan 6 attack was a good idea will add it to the list of other things leftists do that they think justify the Jan 6 attack.
For that matter, I'd say that a lot of what "Black Lives Matter" does is also nihilistic. That is, there is not a lot of expectation that things will change, because their ideology doesn't believe that things can change and won't look at the variables that could be changed to make a difference. What I do know is that some investigator will come around in 20 years and ask "why is this neighborhood a food desert?", but the odds are worse than 50% that they'll conclude that "it used to have a supermarket but it got burned down in a riot" is part of the answer. In the meantime conservatives will deny that the concept of a "food desert" is meaningful at all, and also say that Jan 6 was OK because leftists are always burning down their neighborhoods and getting away with it -- except you (almost) never get away with burning down your neighborhood, in terms of the lasting damage it does to your community, unless your community is on the gentrification fast track; see
https://en.wikipedia.org/wiki/Crown_Heights_riot
(It might be the sample I see, but I know a few right-wingers who admit that there is a lot of craziness on their side but it is justified by what the other side does whereas I never hear from leftists that it's justifiable to say that "A trans woman is indistinguishable from a natural woman" because of something stupid a conservative did.)
As to "many cases where online communities document or facilitate crimes elsewhere", why criminalise the speech if the action is already criminalised?
That leaves only "Campaigns to harass individuals and groups". Why wouldn't moderation tools as powerful as the ones employed by Twitter's own moderators deal with that?
[1] https://mtsu.edu/first-amendment/article/970/incitement-to-i...
Since I was not born with a language, yes I've been told how to talk for a sizeable portion of my life.
In fact learning things like tact and politeness, especially as it relates to the society I live in, has been monumental in my success.
Do you go to your parents' house and tell them to screw off? Do you go to work and open your mouth like a raging dumpster fire? Do you have no filter talking to your husband/wife/significant other? Simply put, your addition to the discussion is that of an impudent child. "I want everything and I want to give up nothing" is how an individual becomes an outcast, and I severely doubt this is how you actually live outside the magical place known as the internet, though I may be surprised.
It didn't used to be. It used to be pretty good, but a handful of censorious mods insisted that they needed tools to fight exactly the same sorts of things that OP is insisting that moderation is for - illegal content, real harassment - and then immediately started using those tools to purge political enemies.
It's a fun example because of how wrong Hollywood (and intuition) gets this one. You're on an elevator and an evil terrorist cuts the cables! Oh no! What happens next!? Not much, besides you being annoyed at probably being stuck somewhere in between floors. People had to be persuaded that the technology was safe and so Elisha Otis' [1] regular demonstrations of his safety stopping invention is a big part of the reason of why elevators were able to take off. It's practically impossible to make an elevator fall down a shaft.
Now those of us who grew up with them simply take everything for granted, to the point where we have absolutely no clue at all about what we're using; we've always used it, so we just assume it must be okay as is.
[1] - https://en.wikipedia.org/wiki/Elisha_Otis#Lasting_success
Let's assume that you are not a child, that you are confident in your ability to manage your snark and, most of all, highly value your conversation.
I'm going to conclude that yes, you definitely dislike being told how to talk.
It’s not technically illegal to have those conversations, but it’s in some kind of a grey area, because if you’re having conversations like those; the immediate question is of course why…it’s tough to find reasons to bring up that topic other than the obvious.
As to moderation, why not be able to filter by several factors, like "confidence level this account is a spammer"? Or perhaps "limit tweets to X number per account", or "filter by chattiness". I have some accounts I follow (not on Twitter, I haven't used it logged in in years) that post a lot, I wish I could turn down the volume, so to speak.
What do you mean? They got what they wanted, more or less. They're a group of people organized around an idea, figured they weren't getting attention, so they went to look for a way to get some. That's all there is to it.
I think you're expecting some sort of special significance here. No, it's not complicated or even special.
That is, it's not clear in the US you can ban something on the basis of it being immoral, you need to have the justification that it is "documentation of a crime".
>How does it feel to know that in 2 weeks you will be voted out? What a loser you are. The People of NY hate you. After you lose the election, you should disappear forever. Go to Puerto Rico fix you abuelas roof and stay living there
Now, I personally think the racist trope of "go back to where you came from" is pretty obvious. I also think that the fact that I can find such comments so easily is fairly telling. But let's ignore that and just point out that people have literally been jailed for harassing AOC and sending her death threats.
So let's back up, you can make a judgement about the extent to which you value free speech, but you have to do that grounded in reality.
It's the threat of law enforcement that leads people who run websites to remove illegal content.
More generally (to, say, please advertisers), there is an expectation that sites are going to be proactive about removing offensive (or illegal) material. Simply responding on a "whack-a-mole" basis is not good enough. I ran a site that had something like 1-in-10,000 offensive (not illegal... but images of dead nazis, people with terrible tumors on their genitals, etc.) images, and that was not clean enough for AdSense. From the viewpoint of quality control, particularly the Deming viewpoint of statistical quality control, it is an absolute bear of a problem to find offensive images at that level -- and consider how many papers about some A.I. program present 70% accuracy as state of the art.
I don't think it's even anonymity, for some, indirect communication is enough: I once had a roommate who would leave unpleasant messages on the answering machine, but would be perfectly nice in person (on the same topic, even).
Spam may still leak into our inboxes today, but the level of user control over email spam is generally a stable equilibrium, and the level of outrage around spam filters (to be clear, there are arguments to be made that spam filters are increasingly biased) is much, MUCH lower than the outrage around platform "censorship".
That's a nice outcome, but it also leaves you vulnerable to outsiders deciding to ruin your sub by flooding it with discussions of table tennis or racism or arguments about moderation.
But you make a good point about the differences between the OP's proposal and Reddit.
They're both just tactics used to deceive.
The internet is this amazing tool for building knowledge, and we seem to be arguing about who is allowed to tell lies rather than collaborating on how to discover truth.
It's simple basic things like citing sources that need to become norms.
> A minimum viable product for moderation without censorship is for a platform to do exactly the same thing they’re doing now...but have an opt-in setting
There's one problem with that. Often times, the product itself is the moderated version.
Letting users use a product with moderation turned off would be giving them what they want, but it would not be giving them "the product".
---
For example, people want a space where they can talk about mechanical keyboards. So they go to https://old.reddit.com/r/mechanicalkeyboards/
Some people want to sell their custom mechanical keyboards. They think, "I know where a bunch of potential customers are, /r/mk/!"
Cue the sub getting flooded with sales posts, and the moderators banning such posts. Now they have rule #2 and moderate accordingly.
OP's solution is to castrate the moderators: instead of allowing them to remove posts, only allow them to hide posts. Then users who want the rule-breaking posts can simply toggle them back on.
But we already have a better solution: Just go somewhere else! Right there in the text of rule #2, there is a link to /r/mechmarket/, which is its own subreddit, moderated for the buying and selling of mechanical keyboards.
---
But, of course, that doesn't work with social media. There is only one Facebook. Only one Twitter. They are global namespaces. There is no room for traditional moderation. And that is its own problem.
It's hard to have meaningful conversation when every participant in your social circle, or even in the world is standing at their own soapbox. It's as useless as a daily company-wide meeting.
---
We don't just need to fight disinformation. We need to fight for information, for discussion. We need to show people who are busily engaged in identity politics that there are more interesting conversations to be had. Social media is a really poorly equipped space for those conversations, because it isn't moderated.
What is spam... exactly? Especially when it comes to a 'generalized' forum. I mean would talking about Kanye be spam or not? It's this way with all celebrities, talking about them increases engagement and drives new business.
Are influencers advertising?
Confidence systems commonly fail across large generalized populations with focused subpopulations. Said subpopulations tend to be adversely affected by moderation because their use of communication differs from the generalized form.
Spam filters are probably one of the single most consistently unreliable pieces of software I ever have to use, regardless of the email provider or email client.
I have to check my junk folder like it's my inbox.
On both Apple Mail and Outlook, with two different email addresses, email money transfers (EMTs) get shoved in my junk box, despite the dozens of times I have marked those emails as not junk.
I'll get spam emails, but I don't get mail from newsletters I've actually signed up for.
Like… if you're trying to use email spam filtering as an example of success, and even a model we should follow for… anything else, I'm going to laugh you out of the room and tell you to keep me the hell away from whatever tools you want to build with that technology.
Spam filtering software for email is at best useless and at worst mind-numbingly frustrating. It's a tool I'll never trust.
But, the whole point of a harassment campaign is to silence someone- to intimidate or bully them until they shut up. What is that if not censorship by other means?
What I'm saying is that sometimes one person's freedom of expression has to be limited in order to protect the freedom of expression of someone else. And there's a balance to be struck there.
But the point is: there is no neutral position here. If you refuse to make a decision about when free expression becomes unacceptable harassment, you are still making a decision: you are saying that the person with the worst behavior will be the only one whose voice can be heard.
The same thing happens in the outside world, of course: deregulation, for example, does not generally bring freedom, it only shifts the power to whoever has the most money. The principle is the same. You either collectively make a decision about what behavior is tolerable, or you allow the person with the biggest stick to make all the rules. There is no opting out of the decision.
https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you...
Now, "just add a button marked 'see banned content'" to each of these cases. How much easier did Elon's job get?
On its own it doesn't. If you need to recruit people to your cause, though, you need people to know you exist and that there's somewhere they can join.
> Giving up saving the planet for the goal of getting attention is fundamentally nihilistic.
Er, how are they giving up?
What they're doing is regularly shouting "Save the planet!" at people. Only this time they picked a weirder way to do it, because nobody was paying attention to the more normal ways they had to say it.
And protection of victim rights, I suppose.
The solution is for each participant to think critically of their beliefs. There is no way to make that happen. Moderation is the next best thing.
And also, whether they're justified or not, it doesn't really matter to the point. ACX's proposal =/= subreddits.
99% of Twitter is crap and always will be. You can't moderate that away. The question is how to make the 1% discoverable.
More and more often I find myself in conversations with people and suddenly I feel like I'm reading a reddit or twitter thread, and I'll see the conversation follow an exact path that I've seen before, only now in real life.
It's a really strange feeling as someone who has been reading people's opinions on the internet for what feels like forever, and just recently seeing those opinions show up in everyday conversations with people I've known all this time.
Edit: It's especially jarring when you see these people say things like "in my opinion" or "i think", because now I start to wonder "do you really? Or did you just see somebody else say that?" Not that all my thoughts are original, but I don't take credit for things like that.
"Reddit has a paid team called Anti-Evil Operations (part of the "Trust" & "Safety" team) which goes around permanently banning accounts for saying bad words. We made automod block them so you don't lose your account for saying a word and getting reported. It's not our rule, it's the entire website now, we're just trying to look out for our people. Sorry."
I have no doubt we can do better, if we actually tried to build social media with the right tools and incentives.
We could (and should) demand better.
It's got to be more like this.
You have to tell the ESG people that what matters about Exxon Mobil is: (1) they have to stop investing in producing oil that other people burn; (2) it wouldn't matter if they became a "net zero" company by pumping the CO₂ from their oil refineries into the ground and using synthetic fuels in their trucks; (3) it doesn't matter how many women they get on the board.
People who are concerned about climate change in the US should be concerned about institutional reform in the Democratic party. Namely, we shouldn't be in situations like
https://www.inquirer.com/politics/election/senate-debate-pen...
where a lunatic that could be beaten by a ham sandwich could win because the Democrats don't think that Pennsylvania deserves a senator who can verbally communicate effectively. (e.g. out of everybody in the state Philadelphia could get somebody in the top 1% of verbal communication skills as a Senator, why do they have to get somebody who is disabled?)
Which is why I’ve taken the view that the actual solution is in the antitrust space, and not the moderation regulation space.
The problem isn’t that Twitter, Facebook, etc moderate in a way that’s biased, it’s that no entity should be so powerful that their biased moderation becomes a problem for society as a whole.
it is harsh sure, but if that comment falls outside your bounds of "acceptable" speech then your Overton window is VERY VERY NARROW
Anime image boards are not in a hurry to expunge "lolicon" images because they don't face any consequence from having them.
I wouldn't blame Tumblr for banning ero images a few years back, because ero images of real people are a lot of trouble. You have child porn, revenge porn, etc. Pornography produced by professionals has documentation about provenance (every performer showed somebody their driver's license and birth certificate, and probably got issued a 1099); if this were applied to people posting images from the wild, they would say people's privacy is being violated.
If I don't want to see profanity, I should be able to set my filter to exclude profane comments. If I don't want to see nudity, I can set that filter too. Just like movies get a certain rating (G, PG, R, etc.), we should be able to properly label data.
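A sketch of what such label-based filtering might look like, with hypothetical labels standing in for the movie-style ratings (a real system would still need someone, or something, to assign the labels):

    # Sketch: user-side filtering on content labels, movie-rating style.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        labels: frozenset  # e.g. frozenset({"profanity"})

    def visible(posts, hidden_labels):
        """Keep posts carrying none of the labels the user opted out of."""
        return [p for p in posts if not (p.labels & hidden_labels)]

    feed = [
        Post("family-friendly take", frozenset()),
        Post("expletive-laden rant", frozenset({"profanity"})),
        Post("tasteful nude sculpture", frozenset({"nudity"})),
    ]
    # This user opted out of profanity and nudity, like choosing "G" only.
    print([p.text for p in visible(feed, frozenset({"profanity", "nudity"}))])

The filtering step itself is trivial; the contested part, as other comments in this thread point out, is who assigns the labels and by what criteria.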
No.
> Are influencers advertising?
That spam is advertising does not make all advertising spam.
We already have confidence-based spam filters for email, with user feedback involved too, so I don't need to define it; users of the service can define it for me.
(I didn't down vote him, BTW. His comment is relevant)
Again, I think you're under the impression that this particular event was supposed to be in some way Meaningful. Part of some grand strategy or a big movement or something. I'm telling you it's not.
As far as I can tell, https://juststopoil.org came into existence around February this year. They're just a small, new group formed around opposition to Big Oil that's trying to make some noise. This paintings thing is attempt #25, and it just happens to be weird enough to make the news, but not fundamentally different to the 24 that came before it.
In fact, they previously tried gluing themselves to a microphone at a news agency:
https://juststopoil.org/2022/04/03/just-stop-oil-supporter-g...
I see no indication that this is part of some grand strategy from the Democrats or something. No, it's just a small group doing a weird thing and getting news coverage because weird thing is weird.
Edit: And in fact, Just Stop Oil is UK based, so they have nothing to do with the US Democrats or Pennsylvania.
I didn't say to expect they'll behave like grownups in that they won't post anything immature, bad, etc. I said "treat people as grownups", that is, as capable of seeing something they don't like or find offensive or whatever. And if they're not capable, that's on them.
So, if a discussion becomes a flamewar with "thousands of posts", so be it. Members can always ignore it.
So, if the thousands of posts are from the same small number of people (over-posting) and others find them annoying, then they can individually choose to ban them, or snooze them, or not.
But if the thousands of posts are by thousands of members (and not bots), then why shouldn't they be left to continue to post and discuss this way, even if it's a flame war? They're having fun, and others can ignore or ban them.
Now, if they verbally abuse someone (e.g. threaten their life, dox them, and such), well, that could be moderated, and members who do that could be banned. The rest of the opinions, whether deemed controversial, unpopular, misinformation, or bullshit, can stay.
I don't care much about "Brandolini's law". Who is the arbiter of what's bullshit and why are they? The moderator? Well, that's tautological (they're arbiter of non-bullshit merely because they have the power to moderate).
This is an important point, I think. There's a generational aspect to this. Those of us who came of age prior to the internet (and especially social media) being ubiquitous don't really have an expectation that we're owed a forum where we can just say anything that's on our mind. As one of those olds, whenever I hear people complaining about "censorship" on whatever social media platform it kind of sounds entitled to my ears. We didn't expect to have a platform prior to about 2005 or so. We didn't have 'followers'. We discussed politics with a few friends in a bar over drinks. But now so many people seem to expect these private companies to provide them with a platform where they should be able to say whatever they want. Freedom of speech doesn't guarantee you a platform for that speech.
Automoderator rules took care of 90% of the spammy issues. Some things were obvious and chronic, and like porn vs. art, "I know it when I see it." Incivility was pretty easy to identify and call out, but there were a few people that toed the line and would seep toxicity into the sub rather than dump toxicity flamewar-style. They'd never do any one thing to get themselves banned, as they'd very carefully adhere to the letter of the law (sub rules). For a while there was a lot of SPAC hyping, to the point that I had to create a rule just for that. More on that later.
Topic-wise, most of what people viewed as toxic was the drama around Tesla vs. the rest of the industry. There would be people wrapping around the pole on EPA range, charging infrastructure, fit-and-finish, software features, sound isolation, handling characteristics, straight-line 0-60, buying experience, their opinion of Elon Musk, FSD, the existence of a steering wheel, etc. etc. People would often appeal to the mods to try to either take sides or tone down the heat.
Occasionally people would pop in with an opinion on hydrogen fuel cell vehicles, which to me seemed like a reasonable topic to discuss in a forum about alternative-energy propulsion systems for vehicles, and they'd get shouted out of the forum as not "EV" enough. (Or worse, "Hydrogen is a pipe dream that will never happen to shut up about it already!") Gatekeepers would insist that the only thing anyone could talk about was passenger vehicles with large battery packs, or maybe a picture of an electric bus every once in a while. Posts about electric bikes or boats would get ignored or called out as, "Not the right kind of vehicle for this sub."
Inevitably the mod queue would get filled up with reports for topics the gatekeepers didn't think were "on-topic" enough. This was the grey area where I had to make a call as a moderator. If a SPAC hype post got downvoted about as much as an electric bicycle post, by what criteria could I justify removing the SPAC post and letting votes decide what happens with the e-bike post?
My solution was to pop it up to a meta-conversation in the forum. "Let's talk about the rules. I've noticed posts with this characteristic or that. What would we think of disallowing posts like this and allowing posts like that?" There would be opinions on both sides, but ultimately I had to rely on my own judgment of "What's reasonable?" when making the final call on the rules.
Moderation of a public forum is very much a human problem, and there will always be corner cases. It reminded me a lot of what I learned in a graduate class I took on intellectual property law back when I was in school. There will always be a contour, and there will always be "test" cases that push and pull on the boundaries. Having rules ("laws") that set the groundwork for decisions is important. The process for establishing (and changing) those rules should be transparent and inclusive. No rule is going to have 100% support from all sides, but to build a system that works, we need to be able to agree on the process, respect the rules that we converge upon, and challenge rules in a civil manner that become obsolete as the world moves forward.
I also get a bit tired of looking someone up and finding "so-and-so says this person is <insert bad thing>", claims that usually stack up about as well as that SPLC claim against Maajid Nawaz[1] did.
Given this, I find it hard to see how they're doing better than the other companies you mention.
[1] https://en.wikipedia.org/wiki/Majid_Nawaz#Claim_by_Southern_...
The vast majority of comment removals are made to shape the conversation.
I think most people would be ok with letting admins remove illegal content, while allowing moderators shape content, as long as users could opt-in to seeing content the mods censored.
This is a win-win. If people don't want to see content they feel is offensive, they don't have to.
Let the user decide.
[1]:https://en.wikipedia.org/wiki/Go_back_to_where_you_came_from
[2]:https://en.wikipedia.org/wiki/Category:Racism_in_the_United_...
The hypothetical is too reductive to be helpful in making that decision. There are other datapoints and social framing that would be needed to answer your question. As it stands, it's like having one equation and ten unknowns and asking what the solution is - it depends.
Well, that's the "treat people as grown-ups" part. In that: treat them as if they can read something they disagree with for the "first time" and they won't melt.
Calling people slurs, making violent threats, etc. could always still be banned: the first time you do it, you're out, or three strikes, or similar.
That's unrelated to content (whether the content is controversial or some disagrees with the view, etc), and easy to implement and check.
>Advertisers are concerned about their ads
Sucks to be them, then! Advertisers shouldn't stifle speech.
Disney also didn't like to be associated with gay content, not that long ago. And all kind of partisan political views could be pushed for or against by advertisers. They should not have such a say.
In fact, I think they should not be allowed by law to be picky about placement in any forum of speech (magazines, social media, etc.) where they want their ads to appear.
Either they shun the medium altogether, or they buy slots that can appear whenever, alongside whatever. This way people also know it's not the advertiser's choice or responsibility to appear alongside post X, as they can only buy slots on the whole medium wholesale.
If it's helpful: this organization has in fact actively sabotaged oil infrastructure in the past to protest, and no one gave a single shit. They had a whole week back in August where they decommissioned several pumps. I think it's helpful, instead of asking "why don't they <obvious>", to assume someone has already tried it.
Note also how I mentioned people repeating low-effort arguments. The tedium comes from the stream of people who come, repeat someone else’s idea, aren’t prepared or willing to engage intellectually, and whine about censorship when nobody finds that compelling. Anyone who spends much time in a particular forum can recognize that and see that there’ll be very little value from engaging. We see that a lot here where people complain that HN is biased against cryptocurrency because the response to “have you accepted our lord and savior bitcoin into your heart?” was not well received by people who remember the exact same claims being made a decade ago.
Nowhere did the person say "Go back where you came from". Can you read it that way if you want to? Sure. But I can (and do) read it in other ways.
This is part of the problem where everyone sees every comment as a "dog whistle" for racism. Sorry, I reject that reading.
Which inspires another really weird, super uncomfortable thought. If the CSAM producers had cheap, reliable methods of creating their awful content without the use of real people, would that reduce the harm done?
I can't remember the last time I felt so conflicted just asking a question, but there it is.
These are global platforms with global membership, simply stating that “if it is free speech in America it should be allowed” isn’t a workable concept.
People don't really have "conversations" on social media. Part of the activity is posting updates about oneself, but the far bigger part is screaming at each other and playing engagement games.
The goal of all this activity is not to debate, converse or exchange information. The goal is to win by being maximally controversial, as that's the behavior that is rewarded.
As such, Twitter is the opposite of real life. If you talked and behaved in the real world the way people do on Twitter, you'd be ousted in a day, or might even wake up in the hospital.
In a dynamic where bad faith is the default, you can't apply good faith principles.
It's massively complex to address. On the one hand, you have almost no accountability regarding your speech on Twitter, yet incidentally too much: mob attacks / cancel culture. Too free and too restricted at once.
Personally, I think what you can and cannot say is a massive distraction from the real issue: what gets amplified. Reasonable conversation is pointless and hot takes win. It should be the opposite, just like in real life.
You already get people citing things they clearly haven't read, but again, that's still better than not even citing something as it gives a basis to work towards the truth.
I get that no machine-learning model is 100% perfect, which is why it should be used as an indicator rather than the deciding factor.
I have had issues with Gmail blocking emails, but as you point out it was always because of IP reputation, not overzealous naive Bayes.
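One way to use the score as an indicator rather than the deciding factor is to route by confidence band instead of hard-blocking. A rough sketch, with entirely made-up thresholds:

    # Hypothetical: the classifier's score routes mail, it never deletes it.
    def route_email(spam_score: float) -> str:
        """spam_score: a probability in [0, 1] from any classifier."""
        if spam_score > 0.99:
            return "spam_folder"    # still visible and recoverable by the user
        if spam_score > 0.80:
            return "inbox_flagged"  # delivered, but marked as suspicious
        return "inbox"              # delivered normally

    assert route_email(0.50) == "inbox"
    assert route_email(0.995) == "spam_folder"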
[1] https://demos.co.uk/press-release/staggering-scale-of-social...
It wasn't supposed to be that way. Even the Reddiquette page told people not to downvote simply because they disagree. But nobody reads Reddiquette, and these days most redditors think disagreement is the purpose of downvotes.
That being said, you'd have to be naive to think downvoting for disagreement doesn't happen on HN.
> post throttling
This is only a thing for new accounts as an anti-spam measure.
> over zealous moderators banning people for wrongthink
I think it's wrong to blame reddit for this. This will be a problem on ANY site that allows users to create their own communities within it.
The key point the author of the article makes is the difference between moderation and censorship: you can opt-in to see moderated content, but you're unilaterally prevented from seeing censored content.
What Reddit does (removing posts, comments, banning accounts) falls under the definition of censorship here -- within the platform itself, obviously.
Coming back to computer physics, simply put we don't have access to unlimited energy and storage space. I can generate trash faster than you can install servers to keep it, and much faster than anyone can afford to pay for the space. Companies that do not control spam simply go out of business, industrial Darwinism.
You can ignore physics as much as you want, but it's not ignoring you.
So is trying to destroy cultural heritage. I see no qualitative difference between trying to deface a Vermeer and blowing up the Afghan Stone Buddhas.
There have been a handful (fewer than 10) extreme cases over the years where we temporarily blocked someone from posting, but that almost never happens—it's an emergency measure.
For anyone wondering why we allow certain banned accounts to keep posting, even though what they post is so dreadful, the answer is that if we didn't, they would just create a new account and that new account would start off unbanned, which would be a step backwards.
You can. But you'll still have people screaming about how they were actually silenced for their political views. Which is exactly the situation we have today.
Now, like electricity and water, it's become so fundamentally entwined with modern living that folks see it (maybe rightfully) as a common right.
edit: I'm not sure it's generational as much - the folks complaining about it the loudest seem to be older, non-technical folks.
>Go to Puerto Rico fix you abuelas roof and stay living there
I don't think it's a leap to relate that to "go back where you came from", and I don't think other people think that either - since the Wikipedia page literally cites, as an example of this racist trope, Donald Trump saying
>Why don't they go back and help fix the totally broken and crime infested places from which they came...
Who did he say that in reference to? AOC.
So it's not really a leap, is it? But what I'm interested in is that you claim to read a different meaning into this. So be specific: what do you think the person posting in reply to AOC meant by that?
It isn't one time. You get a "first time" with each new harasser. It becomes a regular occurrence that when you open your inbox somebody is there shitting on you.
> Calling people slurs or violent threats etc could always still be banned - first time you do it, you're out, or three strikes, or similar.
Why? The whole point of the idea is that people don't get banned. Returning to "well, sufficiently bad users will be banned" is just returning to the state today with people completely disagreeing about what "sufficiently bad users" means.
> Sucks to be them, then! Advertisers shouldn't stifle speech.
The "public forums" (twitter, youtube, facebook) are all ad supported. Without advertisers those products simply die.
Mass taggers have historically been abused to ban or shadow-ban users who've posted in "bad" subreddits.
If you argued with someone in r/The_Donald, you'd magically be unable to participate in a large swath of unrelated communities. Trying to appeal the bans would often result in being permanently muted, or receiving a snarky response from the mods saying it's your fault for engaging in said "bad" communities.
Yeah this worked 20+ years ago but it doesn't work now.
There are a small handful of monopolies of places to go on the internet. You are basically suggesting "well make your own forum and then you can control it!". Come on, you know that doesn't make any sense in this day and age.
>Most truly smart people I met, were often rather prickly characters more concerned with data than being popular. ;)
I've also found people who think they are smart but really don't know what they are talking about because they are living in a complete bubble and ignoring reality tend to end their posts with smiley faces.
It's because there are assholes everywhere. They are small in number, but they are pretty evenly spread throughout the population. Regardless of ethnicity, socio-economic status, age or any other demographic detail, they are everywhere.
And they always have been, and likely always will be.
I suppose that social media dynamic allows them to disproportionately visit their douchebaggery on the rest of us, but that's not "Eternal September." That's just humanity.
The company that provides the service defines the moderation because the company pays for the servers. If you start posting 'bullshit' that doesn't eventually pay for the servers and/or drives users away, money will be the moderator. There are no magic free servers out there in the world capable of unlimited space and processing power.
It doesn't feel like it's fundamentally entwined like electricity or water - It would be tough to live without electricity or water. But I live just fine without social media - in fact, I think my quality of life has gone up after deleting my twitter account back in May. And to a large degree, I think we're worse off as a society than we were prior to the emergence of social media.
I'd be willing to bet that if you could somehow run an experiment in parallel where you had one Reddit with real bans, and one with soft bans, the quality and nature of interactions on the soft ban one would be much, much worse even outside of banned communities.
Which is that it's decreed by a government or similar institution, and that it is enforced by law? That it's about suppressing ideas/material in a whole society?
Whatever privately run sites/newspapers/organizations do, it's other things -- editorial policy, moderation, curation. But by definition it's not censorship. In the same way that having a crappy job isn't slavery, and a serial killer isn't committing genocide. Privately banning certain content on a single site, or newspaper, is editorial policy, end of story.
Words mean things, and a lot of people have fought long and hard against actual censorship, such as the Comstock laws. Let's not cheapen freedom from censorship by turning it into "but I want to say whatever I want anywhere to anyone who wants to listen!", which is what OP is proposing. Freedom from censorship is about having the right to speak -- but it's not, and never has been, about making anyone else give you a platform for it. And this distinction is vitally important.
When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads?
But I'm starting to get his point about moderation being based on repetitive behaviour, and not content. I wonder if that's why he keeps interrupting himself about his trees.
(For the record, I appreciate his trees project; don't see this as an attack on that.)
> Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient".[2][3][4] Censorship can be conducted by governments,[5] private institutions and other controlling bodies
I can paste more, but if you just google “define censorship” I don’t think there’s a result on the first page that supports your claim.
I'll give up some of my freedom to limit everyone else's freedom every day of the week - the only concern I have is roughly the IQ of the people doing the limiting.
This dichotomy (free speech for all, but no one is required to offer a platform) works in liberal societies because you have a diversity of publishers.
Of course. That is what they've demanded, so that is what they get.
> "When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. "
On the contrary: You must have this. As a matter of law. There is no alternative, other than withdrawing from those countries entirely and ignoring the issue of people accessing your site anyway (which is what happens in certain extreme situations, states under sanctions, etc)
> " It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads? "
Here are the options:
1) Do not do business in those countries.
2) Provide different services for those countries to reflect their legal requirements.
There is no way to provide a globally consistent experience because laws are often in mutual conflict (one state will for example prohibit discussion of homosexuality and another state will prohibit discriminating on the basis of sexual preference)
I may be unique in this regard, but I am aware of the fact that sometimes I make mistakes, and I don't highly value all of my conversation; sometimes I just rant, or enjoy engaging in more superfluous conversation. Sometimes the best conversations aren't "highly valuable"!
This is also the approach celebrities in general need to take, as they get drowned in messages (Elon Musk could spend 24/7 reading messages sent to him and still read only a tiny fraction), so harassment can probably be solved by whatever solution we come up with for celebs.
You can make some fairly elaborate "allow" rules depending on why you might want to read messages from non-contacts, like "this person is a contact of a contact", "this person has 'IT' in their Twitter bio", or "this person is on the 'good person' whitelist that Mr. Whitelist maintains for the community".
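Rules like these compose naturally as predicates. A hypothetical sketch (the data and field names are invented for illustration):

    # Hypothetical allow-rules for messages from non-contacts.
    contacts = {"alice", "bob"}
    contacts_of_contacts = {"carol"}    # precomputed elsewhere
    community_whitelist = {"dave"}      # maintained by "Mr. Whitelist"
    bios = {"carol": "IT consultant", "eve": "crypto guru"}

    ALLOW_RULES = [
        lambda sender: sender in contacts,
        lambda sender: sender in contacts_of_contacts,
        lambda sender: "IT" in bios.get(sender, ""),
        lambda sender: sender in community_whitelist,
    ]

    def accept_message(sender: str) -> bool:
        # Accept if any one rule lets the sender through.
        return any(rule(sender) for rule in ALLOW_RULES)

    assert accept_message("carol")      # contact of a contact
    assert not accept_message("eve")    # matches no rule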
Nobody is "telling you how to talk." People are free to choose their terms for voluntary social interactions. You don't have a right to inflict yourself on others who wish not to interact with you.
Reddit, itself, is, or at least used to be, a variety of diverse communities. I don't care about either /r/FatPeopleHate or /r/FatPeopleLove; I don't consider myself a part of those communities. I subscribe to subreddits I want to track, and I am not a member of the ones that I don't.
Surely people on the disagreeable side of the psychological spectrum will gravitate towards some communities and people on the opposite side of that spectrum will gravitate towards other communities. Some communities are cross-cutting, and so have to be moderated in a different way altogether (which Reddit already accommodates). Other than that, communities have their own social protocols. Creating blanket rules / bans / restrictions across communities restricts the organic nature of human interaction and hamstrings it in a rather depressing way.
Many problems arise in the battle for "front page" or "trending" screens that try to blend content from multiple communities and invite competition or "raids" or what have you from opposing sides. Personally I hate such things, I have no desire to be manipulated by them and use browser extensions to block them. But given that they exist, it's again mainly a moderation / preferences problem. Give the control to the user over what they want to see.
The most interesting number is the 1300 submissions because that hasn't grown since 2011 - it just fluctuates. Everything else has been growing more or less linearly for a long time, which is how we like it.
This does not stop the FBI from being a major child porn distributor, even though, under this rubric, that means the FBI is re-abusing thousands of victims.
If they started removing low-emotion information and discussions that just didn't fit the Bogleheads philosophy, I think that would cross the line into censorship.
Anyway, it's clear that the Bogleheads forum model is the polar opposite of where Facebook, Twitter, and Reddit have gone to suck in the masses and increase their engagement by highlighting the most heated stuff and throwing gasoline onto the fire with likes, votes, and retweets. I think the mainstream social media companies have put themselves into a bind with this.
I don't think this is the correct distinction he is making. He defines moderation as a receiver being able to choose whether they want to see certain content. He defines censorship as a third-party deciding if a receiver can see certain content whether or not they want it. McCarthyism would be censorship under that definition.
This is basically what he describes in the article as a form of moderation: "If you wanted to get fancy, you could have a bunch of filters - harassing content, sexually explicit content, conspiracy theories - and let people toggle which ones they wanted to see vs. avoid."
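That "bunch of filters" idea is mechanically simple. A hypothetical sketch, assuming posts already carry category labels (the hard part, which this glosses over):

    # Hypothetical per-user category toggles, per the article's description.
    DEFAULT_HIDDEN = {"harassment", "sexually_explicit", "conspiracy"}

    def feed_for(posts, hidden=DEFAULT_HIDDEN):
        """posts: iterable of (text, labels) pairs, labels being a set."""
        return [text for text, labels in posts if not (labels & hidden)]

    posts = [("hello world", set()),
             ("the moon landing was staged", {"conspiracy"})]
    assert feed_for(posts) == ["hello world"]
    # A user who toggles conspiracy content back on sees both:
    assert len(feed_for(posts, DEFAULT_HIDDEN - {"conspiracy"})) == 2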
Which is neither here nor there, as the stakes in a discussion forum or medium are not "survival". Nor is the danger from something you don't like (or tons of such things) life-threatening.
>Coming back to computer physics, simply put we don't have access to unlimited energy and storage space. I can generate trash faster than you can install servers to keep it,
Again, neither here nor there. That is about spam, our subject is moderation. Gmail, for example, also has spam filters, but we don't consider it moderation...
The whole point of whose idea? I'm discussing the subject of moderation, as in, not being moderated or banned for content.
Not the subject of not being banned for anything, ever. That is, spam, bots, personal threats, cp, could always be banned, and I'd be fine with it.
>is just returning to the state today with people completely disagreeing about what "sufficiently bad users" means.
The disagreement occurs because this is based on beliefs and ideas - banning this idea or that idea, based on ideology, partisanship, etc.
If instead the banning were based solely on the type of content (e.g. no spam, threats, cp, automated mass posting), there would be infinitely less room for disagreement. Something either is spam or it is not. It's either a threat or not. CP or not - and most people can agree on that.
Even if not everybody agrees on whether X is spam ("I think it's good, because it informs us about a product we didn't know about"), that disagreement is much, much smaller than people disagreeing on what's a bad take on politics, or "disinformation", or such - and it allows much freer speech.
>The "public forums" (twitter, youtube, facebook) are all ad supported. Without advertisers those products simply die.
That's a bonus!
This is sort of talking around an argument. You could say the same thing about a subreddit dedicated to re-electing a local alderman because of his policy on the maintenance of public parks. Speech is meant to inform, or to effect change.
The question is whether you're going to use an online annoyance argument to moderate controversy on a platform. If the justification for why you're going to moderate speech is that people who are not annoyed by that speech might react to it, you've moved squarely into making "genuine arguments for true censorship: that is, for blocking speech that both sides want to hear."
I mean, as discussion forums commonly dox people, or brigade and convince members to go kill people IRL, I really think maybe you're incorrect.
>Gmail, for example, also has spam filters, but we don't consider it moderation...
We whom? This has been debated on HN for as long as HN existed. Most would consider it moderation, but seemingly as a whole we have given up the battle, as spammers are a plague of locusts that will consume all.
>Again, neither here nor there.
Handwaves away physics - a good way to ignore the technical reality of the situation here.
> The goal of all this activity is not to debate, converse or exchange information. The goal is to win by being maximally controversial, as that's the behavior that is rewarded.
> the real issue: what gets amplified
But it can be (at least partially) fixed if you change the optimization function.
I advocate for that here: https://www.belfercenter.org/publication/bridging-based-rank...
And Twitter's Birdwatch (that Elon recently got all excited about when it fact checked the White House: https://twitter.com/metaviv/status/1587884806020415491) actually does this "bridging-based ranking" for adding context on tweets.
Here's the paper with details on how it works for Birdwatch: https://github.com/twitter/birdwatch/blob/main/birdwatch_pap... (you can also check out the source code in that repo).
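This is not the actual Birdwatch algorithm (the paper describes matrix factorization over rater and note data), but the core intuition of bridging-based ranking can be sketched in a few lines; all names and numbers below are invented:

    # A note is scored by its *worst* average rating across viewpoint
    # clusters, so only notes that bridge clusters rank well.
    def bridging_score(ratings_by_cluster: dict) -> float:
        averages = [sum(r) / len(r)
                    for r in ratings_by_cluster.values() if r]
        return min(averages) if averages else 0.0

    partisan_note = {"left": [1.0, 0.9], "right": [0.1, 0.0]}
    bridging_note = {"left": [0.8, 0.7], "right": [0.7, 0.9]}
    assert bridging_score(bridging_note) > bridging_score(partisan_note)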
My point is that the actions he categorizes as "moderation" are in fact not sufficient to achieve the goal. Thus, even a platform that is purely concerned with providing a service will need to undertake actions he categorizes as "censorship" (or would at least have to come up with some unknown new system of moderation, since the one he proposes is insufficient).
In my experience, this depends on the community size.
In a small community, "platforming" assholes (rather than "deplatforming" them) may act to retain the assholes as users (where they would otherwise leave for lack of platform); and then those asshole-users, since they're already there, may also interact in other, non-quarantined subforums on the site — to other users' detriment.
In a large community (society, really), e.g. Reddit or Twitter, the asshole-users are going to stick around either way, since people have multiple interests, and the site likely already gives them many other things they want besides just "a platform to talk about their asshole opinions on." They weren't there primarily to be assholes; they just are assholes, and are going to stick around either way.
So, for large sites, the only real decision you're making by quarantining vs banning a certain sub-community that's full of assholes (rather than doing active moderation of certain discussion topics, regardless of where they occur) is whether the asshole-users' conversations mostly end up occurring in the quarantined forum, or spread out across the rest of the site where they can't be hidden.
It's a bit like prostitution regulation. Prostitution is going to happen in a city no matter what; it's just a question of whether such activity is "legible" or "illegible" to city government. Some cities choose to have an explicitly-designated red-light district and licensing for sex workers; these cities at least ensure that any activity associated with prostitution — e.g. human trafficking, gang violence, etc — occurs mostly within that district, where police presence can be focused. Most cities, though, choose to "protect their image" by having no such district. This option does not result in less prostitution; it only hides it throughout the city, making crimes related to sex work much less likely to be reported, and much more difficult to investigate.
So use legislation to tackle that problem. You're admitting it's not really a free speech issue.
More transparent systems with less suppression or banning are clearly possible, but commercial entities don't want to hold themselves to strict rules which is why they keep the rules and processes opaque. This same trend is seen in both social media and app stores.
What you can enforce is "so and so says it is illegal" (accurate 90% or 99% or 99.9% of the time but not 100%) or some boundary that is so far away from illegal that you never have to use the ultimate truth procedure. The same approach works against civil lawsuits, boycotts and other pressure which can be brought to bear.
I think of a certain anime image board - one with content so offensive it can't even host ads for porn - that stopped taking images of cosplayers or any real-life people, because that eliminated moderation problems that would otherwise be difficult.
There is also spam (should spam filters for email be banned because they violate the free speech of spammers?) and other forms of disingenuous communication. When you confront a troll, inevitably they will make false comparisons (e.g. that banning Kiwi Farms is like banning talk to the effect that trans women could damage the legitimacy of women's sports, just when people are starting to watch women's sports).
On top of that there are other parties involved. That anime site I mentioned above has no ads and runs at very low cost, but it has sustainability problems because it used to sell memberships and got cut off by payment providers. You might be happy to read something many find offensive, but an advertiser might not want to be seen next to it. The platform might want to do something charitable, but hosting offensive talk isn't it.
I can see a social media CEO saying "well, I believe in free expression and I would otherwise want to allow this message, but I'm worried about what the government will do to me if I do allow it. So instead I will block it."
When a platform blocks some content, some call it "censorship", while others say "hey, it's a private company and they can block what they want to. It's called 'moderation'" - but this may in fact not be something the platform wants to block - it's indirect government censorship.
I wonder how Elon will handle this. Will he be cowed by the government into censoring content they don't like? Or will he ignore them and take his chances?
When asked why, his first reason is that he uses Substack and they don't offer this as a feature, and that when he was on his own self-hosted site, he didn't have the technical skill to implement it himself. But then he says that even if he could, he wouldn't, because he wants his community to reflect a certain ethos and character that creates a community he actually wants to be a part of.
How he doesn't see the contradiction here, I don't know. But this gets at the core of the issue: virtually nobody actually wants a free-for-all, even one that is opt-in. A blogger with a devoted following is allowed the freedom to cultivate his garden, as he has described it in the past. But when a platform gets as big as Twitter, the larger public starts to feel like it's their community and they should get to decide, not the owners - or that it's so big as to be this "de facto public square" people keep calling it, and now it has to follow the same rules as a government, even though it is a private platform owned by people with their own preferences for what the character of the platform should be.
The only fairly large platform I can think of that really did adopt an "anything that won't get us shut down by the FBI is fair game" policy is 4chan. But if everyone who is so hung up on Twitter being too restrictive is mad about Twitter's policies, 4chan still exists. Why not just go there? You can't even say it doesn't have reach: there are plenty of users there, and meaningful real-world movements have started there. The only thing you lose is that the real news doesn't follow and write about 4chan nearly as much as they do Twitter.
Probably were, since as far as I can tell it's a stunt with no real intent to destroy anything.
> and at least in one case glued themselves to a 16th century picture frame, itself a priceless cultural artefact.
That's definitely not good.
That's what makes it illegal? What if it's done on a private forum that the victim never finds out about? What if the victim is, say, dead? I don't think those change the legality.
Yeah, but I covered that: "Now, if they verbally abuse someone though (e.g. threaten their life, dox them, and such), well, that could be moderated and members who do that could be banned. The rest of opinion, whether deemed controversial, unpopular, misinformation, or bullshit, can stay."
>We whom? This has been debated on HN for as long as HN existed. Most would consider it moderation
Has it? I've been here almost as long as HN has existed, and I don't remember this being debated. It might have come up a couple of times in 15-plus years, but it's not some common HN discussion.
I also doubt "most" would consider spam the same issue as the kind of moderation we're talking about, or that enough people think it's the same kind of thing as moderation of ideas and opinions. In fact, I'd go so far as to say that people who care about free speech still want spam filters - and don't view this as contradictory.
Well... It's a lot uglier of an issue than you state.
https://www.eff.org/deeplinks/2021/02/can-government-officia...
Before getting too caught up in off-topic straw men, you may want to look at why 70k users joined Mastodon in the past 7 days.
Have a fantastic day, and here is your up-vote... and a Commodore logo since smiles may upset you. (=
Nice. ;)
When the things you think are banworthy get banned then you are fine with it, yes. Upthread you listed slurs as one of these reasons. A large number of people complaining about "censorship" do not think that using slurs or even calling people slurs is banworthy. So you'll run into that problem.
We already see people complaining about bans "based on the type of content." The idea that somehow other kinds of moderation are the problem and that if we only stop that kind then everybody will be happy is simply not based in fact.
Maybe moderation is part of that, but I’d argue the subject matter is already generally less polarizing/toxic than what’s on the other three platforms.
But your point is still valid re: how Bogleheads does moderation.
If your concern is with the labels themselves being used to convey a (possibly offensive) message, I think you could just have a way for people to hide specific labels and never see them again. Or maybe a way to label the labels as subjective, or just delete ones that are obvious flamebait.
This part is not correct. Private companies block what they believe to be illegal activities in their systems constantly - in order to limit the legal liability of being an accomplice to a crime. This is the case in all industries - and is standard practice from banking, to travel, to hotels, to retail... it's commonplace for companies to block services.
For spam, I would recommend that it gets a separate filter-flag allowing users to toggle it and see spam content, separately toggled from moderated content.
What's obviously very hard is allowing discussion of politics, macroeconomics, religion, race, etc. without it getting heated. Bogleheads doesn't even try.
In fact there is already some amount of legislation on this, for example in the EU.
It's unhealthy to just throw every difficult problem at courts; the legal system is clumsy, unresponsive, and often tends to go to unwanted extremes due to a combination of technical ignorance, social frustration, and useless theatrics.
You can't come to my house, sit on my couch, eat my food and watch my television unless I say you can. If you try to do so without my permission, I'm within my rights to throw you out and, if you resist, use force to remove you.
How is usage of a private company's private server resources any different?
People love to misuse tools meant for good, on Reddit I've been on the receiving end of the "reddit cares" self-harm notification because of some barely spicy comments.
Instead of just leaving it to users or admins to block users or servers they didn’t want to deal with, a large subset decided to block for anyone who didn’t block certain other servers.
It is, in general, really really difficult to pass speech laws in the USA because of that pesky First Amendment -- even if they're documentation of a crime. Famously, Joshua Moon of Kiwi Farms gleefully hosted the footage from the Christchurch shooting even when the actual Kiwis demanded its removal.
But if you can argue that procurement or distribution of the original material perpetuates the original crime, that is, if it constitutes criminal activity beyond speech -- then you can justify criminalizing such procurement or distribution. It's flimsy (and that makes it prone to potentially being overturned by some madlad Supreme Court in the future with zero fucks to give about the social blowbacks), but it does the job.
In other countries it's easy to pass laws banning speech based on its potential for ill social effects. Nazi propaganda and lolicon manga are criminalized in other countries, but still legal in the USA because they're victimless.
If this makes you wonder whether it's time to re-evaluate the First Amendment -- yes. Yes, it is.
There were some grassroots efforts around 2015 to make the mod log public and transparent (so it'd say what was removed, by who, and optionally why), but it was unfortunately opt-in and never gained large adoption.
It's a common tactic to use citations to get the person you are arguing with to walk in circles. It's a war of attrition: eventually the other party gives up on deconstructing and criticizing your citations, and you claim victory. This is closely related to the "ball is in your court" fallacy.
But if both parties are actually invested in critical thought, citations can be an opportunity instead of a roadblock. That still requires the effort of everyone involved.
Regardless of your personal stance on this issue, the mass of consumers were fine with it, and therefore demand has largely dictated what's reasonably available to your average citizen: phones with difficult-to-swap batteries. There are still some options, but they're scarce and involve tradeoffs relative to most flagship phones.
And that's with something less (yet increasingly) significant: a smartphone. When your product is information, such as books, or education, such as in universities, suddenly you have markets and the masses dictating what's available and indirectly censoring content. Heck, it happens in science all the time these days: there are dominant groups and names in every field who hold significant sway; they often influence funding agencies and ultimately where scientists can viably perform research (unless they can self-fund their work).
But yes, markets can and do censor based on demand. I don't blame a small bookstore owner in this context for censoring; it's a systemic issue they have little power over. As you point out, they have to pick and choose based on demand signals - they're running a business, after all. Yet if the information can only be obtained through bookstores, suddenly markets are indirectly dictating to bookstores, who in turn dictate to consumers, what information is available.
Sometimes we want this effect - we want markets to help us pressure certain products or solutions and bubble them up to the top. Other times we might not want this (as with free speech and the flow of information).
Do you think that’s what we’re discussing here? Fascinating.
If I had a tool that could (at least attempt to) filter out anti-semitism or Holocaust denial, then Germany could have that set to "on" to comply with the law. I'm all for democracies deciding what laws they want.
[1] https://www.reuters.com/article/us-germany-hatecrime-idUSKBN...
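Mechanically, that's just a per-jurisdiction layer of mandatory filters on top of the user's own toggles. A hypothetical sketch (the country codes and category names are illustrative, not a statement of what any law actually requires):

    # Hypothetical: jurisdictions force certain filters on; users add the rest.
    LEGALLY_REQUIRED = {
        "DE": {"holocaust_denial", "nazi_symbols"},
        "US": set(),
    }

    def effective_filters(country: str, user_hidden: set) -> set:
        # Users may add filters but cannot remove legally required ones.
        return LEGALLY_REQUIRED.get(country, set()) | user_hidden

    assert "holocaust_denial" in effective_filters("DE", set())
    assert effective_filters("US", {"spam"}) == {"spam"}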
> Laws don't enforce themselves.
What has that got to do with Twitter? Please try to stay on track.
What "godlike power" are you referring to? The ability to moderate what turns up in your own social media feed? The ability to respond to comments someone else has deemed to break rules. I would hope for a bit more than that for godlike power.
Who would be put upon by this? The average user doesn't have to be, they could use the default settings which are very anodyne. The rest of us get what we want, that's what the article stated. Who's finding this a burden?
As to the reality of things, Twitter's just been bought for billions and there's plenty of bullshit being posted there. That's the reality, and several people who've made a lot of money by working out how to balance value and costs think it can do better.
If you push that argument to the extreme, why should anyone be allowed to publish anything in our country? They can always go someplace else and speak their mind, but we don't want it around here.
What I'm saying is that there exists a gradient of power between the dictatorial (state censorship) and the inconsequential (your couch), and we have a social contract that allows the same suppression of speech in the latter but disallows it in the former. We call the consequential type "censorship", but it's the same basic action; there is a gray area where private agents working under the authority of the state can be just as powerful censors as the state itself.
For example, if private banks deny services to a newspaper hostile to the government, I could take your argument and spin it around: they are "free" to publish out of their own pockets, and private banks are not forced to "offer a platform". But we clearly understand that the publication will lose advertisers and fail commercially, and the interests behind the banking ban will have succeeded in suppressing free speech - suggesting they distribute leaflets will not help.
The cut-off point where private suppression becomes consequential censorship is, in my opinion, when the gatekeepers of speech are centralized and oligopolistic - like, for example, social media, and unlike, for example, traditional print media. A single outlet refusing to publish something is perfectly fine, as long as others exist and have reasonably similar access to distribution channels. With the death of traditional print media and the highly concentrated nature of the visual media and internet space, this is less and less the case.
You can of course publish on your blog with zero traffic, but you are effectively shut out of the relevant distribution channels, you are effectively distributing leaflets in your corner of the street.
You make it sound like I'd just said something is "fine" because it's to my taste - no matter how bad it might be otherwise. I think the snark is a little misplaced, though, as one could say exactly the same if they proposed an (objectively) good or perfect or best-compromise solution.
So what matters is whether it's actually that: a good solution. Not whether it's to the taste of the person proposing it (which, any solution would always be). So at best the snark above is based on a truism/tautology.
>A large number of people complaining about "censorship" do not think that using slurs or even calling people slurs is banworthy. So you'll run into that problem.
Here's the thing: I'm not sure it's that big of a number of people. I'm also pretty sure "a number of people" also think spam, cp, violent threats are not banworthy, but I don't think it's "a large number" either.
Which is why I think banning slurs, spam, and other such things is OK, and doesn't have to do with freedom of speech - you can still express the same ideas, even the most unpopular and controversial ones, without slurs, spam, cp, and so on.
>We already see people complaining about bans "based on the type of content."
Some people will complain about anything and everything - I'm sure that some are even against the invention of fire or in favor of farting in elevators. Satisfying everyone can't ever be the measure of a good proposal.
The best solution is a good compromise that doesn't hurt the core issue of free speech, and not only doesn't stifle discussion but even helps it (e.g. you can't have free speech if you get death threats for it, as people will be afraid to speak - so banning "violent threats" content makes sense. Similarly, you can't have free speech if the forum is filled with advertising spam and penis-enlargement and get-rich-quick ads. So banning spam will help the discussion, not stifle it).
That's something that I think is seriously wrong with the USA right now: the idea of an "arrest record", or at least the idea of it being accessible by anyone other than the police.
There are a number of situations where it is perfectly reasonable to arrest innocent people, then drop all charges. Let's say the cops arrive at a crime scene; there's a man on the ground lying in a pool of blood, and another man standing with a smoking gun holstered at his hip. Surely it would be reasonable to arrest the man that's still standing and confiscate his gun, at least for the time necessary to establish the facts?
But then, once all charges have been dropped (say the dead guy had a knife and witnesses identify him as the aggressor), that arrest should be seen as nothing more than either a mistake or a necessary precaution. It's none of a potential employer's business. In fact, I'd go as far as making it illegal to even ask for arrest records, or to discriminate on that basis.
Criminal records of course are another matter.
As far as I understand in the EU private information is part of the self. Thus, manipulating, exchanging, dealing with private information without the person's consent is by itself a kind of aggression or violation of their rights. Even if the person never finds out.
In the USA however private information is an asset. The aggression or violation of right only happens when it actually damages the victim's finances. So if the victim never finds out about discussions happening somewhere else in the world, well… no harm done I guess?
Both views are a little extreme in my opinion, but the correct view (rights are only violated once the victim's own life has been affected in some way) is next to impossible to establish: in many cases the chain of events that can eventually affect a person's life is impossible to trace. Because of that I tend to suggest caution, and lean towards the EU side of the issue.
Especially if it's the documentation of a crime as heinous as child abuse.
I submit that spam filters should be under the sole control of their end users. If I'm using a Yahoo or Gmail account (I'm not), I should have the option to disable the spam filter entirely, or to use only personal parameters trained on the mail I alone received; and no email should ever be summarily blackholed without letting me know in some way. If an email bounces, the sender should know. If it's just filtered, it should be in the recipient's spam folder.
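That proposal boils down to a couple of user-owned settings plus one invariant: nothing silently disappears. A hypothetical sketch (no real mail provider's API is implied):

    # Hypothetical user-owned spam preferences.
    from dataclasses import dataclass

    @dataclass
    class SpamPrefs:
        enabled: bool = True               # the user may turn filtering off
        personal_model_only: bool = False  # train only on my own mail

    def deliver(email: str, prefs: SpamPrefs, classify) -> str:
        if not prefs.enabled:
            return "inbox"
        # Never blackhole: the worst case is the spam folder,
        # where the recipient can still see and rescue the message.
        return "spam_folder" if classify(email) else "inbox"

    assert deliver("buy pills", SpamPrefs(enabled=False), lambda e: True) == "inbox"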
Exactly. And no one. Zero people. No person or entity is required to provide you with a platform, megaphone or audience.
That doesn't stop you from saying what you want. Freedom to express yourself does not entitle you to an audience. Full stop.
Edit: To clarify, your free speech rights do not trump my free speech (which includes not hosting your speech on my private property). Yes, today's social media has inordinate influence due to network effects. But those are for-profit corporations who owe you nothing.
Get that through your head. They owe you nothing.
Personally, I despise those corporations. And I voted with my feet and wallet and left nearly a decade ago. But my distaste for them and their business models doesn't trump their free speech and property rights. Nor should it.
Your rights do not supersede those of others, except on your private property. Facebook's (or Twitter or YouTube, etc., etc., etc,.) servers are their private property.
Want a public square? Then set up a public square. Those corporate, for-profit entities are not that.
Regardless of moderation or censorship, a public square would/should operate quite differently.
If you want to be abrasive on tangents, you can definitely be that guy, but that was not the subject - the subject was the nature of censorship. You completely disregard my example of illegitimate private censorship by the banks, intended to point out the problem, only to forcefully restate an extremist ideological position that doesn't really function in any true society. OK...
"Specific Person is a real jerk! And they probably eat live frogs for fun too! What a loser!" - As long as Specific Person can effectively ignore this - meaning it's not forced into their feed, overwhelming their inbox, etc - then that's probably fine, even if untrue.
"Specific Person might live near XYZ. Definitely DON'T put any turds in their mailbox ;-) " - Pretty clearly an invitation to real-world harassment, which is not okay and should not be abetted by any platform.
Gnosis is lonely
>If you want to be abrasive on tangents, you can definitely be that guy, but that was not the subject - the subject was the nature of censorship. You completely disregard my example of illegitimate private censorship by the banks, intended to point out the problem, only to forcefully restate an extremist ideological position that doesn't really function in any true society. OK...
Abrasive or not, it's not a tangent. It's the central point.
I didn't disregard anything -- rather, I didn't address the tangent you were off on.
Yes, censorship is bad. There. I addressed your tangent.
However, mine is not an extremist position at all. Freedom of expression and property rights are core elements of Western civilization.
Clearly, we're talking past each other. Which is too bad.
But I'll restate my main thesis once more: You can say whatever you want. But you are not entitled to an audience. And here's the proof.
Look at you, trying to act like the internet is not centralized these days.
Get back to me in a few weeks when no one is using whatever the heck Mastodon is.
I'm not saying that every time I hear about someone being banned it's for these reasons, but it's often enough that I become suspicious it may be someone trying to argue for "The Bell Curve", or that slaves were happy, or some other ridiculous racist idea.
Tons of people with >1 million views are trolls and extremely bad-faith actors. That's often how they became so popular. Look at Tucker Carlson. If you don't ban them, you may be encouraging others to act in bad faith to gain popularity.
If you don't ban Alex Jones you are letting him spread his conspiratorial racist views and harassment campaigns on your platform.
The first one you can pass off to a third party - the second one is trouble that you can't avoid.
This is 99% of Reddit though.
More importantly, I guess your main point is that a multi-billion-dollar company mostly cares about money. To which I would say: sure, what else is new? But maybe this is news to some people.