Power to the people.
So you want a moderator to moderate, but then you also want tools to see what has been moderated away and to unlock it? Right? So moderate, yes, but also let users unmoderate.
Power to the people!
Don't trivialize it as some personal preference around moods. It's much more than that.
Stuff like death threats, doxxing, child porn, harassment are not just "moods you don't like".
>> Give me the tools that the moderators have
Whatever tools a site like twitter or youtube gives you, (A) most people will never use them and (B) they still control how the tools work. These two are enough to achieve any censorship goal you might have, and enough to make censorship inevitable.
I don't think we get power to the people while Alphabet/Elon/Whatnot own the platform. It's a shame that FOSS failed on these fronts. But the internet has produced powerful proofs of concept: the WWW itself, for the first 20 years; Wikipedia; Linux/GNU. Those really did give power to the people, and you can see how much better they were at dealing with censorship, disinformation and other 2020s infopolitics.
That is the wrong view on a global communication platform. It's like saying "a certain tone sets the mood for the entire telephone system".
These things should be seen more as silos, subcultures or whatever.
Unless you expose yourself to the firehose of globally popular content.
Anyway, analogies are imperfect, please look in the direction where I am gesturing, not at my exact words.
The point here (and of the entire conversation) is that you shouldn't judge a medium by its worst imaginable actors as long as you're given the right tools to use that medium undisturbed, effectively putting those actors into a different silo. Today twitter allows a very crude, imperfect approximation of this: follow people who post decent content and set the homepage to "latest posts" instead of "top tweets". Ideally we'd have better tools than that.
Contextual filters/scanners would score a piece of content, giving it a score in each of the categories being filtered (NSFW, non-inclusive language, slurs, disinfo, etc.)
Then both the creator and the consumer should be able to see the scores in a transparent manner, with the consumer able to set a threshold that filters out any post scoring higher than what they choose.
A free speech absolutist could set it to 0, the default could be 50, and go from there.
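A minimal sketch of that mechanism in Python (the category names, the 0-100 scale, and the upstream classifier are all assumptions on my part; I'm reading the absolutist's "0" as zero filter strength, i.e. maximum tolerance):

    # Sketch: an upstream classifier is assumed to have scored each post
    # per category (0 = clean, 100 = maximally flagged). Names illustrative.
    DEFAULT_THRESHOLDS = {"nsfw": 50, "slurs": 50, "disinfo": 50}

    def visible(post_scores, user_thresholds):
        """Hide a post if any category score exceeds the user's threshold."""
        return all(
            score <= user_thresholds.get(category, 100)
            for category, score in post_scores.items()
        )

    absolutist = {"nsfw": 100, "slurs": 100, "disinfo": 100}  # filter nothing
    post = {"nsfw": 72, "slurs": 10, "disinfo": 0}
    assert not visible(post, DEFAULT_THRESHOLDS)  # 72 > 50, hidden by default
    assert visible(post, absolutist)              # shown to the absolutist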
Mods exist and can ban/lock/block people and content, but users can see everything that was banned, removed, or locked, as well as the reason why: which policy did the user violate?
I think the only exception would be actually illegal content; that should be removed entirely, but maybe keep a note from the mods in its place stating "illegal content".
That way users can actually scrutinise what the mods do, and nobody has to wonder whether the mods removed a post because they are biased or for legitimate reasons. Opinions are not entirely removed either: they are still readable, you just can't respond to them.
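In data terms it could be as simple as keeping the removal reason and the body around (a sketch only; the field names and policy strings are made up):

    from dataclasses import dataclass
    from typing import Optional

    # Sketch of a transparent moderation record: removals hide content by
    # default but keep it inspectable, and always record the policy violated.
    @dataclass
    class ModerationAction:
        post_id: int
        policy_violated: str    # shown to everyone, e.g. "harassment"
        illegal: bool = False   # the one case where the body is truly deleted

    def render(post_text: str, action: Optional[ModerationAction],
               show_removed: bool = False) -> str:
        if action is None:
            return post_text
        if action.illegal:
            return "[removed: illegal content]"  # body gone, note remains
        if show_removed:
            # readable but locked: users can scrutinise, not reply
            return f"[removed for: {action.policy_violated}]\n{post_text}"
        return f"[removed for: {action.policy_violated}]"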
My apologies.
> So you want a moderator to moderate.
I don't care whether they continue to moderate centrally but it would suit those who do.
> but then you also want to have tools
Yes.
> to see what has been moderated away and unlock those?
Yes.
If an app you download has settings but they are either:
a) only available to the developers or company
b) the defaults always override your settings
would you be happy? Why, you might ask, do you not get access to the settings, and why can't you set them as you wish?
Scores across a range of measures would be best, in my view.
On second thought... I suppose that's TikTok.
In addition, crap floods? If I submit half a billion posts, do you really want that handled by moderation?
Being a server operator, I've seen how much the internet actually sucks; this may be something the user base doesn't have to experience directly. When 99.9% of incoming attempts are dropped or ban-listed, you start to learn how big the problem can be.
Wikipedia has a model for user-generated content. It's much more resilient, open, unbiased and successful than social media. This isn't because they have some super-nuanced distinction between moderation and censorship. They never really needed to split that hair.
They have a model for collaboratively editing an encyclopedia, including lots of details and special cases that deal with disagreement, discontent and ideological battlegrounds.
They also have a different organisational and power structure. Wikipedia doesn't exist to sell ads, or some other purpose above the creation of an encyclopedia. Users/editors have a lot of power. Things happen in the open.
Between those two, they've done much better than Alphabet/FB/Twitter/etc. Wikipedia is the ultimate prize for censorship, narrative wars, disinformation, campaigning, activism and such. Despite this, and despite far fewer resources, it outperforms the commercial platforms. I don't think it's a coincidence.
If moderation must be done then let me do it for myself. Give me the tools.
A central moderating authority cannot be trusted at all.
It all comes down to some guy telling me how to talk. I don't like it. Anybody who likes it has rocks in his head.
What you want is someone else's audience, and I'm not exactly sure what makes you think you have a right to that.
https://hn.algolia.com/settings
dang often references past discussions with search links, so here's a good starting point: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://hn.algolia.com/?dateRange=all&page=7&prefix=true&que...
As to "many cases where online communities document or facilitate crimes elsewhere", why criminalise the speech if the action is already criminalised?
That leaves only "Campaigns to harass individuals and groups". Why wouldn't moderation tools as powerful as the ones employed by Twitter's own moderators deal with that?
[1] https://mtsu.edu/first-amendment/article/970/incitement-to-i...
Since I was not born with a language, yes I've been told how to talk for a sizeable portion of my life.
In fact learning things like tact and politeness, especially as it relates to the society I live in, has been monumental in my success.
Do you go to your parents' house and tell them to screw off? Do you go to work and open your mouth like a raging dumpster fire? Do you have no filter talking to your husband/wife/significant other? Simply put, your addition to the discussion is that of an impudent child. "I want everything and I want to give up nothing" is how an individual becomes an outcast, and I severely doubt this is how you actually live outside the magical place known as the internet, though I may be surprised.
Let's assume that you are not a child, that you are confident in your ability to manage your snark and, most of all, highly value your conversation.
I'm going to conclude that yes, you definitely dislike being told how to talk.
As to moderation, why not be able to filter by several factors, like "confidence level this account is a spammer"? Or perhaps "limit tweets to X number per account", or "filter by chattiness". I have some accounts I follow (not on Twitter, I haven't used it logged in in years) that post a lot; I wish I could turn down the volume, so to speak.
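As a sketch of those knobs (the spam_confidence field assumes some scoring service exposes one; all names here are hypothetical):

    # Client-side feed filtering; posts are assumed to be newest-first dicts.
    def filter_feed(posts, max_spam_confidence=0.8, max_posts_per_account=5):
        per_account = {}
        kept = []
        for post in posts:
            if post["spam_confidence"] > max_spam_confidence:
                continue  # "confidence level this account is a spammer"
            n = per_account.get(post["account"], 0)
            if n >= max_posts_per_account:
                continue  # cap chatty accounts: "turn down the volume"
            per_account[post["account"]] = n + 1
            kept.append(post)
        return kept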
That is, it's not clear that in the US you can ban something on the basis of it being immoral; you need the justification that it is "documentation of a crime".
Spam may still leak into our inboxes today, but the level of user control over email spam is generally a stable equilibrium. The level of outrage around spam filters (and to be clear, there are arguments to be made that spam filters are increasingly biased) is much, MUCH lower than the outrage around platform "censorship".
What is spam... exactly? Especially when it comes to a 'generalized' forum. I mean, would talking about Kanye be spam or not? It's this way with all celebrities: talking about them increases engagement and drives new business.
Are influencers advertising?
Confidence systems commonly fail when a large, generalized population contains focused subpopulations. Those subpopulations tend to be adversely affected by moderation because the way they communicate differs from the generalized form.
And protection of victim rights, I suppose.
Anime image boards are not in a hurry to expunge "lolicon" images because they don't face any consequence from having them.
I wouldn't blame Tumblr for banning ero images a few years back, because ero images of real people are a lot of trouble: you have child porn, revenge porn, etc. Pornography produced by professionals has documentation about provenance (every performer showed somebody their driver's license and birth certificate, and probably got issued a 1099); if this were applied to people posting images from the wild, they would say people's privacy is being violated.
No.
> Are influencers advertising?
That spam is advertising does not make all advertising spam.
We already have confidence-based filters for spam via email, with user feedback involved too, so I don't need to define it; users of the service can define it for me.
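The feedback loop is the whole trick. A toy version of its shape (real filters are far more sophisticated than this word-ratio sketch):

    from collections import Counter

    spam_words, ham_words = Counter(), Counter()

    def mark(text, is_spam):
        """User feedback: train on a message the user has labeled."""
        (spam_words if is_spam else ham_words).update(text.lower().split())

    def spam_confidence(text):
        """Crude per-word ratio with +1 smoothing; returns 0..1."""
        words = text.lower().split()
        spam = sum(spam_words[w] for w in words) + 1
        ham = sum(ham_words[w] for w in words) + 1
        return spam / (spam + ham)

    mark("buy cheap pills now", is_spam=True)
    mark("lunch at noon?", is_spam=False)
    print(spam_confidence("cheap pills"))  # 0.75, leaning spam after one report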
I also get a bit tired of looking someone up and seeing "so and so says this person is <insert bad thing>", claims that usually stack up about as well as that SPLC claim against Maajid Nawaz[1] did.
Given this, I find it hard to see how they're doing better than the other companies you mention.
[1] https://en.wikipedia.org/wiki/Majid_Nawaz#Claim_by_Southern_...
The vast majority of comment removals are done to shape the conversation.
I think most people would be OK with letting admins remove illegal content while allowing moderators to shape content, as long as users could opt in to seeing content the mods censored.
This is a win-win. If people don't want to see content they feel is offensive, they don't have to.
Let the user decide.
These are global platforms with global membership, simply stating that “if it is free speech in America it should be allowed” isn’t a workable concept.
The company that provides the service defines the moderation because the company pays for the servers. If you start posting 'bullshit' that doesn't eventually pay for the servers and/or drives users away, money will be the moderator. There are no magical free servers out there in the world with unlimited space and processing power.
When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads?
I'll give up some of my freedom to limit everyone else's freedom every day of the week; the only concern I have is roughly the IQ of the people doing the limiting.
Of course. That is what they've demanded, so that is what they get.
> "When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. "
On the contrary: You must have this. As a matter of law. There is no alternative, other than withdrawing from those countries entirely and ignoring the issue of people accessing your site anyway (which is what happens in certain extreme situations, states under sanctions, etc)
> " It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads? "
Here are the options:
1) Do not do business in those countries.
2) Provide different services for those countries to reflect their legal requirements.
There is no way to provide a globally consistent experience because laws are often in mutual conflict (one state will, for example, prohibit discussion of homosexuality while another will prohibit discriminating on the basis of sexual preference).
I may be unique in this regard, but I am aware of the fact that sometimes I make mistakes, and I don’t highly value all of my conversation, sometimes I just rant, or enjoy engaging in more superfluous conversation. Sometimes the best conversations aren’t “highly valuable”!
Nobody is "telling you how to talk." People are free to choose their terms for voluntary social interactions. You don't have a right to inflict yourself on others who wish not to interact with you.
This does not stop the FBI from being a major child porn distributor, even though by this rubric that means the FBI is re-abusing thousands of victims.
What you can enforce is "so and so says it is illegal" (accurate 90% or 99% or 99.9% of the time but not 100%) or some boundary that is so far away from illegal that you never have to use the ultimate truth procedure. The same approach works against civil lawsuits, boycotts and other pressure which can be brought to bear.
I think of a certain anime image board, one with content so offensive it can't even host ads for porn, which stopped taking images of cosplayers or any real-life people because that eliminated moderation problems that would otherwise be difficult.
There is also spam (should spam filters for email be banned because they violate the free speech of spammers?) and other forms of disingenuous communication. When you confront a troll, inevitably they will make false comparisons (e.g. that banning Kiwi Farms is like banning talk to the effect that trans women could damage the legitimacy of women's sports, just when people are starting to watch women's sports).
On top of that there are other parties involved. That anime site I mention above has no ads and runs at very low cost but has sustainability problems because it used to sell memberships but got cut off by payment providers. You might be happy to read something many find offensive but an advertiser might not want to be seen next to it. The platform might want to do something charitable but hosting offensive talk isn't it.
That's what makes it illegal? What if it's done on a private forum that the victim never finds out about? What if the victim is, say, dead? I don't think those change the legality.
This part is not correct. Private companies block what they believe to be illegal activities in their systems constantly - in order to limit the legal liability of being an accomplice to a crime. This is the case in all industries - and is standard practice from banking, to travel, to hotels, to retail... it's commonplace for companies to block services.
For spam, I would recommend a separate filter flag, so users can toggle seeing spam content independently of moderated content.
It's unhealthy to just throw every difficult problem at courts; the legal system is clumsy, unresponsive, and often tends to go to unwanted extremes due to a combination of technical ignorance, social frustration, and useless theatrics.
Instead of just leaving it to users or admins to block users or servers they didn’t want to deal with, a large subset decided to block for anyone who didn’t block certain other servers.
It is, in general, really really difficult to pass speech laws in the USA because of that pesky First Amendment -- even if they're documentation of a crime. Famously, Joshua Moon of Kiwi Farms gleefully hosted the footage from the Christchurch shooting even when the actual Kiwis demanded its removal.
But if you can argue that procurement or distribution of the original material perpetuates the original crime, that is, if it constitutes criminal activity beyond speech -- then you can justify criminalizing such procurement or distribution. It's flimsy (and that makes it prone to potentially being overturned by some madlad Supreme Court in the future with zero fucks to give about the social blowbacks), but it does the job.
In other countries it's easy to pass laws banning speech based on its potential for ill social effects. Nazi propaganda and lolicon manga are criminalized in other countries, but still legal in the USA because they're victimless.
If this makes you wonder whether it's time to re-evaluate the First Amendment -- yes. Yes, it is.
If I had a tool that could (at least attempt to) filter out anti-semitism or Holocaust denial, then Germany could have that set to "on" to comply with the law. I'm all for democracies deciding what laws they want.
[1] https://www.reuters.com/article/us-germany-hatecrime-idUSKBN...
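Mechanically, the jurisdiction would just pin certain filter categories on (a sketch; the category names and the legal mapping are my own illustration, not legal advice):

    # Legally mandated filter categories per jurisdiction (illustrative).
    JURISDICTION_MANDATORY = {
        "DE": {"holocaust_denial"},
        "US": set(),  # nothing forced on by law
    }

    def effective_filters(user_filters, country_code):
        """User choices apply, but legally mandated categories stay on."""
        return set(user_filters) | JURISDICTION_MANDATORY.get(country_code, set())

    print(effective_filters({"nsfw"}, "DE"))  # nsfw plus holocaust_denial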
> Laws don't enforce themselves.
What has that got to do with Twitter? Please try to stay on track.
What "godlike power" are you referring to? The ability to moderate what turns up in your own social media feed? The ability to respond to comments someone else has deemed to break rules. I would hope for a bit more than that for godlike power.
Who would be put upon by this? The average user doesn't have to be; they could use the default settings, which are very anodyne. The rest of us get what we want; that's what the article stated. Who's finding this a burden?
As to the reality of things, Twitter's just been bought for billions and there's plenty of bullshit being posted there. That's the reality, and several people who've made a lot of money by working out how to balance value and costs think it can do better.
That's something that I think is seriously wrong with the USA right now: the idea of an "arrest record", or at least the idea of it being accessible by anyone other than the police.
There are a number of situations where it is perfectly reasonable to arrest innocent people, then drop all charges. Let's say the cops arrive at a crime scene: there's a man on the ground lying in a pool of blood, and another man standing with a smoking gun holstered at his hip. Surely it would be reasonable to arrest the man that's still standing and confiscate his gun, at least for the time necessary to establish the facts?
But then, once all charges have been cleared (say the dead guy had a knife and witnesses identify him as the aggressor), that arrest should be seen as nothing more than either a mistake or a necessary precaution. It's none of a potential employer's business. In fact, I'd go as far as to make it illegal to even ask for arrest records, or to discriminate on that basis.
Criminal records of course are another matter.
As far as I understand in the EU private information is part of the self. Thus, manipulating, exchanging, dealing with private information without the person's consent is by itself a kind of aggression or violation of their rights. Even if the person never finds out.
In the USA however private information is an asset. The aggression or violation of rights only happens when it actually damages the victim's finances. So if the victim never finds out about discussions happening somewhere else in the world, well… no harm done, I guess?
Both views are a little extreme in my opinion, but the correct view (that rights are only violated once the victim's own life has been affected in some way) is next to impossible to establish: in many cases the chain of events that can eventually affect a person's life is impossible to trace. Because of that I tend to suggest caution, and lean towards the EU side of the issue.
Especially if it's the documentation of a crime as heinous as child abuse.
I submit that spam filters should be under the sole control of their end users. If I'm using a Yahoo or Gmail account (I'm not), I should have the option to disable the spam filter entirely, or to use only personal parameters trained on the mail I alone received, and no email should ever be summarily blackholed without letting me know in some way. If an email bounces, the sender should know. If it's merely filtered, it should be in the recipient's spam folder.
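Expressed as a settings object, the policy I'm describing looks like this (a sketch; none of it matches any real provider's API):

    from dataclasses import dataclass

    @dataclass
    class SpamSettings:
        enabled: bool = True              # the user may disable filtering entirely
        personal_model_only: bool = True  # train only on mail I myself received
        bounce_on_reject: bool = True     # hard rejects notify the sender

    def deliver(score, settings, threshold=0.9):
        """Route a message; a silent blackhole is deliberately impossible."""
        if not settings.enabled or score < threshold:
            return "inbox"
        if settings.bounce_on_reject:
            return "bounce"       # the sender finds out
        return "spam_folder"      # still visible to the recipient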