Participated in a well-informed debate on monetary policy, but some idiot downthread went on an anti-semitic rant?
Your account will be banned. Your IP address will be blocked from creating additional accounts. You will receive a message linking to the comment you were banned for, but since that comment was deleted, the link is worthless. You will also receive a link to a ban-appeal form, which goes straight to /dev/null.
Worse, they use browser fingerprinting AND IP.
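For anyone wondering what that looks like mechanically, here's a minimal sketch of fingerprint-plus-IP linking on the backend. This is not Reddit's actual code; the signal names and the banned-identifier stores are assumptions, just to show why a fresh account from the same browser and address gets caught.

```typescript
// Rough sketch of ban-evasion detection via browser fingerprint + IP.
// Not Reddit's implementation; signal names and stores are invented.
import { createHash } from "crypto";

interface DeviceSignals {
  userAgent: string;
  screen: string;      // e.g. "1920x1080x24"
  timezone: string;
  canvasHash: string;  // hash of a rendered canvas, collected client-side
}

// Collapse the collected signals into one stable identifier.
function fingerprint(s: DeviceSignals): string {
  const raw = [s.userAgent, s.screen, s.timezone, s.canvasHash].join("|");
  return createHash("sha256").update(raw).digest("hex");
}

// Hypothetical stores of identifiers tied to previously banned accounts.
const bannedFingerprints = new Set<string>();
const bannedIps = new Set<string>();

function looksLikeBanEvasion(signals: DeviceSignals, ip: string): boolean {
  // Either match alone is enough to flag a freshly created account.
  return bannedFingerprints.has(fingerprint(signals)) || bannedIps.has(ip);
}
```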
On the one hand, this is fine: Reddit is supposed to be a collection of independently moderated sub-communities with their own rules and administration. On the other hand, you have a unified identity and content history across those communities, so it's a lot easier for one community to take action based on your history in another, which is a strange dynamic.
I actually think Facebook Groups are onto something with the way post history and profiles work: each Facebook Group a user posts in gets its own sub-profile for that user. Members of that Group can see the user's post history within that Group and, depending on privacy settings, the user's "main" profile, but they can't walk "across" to see the user's post history in other Groups unless they search from within those Groups.
I feel like per-subreddit post histories along with a global user profile would help move Reddit more towards the "sub-community" vision if that's the direction they want to go.
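To make the idea concrete, here's a rough sketch of that per-community profile split. The type and field names are made up for illustration, not taken from either platform's API.

```typescript
// Sketch of a global profile plus per-community sub-profiles.
// All names here are hypothetical.
interface Post {
  id: string;
  body: string;
  createdAt: Date;
}

// The global identity: visible everywhere, but with no flat,
// site-wide post history attached.
interface GlobalProfile {
  userId: string;
  displayName: string;
}

// One sub-profile per community (Group / subreddit) the user posts in.
interface CommunityProfile {
  userId: string;
  communityId: string;
  posts: Post[]; // history visible only from inside this community
}

// Viewing history requires naming the community you're viewing from,
// so there's no single call that walks "across" communities.
function visibleHistory(
  profiles: Map<string, CommunityProfile>, // keyed by communityId
  viewerCommunityId: string
): Post[] {
  return profiles.get(viewerCommunityId)?.posts ?? [];
}
```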
The issues Reddit has are:
* Cross-stalking, as discussed above.
* Content discovery. This is the same problem every user-generated content platform has. What sub-communities get surfaced on the logged-out front page, or cross-pollinated to existing users? Every type of content will be objectionable to someone, so deciding what to show is always going to be a lightning-rod issue with advertiser dollars at stake.
* Global moderation. What's "bad" enough to get a user banned from _all_ of Reddit? What happens when a user is completely banned (do all of their old posts disappear)? Should large-scale content moderation, like spam, be handled at the platform level or the community level?
It's your email, social accounts, IP/location, browser fingerprint, search terms, information from their partners (ad networks, apps), cookies, the subs you visit, what you upvote/downvote/save/report, which page on Reddit you're coming from and going to, etc. They use all of this to decide on blocks and shadowbans, to counteract your votes, and so on.
Has this resulted in a substantial quality increase on Reddit? Oh, absolutely not: you'll still get ChatGPT bots, harassment, completely unrelated comments, report abuse, etc., but they'll never give up that much data.
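For illustration only, here's a hypothetical sketch of how a pile of signals like that could be folded into a block/shadowban/vote-counteraction decision. The signal list, weights, and thresholds are all invented; the point is just that many weak signals get aggregated into one enforcement action.

```typescript
// Hypothetical signal-aggregation sketch, not Reddit's actual logic.
interface AccountSignals {
  emailDomainSuspicious: boolean;
  ipOnBlocklist: boolean;
  fingerprintMatchesBannedAccount: boolean;
  reportAbuseScore: number; // 0..1, from prior report behavior
  voteAnomalyScore: number; // 0..1, e.g. voting in lockstep with other accounts
}

type Action = "none" | "counteract_votes" | "shadowban" | "block";

function decideAction(s: AccountSignals): Action {
  // Invented weights: hard identity matches count more than soft behavior scores.
  let score = 0;
  if (s.emailDomainSuspicious) score += 1;
  if (s.ipOnBlocklist) score += 2;
  if (s.fingerprintMatchesBannedAccount) score += 3;
  score += 2 * s.reportAbuseScore + 2 * s.voteAnomalyScore;

  if (score >= 5) return "block";
  if (score >= 3) return "shadowban";
  if (score >= 1.5) return "counteract_votes";
  return "none";
}
```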