Here's one tip for you guys, from years-long, world-weary experience: if you're coming up with sensational explanations in breathless excitement, they're almost certainly untrue.
Edit: ok, here's what happened. Users flagged https://news.ycombinator.com/item?id=27394925. When you see [flagged] on a submission, you should assume users flagged it because with rare exceptions, that's always why.
A moderator saw that, but didn't look very closely and thought "yeah that's probably garden-variety controversy/drama" and left the flags on. No moderator saw any of the other posts until I woke up, turned on HN, and—surprise!—saw the latest $outrage.
Software marked https://news.ycombinator.com/item?id=27395028 a dupe for the rather esoteric reasons explained here: https://news.ycombinator.com/item?id=27397622. After that, the current post got upvoted to the front page, where it remains.
In other words, nothing was co-ordinated and the dots weren't connected. This was just the usual stochastic churn that generates HN. Most days it generates the HN you're used to and some days (quite a few days actually) it generates the next outlier, but that's how stochastics work, yes? If you're a boat on a choppy sea, sometimes some waves slosh into the boat. If you're a wiggly graph, sometimes the graph goes above a line.
If I put myself in suspicious shoes, I can come up with objections to the above, but I can also answer them pretty simply: this entire thing was a combo of two data points, one borderline human error [1] and one software false positive. We don't know how to make software that doesn't do false positives and we don't know how to make humans that don't do errors. And we don't know how to make those things not happen at the same time sometimes. This is what imperfect systems do, so it's not clear to me what needs changing. If you think something needs changing, I'm happy to hear it, but please make it obvious how you're not asking for a perfect system, because I'm afraid that's not an option.
[1] I will stick up for my teammate and say that this point is arguable; I might well have made the same call and it's far from obvious that it was the wrong call at the time. But we don't need that for this particular answer, so I'll let that bit go.
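A back-of-envelope way to see the "imperfect systems" point: if a borderline human call and a software false positive are each individually rare but independent, they will still occasionally land on the same story. The rates below are made-up assumptions for illustration, not HN data:

    import random

    random.seed(0)
    DAYS = 3650                # ten years of operation
    P_HUMAN_ERROR = 0.02       # assumed daily chance of a borderline mod call
    P_SOFTWARE_FP = 0.02       # assumed daily chance of a software false positive

    both = sum(
        random.random() < P_HUMAN_ERROR and random.random() < P_SOFTWARE_FP
        for _ in range(DAYS)
    )
    print(f"days with both failures at once: {both} of {DAYS}")

At these (invented) rates the coincidence fires roughly once or twice a decade, which is about what "some days it generates the next outlier" predicts.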
https://news.ycombinator.com/item?id=27394925 [Flagged]
https://news.ycombinator.com/item?id=27395028 [Marked as dupe]
https://news.ycombinator.com/item?id=27394943 [Currently on page 2]
The reason for distrust is valid. We live in an age of rapidly increasing censorship and the CCP's growing reach of control in American discourse. Skepticism is becoming the default for very real reasons.
[flagged] on submissions nearly always means users flagged it. This is in the FAQ: https://news.ycombinator.com/newsfaq.html#flag
But at the same time, it also seems like flagging can be too easily abused, and can lead to accusations of censorship and distrust. (Though I've certainly seen it work well in some cases, especially for false/defamatory articles.)
But it really does seem like we're at the point where longstanding users need to also be able to vouch for flagged stories, or something like that. And even if that doesn't automatically restore the story, it could at least show a label like "pending moderator decision" or something.
At a time when trust in the media and authority is low... a little bit of greater transparency might go a long way. :)
Edit: oh - I think that one was actually marked a [dupe] by software. I'd need to double check this, but if so, it's because it interpreted the link to the other thread as a signal of dupiness.
Edit 2: yes, that's what happened. When a submission is heavily flagged and there is a single comment pointing to a different HN thread, the software interprets that as a strong signal of dupiness and puts dupe on the submission. It actually works super well most of the time. In this case it backfired because the comment was arguing the opposite.
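As a rough sketch of the heuristic described above (the data model and threshold below are guesses for illustration; this is not HN's actual code):

    from dataclasses import dataclass, field

    FLAG_THRESHOLD = 10  # hypothetical; the real threshold isn't public

    @dataclass
    class Submission:
        flag_count: int
        comments: list[str] = field(default_factory=list)

    def looks_like_dupe(sub: Submission) -> bool:
        # Heavily flagged plus a single comment linking to another HN
        # thread reads as "commenters redirected everyone to the original".
        if sub.flag_count < FLAG_THRESHOLD:
            return False
        return (len(sub.comments) == 1
                and "news.ycombinator.com/item?id=" in sub.comments[0])

The failure mode in this case: the lone comment did link to the other thread, but to argue the submission was not a dupe, which is exactly the case such a heuristic can't see.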
You don't have to believe me, of course, but if you decide not to, consider these two simple observations.
First, lying would be stupid, because the good faith of the community is literally the only thing that makes this site valuable. So, sheer self-interest plus not-being-an-idiot should be enough to tip your priors. I may be an idiot about most things, but I hope I'm not incompetent at the most important part of my job. The value of a place like HN can easily disappear in one false step. Therefore the only policy which has ever made any sense is (1) tell the truth; (2) try never to do anything that isn't defensible to the community; and (3) acknowledge when we fuck up and fix it.
Second, if you're going to draw dramatic conclusions about sinister operations, it's good for mental health to have at least one really solid piece of information you can check them against. Otherwise you end up in the wilderness of mirrors. What you see on internet forums—or rather, what you think you see on internet forums, which then somehow becomes what you see because that's how the brain does it—is simply not solid information. Remember what von Neumann said about fitting an elephant? (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...) He asked for a mere five degrees of freedom. Nebulous internet spaces give you hundreds at least. That's way beyond enough to justify anything—dip in a ladle anywhere and the one ladleful you pull out will seem like proof.
(Edit: people have been asking what Angela Lansbury has to do with this. If you don't mind spoilers, Angela will explain it for you here: https://www.youtube.com/watch?v=p3ZnaRMhD_A.)
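To make the degrees-of-freedom point concrete, here's a toy illustration (mine, not from the comment above): give a model as many free parameters as you have observations and it will "fit" pure noise perfectly, so a good fit proves nothing.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 6)    # six observations...
    y = rng.normal(size=6)      # ...of pure noise

    # A degree-5 polynomial has six coefficients, i.e. six degrees of
    # freedom, so it can pass through every point exactly.
    coeffs = np.polyfit(x, y, deg=5)
    print(np.allclose(np.polyval(coeffs, x), y))  # True: "perfect" fit to noise

A nebulous forum hands you hundreds of such knobs, so any narrative can be made to fit.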
Isn't this already a thing with the "vouch" button on said posts? https://i.imgur.com/Hp9nu58.png
My question is: does HN actively attempt to counteract government actors from influencing the site? I think it’s been proven that China among other countries employs folks to try to influence social media sites. Not necessarily by influencing staff, but by creating user accounts who do things like downvote unfavorable comments or flag stories they don’t like.
This seems like it would be a prime target for that behavior.
Thanks for your consistently even-handed and dedicated moderation efforts sir.
I guess there is also a "flagging brigade" detector. [If not, I upgrade this comment to a feature request.]
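For what it's worth, a minimal sketch of what such a detector might look for (entirely hypothetical; nothing here reflects HN's actual anti-abuse code): pairs of accounts whose flags keep landing on the same stories, out of proportion to chance.

    from collections import defaultdict
    from itertools import combinations

    def suspicious_pairs(flag_events, min_shared=5):
        # flag_events: iterable of (user_id, story_id) tuples
        stories_by_user = defaultdict(set)
        for user, story in flag_events:
            stories_by_user[user].add(story)
        pairs = {}
        # Quadratic in users; a real system would bucket by story first.
        for a, b in combinations(stories_by_user, 2):
            shared = len(stories_by_user[a] & stories_by_user[b])
            if shared >= min_shared:
                pairs[(a, b)] = shared
        return pairs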
Why would you mention that? It’s very suspicious!
Quote from Satya Nadella Q1 2019 Earnings Conference Call "...In fact, this morning, I was reading a news article in Hacker News, which is a community where we have been working hard to make sure that Azure is growing in popularity and I was pleasantly surprised to see that we have made a lot of progress in some sense that at least basically said that we are neck to neck with Amazon when it comes to even lead developers as represented in that community..."
Mentioned here before: https://news.ycombinator.com/item?id=27293480
[dead]: https://news.ycombinator.com/item?id=27397440 (has vouch)
[flagged]: https://news.ycombinator.com/item?id=27396685 (no vouch)
EDIT: The "flagging trustworthiness" could even help mods to find posts which might need to be unflagged quicker based on the average trustworthiness of the flags.
However, if any executive is getting graded against this metric, Goodhart’s law applies, and there’s a good chance astroturfing would happen. Satya probably wouldn’t know about it.
If a Hollywood CEO says that they are trying to raise the audience Cinemascore ratings of their movies, we’d interpret that to mean that they are trying to make audience-friendly movies, not that they are trying to astroturf Cinemascore. And similarly, if someone at the studio were astroturfing Cinemascore, the CEO wouldn’t talk about it on the earnings call.
You're right that most such software tricks, especially anti-abuse measures, need to be secret in order to stay working.
Of course, do not attribute to conspiracy that which can be attributed to a bug! ;-)
This is the critical point. Today, users can "vouch" for [dead] stories, but can't vouch for [flagged] stories until they get flagged so much that they convert to [dead].
The other "Tank Man" story was flagged, but never quite dead, so users couldn't vouch for it; from users' perspective, it appeared to simply disappear.
Allowing users to vouch for the other story would have helped considerably.
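In state-machine terms the proposed change is small. The statuses and the karma threshold below are assumptions pieced together from the descriptions in this thread, not HN source:

    from enum import Enum, auto

    class Status(Enum):
        VISIBLE = auto()
        FLAGGED = auto()   # rank-penalized but still technically alive
        DEAD = auto()      # removed from listings

    def can_vouch(status: Status, karma: int, threshold: int = 30) -> bool:
        if karma < threshold:  # hypothetical karma gate
            return False
        # Today (as described above) only DEAD items are vouchable; the
        # proposal adds FLAGGED, so a story can be rescued before it
        # silently disappears.
        return status in (Status.DEAD, Status.FLAGGED)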
Bedknobs and Broomsticks got you too... ;-)
This trend of "stop Asian hate" is also not organic. It's designed to use the "your racist" Trump card to shut down any talk of the lab leak or China's response
The better HN gets, the more people want to suck its juices for their own purposes. Most haven't figured out that the above-board way to do that is simply to make interesting contributions, so they do other things, and there's probably a power law of how sinister those things are. The majority are relatively innocuous, but lame. (Think startups getting their friends to upvote their blog post, or posting booster comments in their thread.)
Users are good at spotting these innocuous/lame forms of abuse, but when it comes to $BigCo manipulation (or alleged manipulation), user perceptions get wildly inaccurate—accurate far less than 0.1% of the time—and when it comes to $NationState manipulation (or alleged manipulation), user perceptions get so inaccurate that...trying to measure how inaccurate they are is not possible with classical physics. Almost everything that people think they're seeing about this is merely imagination and projection, determined by the strong feelings that dominate politics.
How do I know that? Because when we dig into the data of the actual cases, what we find is that it's basically all garden-variety internet user behavior.
It's like this: imagine you were digging in your garden for underground surveillance devices. Why? Well, a lot of people are worried about them. So you dig and what do you find? Dirt, roots, and worms. The next time you dig, you find more dirt and more roots and more worms. And so for the next thousand places you dig. Now suppose someone comes along and insists that you dig in this-other-place-over-here because they've convinced themselves—I mean absolutely convinced themselves, to the point that they send distraught emails saying "my continued use of HN depends on how you answer this email"—that here is where the underground device surely must be. You've learned how important it is to be willing to dig; even just somebody-being-worried is a valid reason to dig. So you pick up your shovel and dig in that spot, and you find dirt, roots, and worms.
Still with me? Ok. Now: what are the odds that this thing that looks like a root or a worm is actually a surveillance device? Here my analogy breaks down a bit because we can't actually cut them open to see what's inside—we don't have that data. We do, however, have lots of history about what the "worms" have been doing over the years. And when you look at that, what do you find that they've been up to? They've been commenting about (say) the latest Julia release or parser combinators in Elixir, and they've been on HN for years and some old comment talks about, say, some diner in Wisconsin that used to make the best burgers. And in 2020 they maybe got mad on one side or the other of a flamewar about BLM. (Nobody please get mad that I'm using worms to represent HN users. It's just an analogy, and I like worms.)
Or, maybe the history shows that the person gets involved in arguments about China a lot. Aha! Now we have our Chinese spy! How much are they paying you? Is it still 50 cents? I guess the CCP says inflation doesn't exist in China—is that it, shill? If @dang doesn't ban you, that proves he's a CCP agent too!
But then you look and you see that they've been in other threads too, and a previous comment talks about being a grad student in ML, or about having married someone of Chinese background—obvious human stuff which fully explains why they're commenting the way they are and why they get triggered by what they get triggered by.
This ordinary, garden-variety stuff—dirt, roots, and worms in the analogy—is what essentially all of the data reduces to. And here's the thing: you, or anyone, can check most of this yourself, simply by following the public history of the HN accounts you encounter in the threads. The people jumping to sinister conclusions and angrily accusing others don't tend to do that, because that state of mind doesn't want to look for countervailing information. But if you actually look, what you're going to find in most cases is enough countervailing information to make the accusations appear absurd...and then you'd feel pretty sheepish about making them.
I'm not saying the public record is the entire record; of course it isn't. We can look at voting histories, flagging histories, site access patterns, and plenty of other things that aren't public. What I'm saying is that, with rare exceptions [1], what we find after investigation of the private data is...dirt, roots, and worms. It looks exactly like the public data.
And here's the most important point: the accusations about spying, brigading, shilling, astroturfing, troll farms, and so on, are all exactly the same between the cases where the public data refutes them and the cases where the public data is inconclusive. I realize this is a subtle point, but if you stop and think about it, it's arguably the strongest evidence of all. It proves that whatever mechanism is generating these accusations doesn't vary with the actual data. Moreover, you don't need access to any private data to see this.
There are also trolls and single-purpose accounts that only comment in order to push some agenda. That's against the HN guidelines, of course, and such accounts are easy enough to ban. But even in such cases, it doesn't follow that the account is disingenuous, some sort of foreign agent, etc. It's far more likely that they're simply passionate on that topic. That's how people are.
[1] so rare that it's misleading to even mention them, and which also don't look anything like what people imagine
---
Still, power laws have long tails and one wonders what may lie at the end, beyond our ability to detect it. What if despite all of the above, there is still sinister manipulation happening, only it's clever enough to leave no traces in the data that we know of? You can't prove that's not happening, right? And if anyone is doing that it would probably be state actors, right?
You might think there's nothing much to be said about such cases because what can you say about something you by definition don't know and can't observe? It seems to get epistemological pretty quickly. Actually, though, there's a lot we can say, because the premise in the question is so strong that it implies a lot. The premise is that there's a sort of Cartesian evil genius among us, sowing sinister seeds for evil ends. I call this the Sufficiently Smart Manipulator (SSM): https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so....
There are two interesting things about the SSM scenario. The first is that since, by definition, the SSM is immune to anti-abuse measures, you can't postulate any technical measures for dealing with it. It's beyond the end-of-the-road of technical cleverness.
The second interesting thing is that, if you go in for this way of thinking, then either there already exists an SSM or there eventually will be one. And there's not much difference between those two cases. Either way, we should be thinking about what to do.
What should we do in the presence of an SSM? I can think of two options: either (1) give up, roll over, and accept being manipulated; or (2) develop a robust culture of countering bad arguments with better ones and false claims with true information. Of those options, (2) is better.
If you have such a culture, then the SSM is mitigated because the immune system will dispose of the bad parts of what they're saying. If there are any true bits in what they're saying, well, we shouldn't be rejecting those, just because of who said them. We should be big enough to accommodate everything that's true, regardless of where it comes from—just as we should reject everything that's false, regardless of where it comes from. We might prefer to reject it a little more rudely if we knew that it was coming from an SSM, but that's not a must-have.
The nice thing is that such a culture is exactly what we want on HN anyway, whether an SSM exists or it doesn't. The way to deal with the SSM is to do exactly what we ought to be working at as a community already: rejecting what's false and discovering what's true. Anti-abuse measures won't work forever, but we don't need them to—we only need them to last long enough to develop the right habits as a community. If we can reach a sort of (dare I say it) herd immunity from the viruses of manipulation, we'll be fine. The answer to the Sufficiently Smart Manipulator is the Sufficiently Healthy Community. That's what the site guidelines and moderation here are trying to nurture.
Edit: I should add that I'm not 100% confident that this can work. But it's clear that it's the best we can do in that scenario, and the good part is that it's what we ought to be doing anyway.
One possible way to address this is to make visible a list of users who flagged a post. The arguments against this are obvious. But without such information, in the end you have to accept that one result of anonymous moderation is the generation of conspiracy theories.
It's a tradeoff, of course.
In most cases it is the politics aspect or the unfair-coverage aspect that leads users to flag a story, like, say, on lab leaks; but this story being flagged so easily was interesting. It is about a tech platform intentionally or mistakenly censoring things we would count as free speech.
What's particularly insidious is that killed stories don't show up in Algolia search results (somewhat understandable, but problematic in the case of political flagging), and even when favourited (something I also do with some regularity) they may not be visible to non-logged-in users and IIRC actually disappear from the index in time.
As for whether it's organic or not, it doesn't really matter. People need to have an immune system for nonsense, especially when it feels right. Most people can spot nonsense that goes against their own worldview. The trick is to be able to spot nonsense that is aligned with your worldview, or that you could directly benefit from if true.
Without user flagging, HN would be unusable.
> well, the reason that's not done yet is because moderation takes 90% of my time, answering emails takes the other 90% of my time, and counteracting abuse takes the other 90% of my time.
So much this. There just isn't enough time with a small staff.
> Most haven't figured out that the above-board way to do that is simply to make interesting contributions
So much this too. This is what we always told people on reddit -- brands would ask us "how do I get more popular on reddit" and we'd tell them, "make interesting content".
> Almost everything that people think they're seeing about this is merely imagination and projection, determined by the strong feelings that dominate politics.
Same with all social media. People assume governments have heavy-handed control of all content on social media, when in most cases the government couldn't care less. They focus on using propaganda to control individuals and then let those people make a mess of social media.
Your whole post resonates with my experience on the inside of moderating a big social media site and meeting with moderators of other big sites.
I'll be honest, at first I wasn't too keen on your moderation style, as I found it too heavy-handed. But I take that back. HN doesn't cover everything I want to talk about (I go to reddit for the rest), but what it does cover, it covers better than reddit does.
So thank you, and I hope you get some more help with one of those 90% jobs!
You explained to me (and HN) recently that posts which are critical of HN itself are moderated less, not more, than others.
The same standard should apply to posts in which a greatly disadvantaged group are standing up to a vastly superior power, all else being equal (credibility, tactics, nature of grievance, etc.)
As with content concerning the events of June 4, 1989, in Tiananmen Square, China.
Dupes should be merged. Valid freestanding posts should be unflagged.
I apologize if the questions that I and other users have asked on this site today have set you on edge - and I am sure that today, in public and in private, you have seen many ugly things that the majority of us do not, and you reasonably draw a trend-line. I believe that you should extend the same charity of trend-spotting in the other direction.
We live in tumultuous times, and the speed at which the ratchet is moving seems to be ever-increasing. There are significant concerns, as I know you know, about censorship abroad, and also at home in various western countries. I believe the overwhelming outpouring you have seen today has been in response to one undeniable fact -- that even a genuine accident on the part of some engineer somewhere could apply CCP (or any country's ruling party) censorship globally is a line in the sand that many did not realize had already been crossed.
Whether accidental or intentional, this is a watershed moment in the debate over censorship and freedom. It seems likely there are many more such errors in configuration actively deployed right now. That we have no way of knowing what such incidents are, or how many there are, is an existential threat to non-authoritarian systems of governance across the globe.
To see something that seemed unthinkable even a few months ago - that Tank Man could be censored in western countries on the anniversary in remembrance of the struggles he literally stood for - crossed a threshold for me in terms of what I believed could be possible more broadly. To see the extremely reasonable discussion around it disappear from hacker news, and stay dead for hours (I note that both the inappropriately-flagged article and the accidentally-marked-as-dupe article still maintained those statuses at the time of this writing [EDIT: The flagged article's status was changed a few minutes after. Thank you, dang. Doing so does not mean you are re-writing history, and we appreciate it]) made it feel like it had encroached even closer to home than I had suspected.
It made it feel like perhaps I'd been even more naive than I had ever imagined. I'm sure you must feel the same way, after some of the more hateful things I'm sure you heard today.
All of this is to say that I treasure the community that you have played the single largest role in shaping, and your explanations have completely satisfied me.
I apologize for the way your day turned out, and any negative ways in which I have contributed towards that.
> Second, if you're going to draw dramatic conclusions about sinister operations
This isn't about drawing dramatic conclusions. I have no delusion that Hacker News is colluding with the CCP. This is simply a question about a trend of disappearing posts.
My original statement about
> growing reach of control in American discourse
is purposefully broad because the mechanisms of control are broad themselves. There are plenty of valid concerns around different types of cyber warfare, and around the growing self-censorship and desire among individuals to avoid challenging topics related to China. Hacker News is a collection of individuals and doesn't need to be part of a grand conspiracy to be susceptible to pressures that have exerted control over other media organizations.
Explaining the process of Hacker News moderation and how you mitigate real threats to free speech would be a better approach than claiming your critics are sensationalizing.
To be clear, I fall on the side of HN generally handling things well; my post was aimed squarely at your dismissive response to valid criticism.
I do think it's entirely possible the Asian hate fears are the sort of alarmist panic that the American media loves to trade in. I'd like to see statistics on violent crime reported by Asians and Pacific Islanders, rather than mostly anecdotal reporting or the dozen or so high-profile attacks. I don't see this sort of breathless but shallow reporting as a conspiracy, but just run-of-the-mill bandwagoning.
Maybe I was misreading it, but to me at the time it seemed like a flood of unreasonably positive people gushing about something they couldn't really have had any experience with.
Ah, but this is just proof that the communist sleeper agents are entrenched even deeper among us than we expected!
Unfortunately, it seems that we all do this. It's just easier to notice when other people are doing it!
Edit: I got to it! See the lower portion of https://news.ycombinator.com/item?id=27398725, after the "---".
If you or anyone notices something wrong with the argument, I'd like to hear what it is.
Plenty of orgs are surely trying to do that actively for all sorts of reasons. No idea how successful they are, probably tough to tell.
The spookiest thing of all is that most of the effect might be genuine grassroots action. Picture a Chinese nationalist poster here: a genuinely independent tech enthusiast who happens to know enough English to participate in an English-language forum. Perhaps they are genuinely annoyed by what they see as westerners meddling in their internal politics, of which there is a long history. Perhaps they flag what they see as clickbaity stories likely to lead to a bunch of China-bashing out of genuine annoyance. They don't need to be paid or leaned on by the CCP at all; they just actually feel that way.
Dammit, now I sound too apologetic about it. Sigh...
The pattern seems clear that these users are flagging the more sensational kinds of submissions that tend to lead to predictable discussions and flamewars. There's room for competing opinions about which of those are/aren't on-topic for HN, given the site guidelines; if you or anyone want to understand how the mods look at it, I recommend the explanations at the links below. But clearly the flagging behavior in this case was in good faith.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
We have banned people in a few cases for serious $BigCo astroturfing but there's always a grey area in the Venn diagram around "PR operation" and "overzealous fan". You can't tell those apart without a smoking gun and those are hard to come by. Fortunately, from a moderation point of view it's a distinction without a difference because the effects on the site are the same.
Also FWIW, my sense (and we do have circumstantial evidence for this) is that even when these things are PR, they're somehow haywire (e.g. a contractor gone rogue), not official strategy, and if high-enough execs found out about it they'd probably shut it down. That's just speculation though; informed speculation, but not highly informed.
You guys need to realize that you have a trump card (can I say that? or too soon?) that users of other platforms don't have: direct access to the people running the platform, who are willing to answer any question about it.
Btw I'm not necessarily agreeing that those were bad flags; but in cases like this, the community has the final say.
This problem happens again and again with hot topics, and at the moment the default behavior is to let them disappear, which is a bit lame.
There was an interesting report on German TV, where they analyzed a paper looking for bot patterns on Twitter. That paper named some offending accounts, so what they did was PM one - and it turned out that it simply belonged to a pensioner with strong political opinions and a lot of free time. Interesting to look behind the cover sometimes (though I do think that TLAs realize this power and don't let it slide, to some extent at least).
> I'll be honest, at first I wasn't too keen on your moderation style, as I found it too heavy-handed.
It's interesting how viewpoints diverge - for quite some time when I started reading, I actually did not realize that HN was moderated. If I may ask, where did you encounter so much heavy moderation?
I do not want to single out a single company, but would like to use this particular example to ask you the following: please keep in mind the level of manpower and persistence some of these corporations can call upon for their strategic objectives.
In 2020 Microsoft had, apparently, 106 lobbyist companies working on its behalf: https://www.opensecrets.org/federal-lobbying/clients/lobbyis...
and 94 in 2021 https://www.opensecrets.org/federal-lobbying/clients/lobbyis...
Looking at the website of some of these companies, offered services include and quoting: "Third party influencer outreach" :-)
I think social media (sorry for calling this site that) vote manipulation detection will be one of the defining problems of the decade.
A couple places. The one that bothered me most was that titles would get changed without asking or notifying the poster. Sometimes they would get changed to something I didn't think made sense, and then I looked like I had done that, since there was no indication that it was changed. I guess I'm still not a huge fan when it happens to me, but I see why it happens.
I also didn't like having my comments detached or cooled. If you reply to a top-level comment with a good comment that happens to generate a flame war under you, it will get detached from the top into its own thread, and that just felt weird because it made it look like I made a non sequitur top comment, and it also stifled discussion (which was the goal, of course).
Also if you make a comment that gets a ton of votes but is perceived as off-topic, they will put a flag on your comment that makes it fall in the rankings. So based on its points and age it should be up at the top, but instead it will be near the bottom, sometimes under comments with negative scores.
Lastly, I have dead comments turned on, and I would see dead comments that I didn't think deserved to be dead. Eventually I got enough karma that I could vouch, which helped.
Those were my main moderation complaints. I still don't particularly like when it happens to me, but usually when I see it happen to other people I think, "yeah that makes sense".
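The off-topic penalty mentioned two paragraphs up can be pictured as a multiplier on the usual score decay. The base formula below follows the oft-cited public description of HN ranking (roughly points divided by age to the 1.8 power); the penalty constant is a guess at the mechanism, not the real value:

    def rank_score(points: int, age_hours: float, penalized: bool) -> float:
        base = (points - 1) / (age_hours + 2) ** 1.8
        return base * (0.2 if penalized else 1.0)  # hypothetical multiplier

Which would be why a comment can sit below zero-point replies despite its points and age.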
Or you could do what I did: something unknown that resulted in years of being limited to about 4 replies a day.
Or the latest minor symptom of a chronic illness.
This wouldn’t have happened, accidentally or not, if it wasn’t for the continuing and constant bullying of the Chinese government, and the willingness of international actors to kowtow to it.
(I've also done that by mistake like four times.)
Spoil the fun... fun at someone else's expense... your expense, sigh. Spoil away.
Interesting. I blame the aliens.
The CCP has zero say in how HN operates.
For the most part the users of HN are in control of what's displayed; it's probably one of the most censorship-free sites on the planet. You should see how often my karma fluctuates because I express an unpopular opinion. It doesn't bother me, because I know it's about people and not an algorithm.
This one is interesting to me, because I have emailed the moderators to do exactly this for highly upvoted comments that I feel take the discussion into the wrong places. I can understand that for a new commenter such tangents might be novel, but for someone who's been around here for a while, I am curious if you oppose such actions for the nth time that someone drags "here's my article about a new C++ feature" into "honestly C++ just keeps adding too many things, discuss".
Why even allow flagging to influence a submission's ranking without mod intervention in the first place? Spammy links won't reach the front page anyway (although they sometimes stay on Show HN for a while, so you could make special exceptions for Show/Ask submissions).
And how about this: if a user flags a post, you might consider making the post completely inaccessible to that user (and if they posted a comment in the submission, set their flag weight to 0 for that submission). After all, good actors will have no interest in engaging with submissions they flagged, whereas bad actors will want to attack the submission from all angles (flag the submission, downvote comments, post their own comments).
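A sketch of that rule as code (all names hypothetical): flagging a submission hides it from you, and a flag from someone who also commented in the thread stops counting.

    from dataclasses import dataclass, field

    @dataclass
    class Submission:
        flagger_ids: set[int] = field(default_factory=set)
        commenter_ids: set[int] = field(default_factory=set)

    def effective_flag_weight(user_id: int, sub: Submission) -> float:
        # A user attacking from all angles (flag + comments) loses the flag.
        return 0.0 if user_id in sub.commenter_ids else 1.0

    def visible_to(user_id: int, sub: Submission) -> bool:
        # Good-faith flaggers shouldn't need to revisit what they flagged.
        return user_id not in sub.flagger_ids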
That said, I normally chalk it up to the site's topics and interests being a little more diverged from my own, which is perfectly fine, as I typically enjoy the moderated approach over the constant outrage and flame fests I see elsewhere.
So you have a long-term, otherwise great user who contributes positively but has a strong opinion on a political issue that you don't want all over your forum, for or against. People tend to flag it for you, but it's not controversial enough to fall in a hail of flags. Keeping it flagged fortunately takes relatively little effort, saving mod effort.
You introduce vouch for flagged stories. Now your user vouches for absolutely EVERYTHING on his side of an issue and his opposite number vouches for everything on the other side.
Content that isn't low quality and resonates with a good number of people is likely to attract votes even if it's off topic and ultimately not desired on that forum. Direct and constant mod effort is now required to keep it off, because super-upvotes now counter super-downvotes. Welcome to your new political forum.
Any time vouching triggers extra attention, the decision is recorded in the database. If someone routinely vouches and gets overruled (i.e. vouching for bad content), then their vouching no longer counts in the future.
At some point after the system is introduced, start giving extra weight to people whose vouching decisions line up with moderators.
Worst case, this is just flagging stuff for extra moderation attention so there's not a lot to abuse. If it's requiring too much extra attention, adjust the required vouch:flagged ratio or raise the threshold needed to vouch "For Real".
(I'm not saying I see anything wrong with the current system - I tend to appreciate how well this particular Walled Garden is tended to. But the vouch idea seemed cool to me, and I felt like I could contribute a useful implementation.)
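Here's roughly what that implementation could look like (the thresholds and the update rule are invented for illustration): every vouch that triggers moderator attention gets recorded with the outcome, and users who are routinely overruled stop counting.

    from dataclasses import dataclass

    @dataclass
    class VouchRecord:
        upheld: int = 0      # a moderator agreed with the vouch
        overruled: int = 0   # a moderator removed the content anyway

    def vouch_weight(rec: VouchRecord) -> float:
        total = rec.upheld + rec.overruled
        if total < 5:
            return 1.0                  # too little history: neutral weight
        agreement = rec.upheld / total
        return 0.0 if agreement < 0.3 else agreement

    def resurface_for_review(flag_score: float, records: list[VouchRecord],
                             ratio: float = 0.5) -> bool:
        # "For Real" vouching: weighted vouches must reach a tunable
        # fraction of the flag score before mods get pinged.
        return sum(vouch_weight(r) for r in records) >= ratio * flag_score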
In a hilarious twist of fate, searching for that term brings up papers on either medical research or peer-review reliability problems in general[0]. You try to find data on a potentially abstract, complex societal issue, and come up with what can only be described as attention-grabbing HN catnip.
0: https://www.cambridge.org/core/journals/behavioral-and-brain...
Want to censor a thread on HN? Flag it with a few different users, or turn the thread into a shitshow so that the "flamewar" tools will be triggered, or moderators will be forced to push the thread off the frontpage.
https://upvotetracker.com/post/hn/27395635
Related Show HN by @janmo: https://news.ycombinator.com/item?id=27092770
Edit: I'm not sure I'm a big fan of his current sponsoring link however... maybe worth a look too.
> What does [flagged] mean?
> It means that users flagged a post as breaking the guidelines or otherwise not belonging on Hacker News.
Edit: oof, that link does look awful, doesn't it? Most "how to get on HN's front page" content is terrible; it doesn't work and induces people to post dross and pull tricks that just degrade the site. I've got a set of notes about how to write for HN that I want to publish someday. If anyone wants a copy they can email hn@ycombinator.com.
I can't comment on this particular slick content marketing course because apparently you have to buy it to find out what it says, but previous ones I've seen have been entirely unreliable, and the look and feel of the ad certainly seems antithetical to the spirit here.
Tao Te Ching, Ch. 17,
With the best kind of rulers
When the work is complete
The people all say
"We did it ourselves."
(Kinda totally destroys e.g. Machiavelli et al., eh? And it's Chinese, huh, FWIW, and old.)
In re: Option 2:
https://xkcd.com/810/ "Constructive"
> [[A man is talking to a woman]] Man: Spammers are breaking traditional captchas with AI, so I've built a new system. It asks users to rate a slate of comments as "Constructive" or "Not constructive".
> [[Close up of man]] Man: Then it has them reply with comments of their own, which are later rated by other users.
> [[Woman standing next to man again]] Woman: But what will you do when spammers train their bots to make automated constructive and helpful comments?
> [[Close up of man again]] Man: Mission. Fucking. Accomplished.
> {{Title text: And what about all the people who won't be able to join the community because they're terrible at making helpful and constructive co-- ... oh.}}
Cheers dang.
> Why not automatically punish users that abuse flagging to censor stories
The problem is with the words "abuse" and "censor". No one can agree on what they mean because it depends on what you think of the underlying story, and when it comes to divisive topics, people have strongly differing views on that.
When the topics aren't so divisive (e.g. Conway's Game of Life on the GPU, or something like that), this is not such a thorny problem. But those aren't the cases that we're talking about in this thread.
How easy would it be for bad actors, brigading with freshly created accounts or with not-so-freshly created accounts that have a history of brigading, to abuse this feature to censor stories?
I can attest to this: at one of my old companies a post related to us ended up getting removed, just because so many of our engineers (entirely independently of the company) voted or commented on it. After that there was a very strict instruction from the company _not_ to engage with any posts about us...
Infinite troll accounts would ideally be defeated by all the moderation being exposed and distributed, and infinite pseudonym accounts would make toe-the-line efforts too expensive.