Very soon, the domain of bullshit will extend to actual text. We'll be able to buy HN comments by the thousand -- expertly wordsmithed, lucid AI comments -- and you can get them to say "this GitHub repo is the best", or "this startup is the real deal". Won't that be fun?
The obvious problem is that we don’t have any great alternatives. We have CAPTCHAs, we can look at behavior and source data (IPs), and of course everyone’s favorite, fingerprinting. To make matters worse, abuse, spam, and fraud prevention lives in the same security-by-obscurity paradigm that cyber security lived in for decades before “we” collectively gave up on it and decided that openness is better. People would laugh at you if you suggested abuse tech should be open (“you’d just help the spammers”).
I tried to find whether academia has taken a stab at these problems, but came up pretty much empty-handed. Hopefully I’m just bad at searching. I truly don’t get why people aren’t looking at these issues seriously and systematically.
In the medium term, I’m worried that we won’t address the systemic threats, and will continue to throw ID checks, heuristics, and ML at the wall, enjoying the short-lived successes when some classifier works for a month before it’s defeated. The reason this is concerning is that we will be neck-deep in crap (think SEO blogspam and recipe sites, but for everything), which will be disorienting for long enough to erode a lot of trust that we could really use right now.
There's always the identity-based network of trust: several other members have to vouch for a new person before they're included.
If someone down the line does some BS activity, the accounts that vouched for them have their reputation on the line.
The whole tree around the person who did the BS, plus 1-2 layers of vouching above them, gets put under review: a big red warning label in their UI presence (e.g. under their avatar/name) and a loss of privileges. The offending account could even just get deleted immediately.
And since I said "identity based", you would also need to provide a real-world ID to get in, on top of others vouching for you. It can be made so that getting a fake account is no easier than getting a fake passport.
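Just to make the vouching / cascading-penalty idea concrete, a toy Python sketch. Everything here (User, vouch, flag_abuse, the thresholds) is a made-up name for illustration, not any existing system, and a real forum would obviously need persistence, appeals, rate limits, etc.:

```python
PENALTY_DEPTH = 2      # layers of vouchers above the abuser that also get flagged
FLAG_THRESHOLD = 3     # flags before an account loses privileges

class User:
    def __init__(self, name, verified_id=False):
        self.name = name
        self.verified_id = verified_id   # the real-world ID check
        self.vouchers = []               # members who vouched for this account
        self.flags = 0

    @property
    def restricted(self):
        return self.flags >= FLAG_THRESHOLD

def vouch(sponsor, newcomer):
    """An existing member puts their reputation behind a newcomer."""
    if not newcomer.verified_id:
        raise ValueError("newcomer must pass a real-world ID check first")
    newcomer.vouchers.append(sponsor)

def flag_abuse(abuser):
    """Flag the abuser, then walk 1-2 layers up the vouching tree."""
    seen, frontier, depth = set(), [abuser], 0
    while frontier and depth <= PENALTY_DEPTH:
        next_frontier = []
        for user in frontier:
            if id(user) in seen:
                continue
            seen.add(id(user))
            user.flags += 1              # the UI would show the red label here
            next_frontier.extend(user.vouchers)
        frontier, depth = next_frontier, depth + 1

# Example: carol vouched for bob, bob vouched for the spammer, so all three share blame.
carol = User("carol", verified_id=True)
bob = User("bob", verified_id=True)
spammer = User("spammer", verified_id=True)
vouch(carol, bob)
vouch(bob, spammer)
flag_abuse(spammer)
assert spammer.flags == 1 and bob.flags == 1 and carol.flags == 1
```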
If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.
If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person.
Yes and yes.
>If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.
It's happened already in some cases, e.g.: https://en.wikipedia.org/wiki/Real-name_system
>If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person
How about a requirement to personally know the other person in what hackers in the past called "meatspace"?
Just brainstorming here, but for a cohesive forum, even one of tens of thousands of people, it shouldn't be that difficult to achieve.
For something at Facebook / Twitter scale it would take "bulk verifiers" that are trusted, and where you need to register in person.