The obvious problem is we don’t have any great alternatives. We have CAPTCHAs, we can look at behavior and source data (IP), and of course everyone’s favorite, fingerprinting. To make matters worse: abuse, spam and fraud prevention lives in the same security-by-obscurity paradigm that cyber security lived in for decades before “we” collectively gave up on it and decided that openness is better. People would laugh at you for suggesting abuse tech should be open (“you’d just help the spammers”).
I tried to find whether academia has taken a stab at these problems but came up pretty much empty handed. Hopefully I’m just bad at searching. I truly don’t get why people aren’t looking at these issues seriously and systematically.
In the medium term, I’m worried that we’ll not address the systemic threats, and continue to throw ID checks, heuristics and ML at the wall, enjoying the short-lived successes when some classifier works for a month before it’s defeated. The reason this is concerning is that we will be neck deep in crap (think SEO blogspam and recipe sites, but for everything), which will be disorienting for long enough to erode a lot of trust that we could really use right now.
I can see lots of reasons people might oppose the idea, but I am not sure why it's not a widely discussed option?
(asking honestly and openly - please don't shout!)
Of course we do. The rise of digital finance services has led to the creation of a number of services that offer the identity verification necessary for KYC. All such services offer APIs, so adding an identity verification requirement to your forum is trivial.
Of course, if it isn't obvious, I'm only half joking.
There's always the identity-based network of trust: several other members vouch for new people to be included.
I am very aware of "designing a security system they themselves cannot break" and the difficulties of key management etc.
Would be interested in knowing more from smarter people
(probably need to build a poc - one day :-( )
If someone walks up to me in the voting booth and says "vote for X or I will kill you", that's a crime. If they do it in the pub, it's probably a crime. If they do it online, the police don't have enough manpower to deal with the situation.
We should change that.
Every time some fuckwit tweets "you and your kids are going to get raped to death and I know where you live" because some woman dares suggest some political change, I would like to see jail time.
And if we do that then I can understand your argument, but I would then say it is not valid - in a society that protects free speech.
We have the penetration
(Afaik smartphone penetration is around 4.5-5 billion, and something like 50%+ of those have secure enclaves, but honestly I don't follow that deeply so would defer to more knowledgeable people.)
Much more likely is that I'll vote ignorantly because I lack information that someone withheld because they're intimidated by the authorities.
There isn’t a widely deployed public key network with keys that represent a person, afaik. PGP is the closest maybe?
I think there are cases for anonymous/pseudonymous speech, but I think that's going to have to shift away from disposable identities. Newspapers, for example, have been providing selective anonymity for hundreds of years, so I think there's a model to follow: trusted people/organizations who validate the quality of a non-public identity.
So a place like HN, for example, could promise that each pseudonymous account is connected to a unique human via some sort of government ID with challenge/response capability. Or you could end up with third-party ID providers that provide a similar service that goes beyond mere identity, like the Twitter Verified program scaled up.
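A minimal sketch of what that challenge/response step could look like. This is purely hypothetical: it uses a shared per-account secret and HMAC for simplicity, whereas a real government-ID scheme would use asymmetric signatures on a smartcard or secure enclave, so the verifying site never holds the secret at all.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the ID provider and the user's ID device share a
# per-account secret. Real deployments would use asymmetric signatures
# so the verifier only ever sees a public key.

def issue_challenge() -> bytes:
    """The forum sends a fresh random nonce to the user's ID device."""
    return secrets.token_bytes(32)

def respond(secret: bytes, challenge: bytes) -> bytes:
    """The ID device proves possession of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """The verifier recomputes the expected response and compares safely."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Usage: a valid device passes, a device with the wrong secret fails.
secret = secrets.token_bytes(32)
nonce = issue_challenge()
assert verify(secret, nonce, respond(secret, nonce))
assert not verify(secret, nonce, respond(b"wrong secret", nonce))
```

The point of the nonce is replay protection: a captured response is useless for any future challenge.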
Disposable identities have always been a struggle. E.g., look at Reddit's very popular Am I the Asshole, where people widely believe a lot of the content is creative writing exercises. But keeping up a fake identity over the long term was a lot of work. Not anymore, though!
All of those notions are pre-internet ways of proving identity. In a world where we're all rarely more than an arm's length from a globally connected computer, they're on the way out.
If someone down the line does some BS activity, the accounts that vouched for it have their reputation on the line.
The whole tree (the person who did the BS plus 1-2 layers of vouchers above them) gets put under review, gets a big red warning label in their UI presence (e.g. under their avatar/name), and loses privileges. The account could even just get immediately deleted.
And since I said "identity based", you would need to provide a real-world ID to get in, on top of others vouching for you. It can be made so you wouldn't be able to get a fake account any more easily than you can get a fake passport.
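The penalty-propagation idea above is simple enough to sketch. This is a toy illustration with made-up account names, assuming each account records the single account that vouched for it; when someone is flagged for abuse, the flag walks a fixed number of layers up the vouch chain.

```python
# Hypothetical sketch of the vouching scheme described above.
# Each account maps to the account that vouched for it.
vouched_by = {
    "bob": "alice",    # alice vouched for bob
    "carol": "bob",    # bob vouched for carol
    "dave": "carol",   # carol vouched for dave
}

def penalize(account: str, layers: int = 2) -> list[str]:
    """Return the offending account plus up to `layers` vouchers above it."""
    flagged = [account]
    current = account
    for _ in range(layers):
        voucher = vouched_by.get(current)
        if voucher is None:  # reached a root/founding account
            break
        flagged.append(voucher)
        current = voucher
    return flagged

# If dave posts abuse, dave plus two layers of vouchers get flagged:
print(penalize("dave"))  # prints ['dave', 'carol', 'bob']
```

Tuning `layers` is the interesting design choice: one layer keeps vouching low-stakes, while two or more makes members genuinely careful about who they invite.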
They don't own a key pair. They carry one around, which is owned by Google or some other entity?
If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.
If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person.
Yes and yes.
>If the former, it looks quite impractical unless there are widely trusted bulk verifiers. E.g., state DMVs.
It's happened already in some cases, e.g.: https://en.wikipedia.org/wiki/Real-name_system
>If the latter, then it all looks quite prone to corruption once bots become as convincing correspondents as the median person
How about a requirement to personally know the other person in what hackers in the past called "meatspace"?
Just brainstorming here, but for a cohesive forum, even of tens of thousands of people, it shouldn't be that difficult to achieve.
For something Facebook / Twitter scale, it would take "bulk verifiers" that are trusted, and where you need to register in person.
And this is important because a "fair democratic society" that doesn't need people to be able to protest is, as history has shown many times, only a temporary affair. The best way to keep it is to not give the government the tools a worse government could use to suppress dissent.