Within the next 10 years, and maybe much sooner, the vast majority of content on FB/Twitter/Reddit/LinkedIn will be completely fake. The "people" on those networks will be fake as well. Sure, there are bots today, but they're not nearly as good as what I'm talking about, and they don't exist at the same scale. Once that happens, the value of those networks will rapidly deteriorate as people seek out more authentic experiences with real people.
IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.
A western reporter travelled to the other side of the iron curtain once and was doing what he thought would be an easy west-is-great gotcha-style interview. He asked someone over there, "How do you even know what's going on in your country if your media is so tightly controlled?" Think Chernobyl-levels of tight-lipped ministry-of-information-approved newspapers.
The easterner replied, "Oh, we're better informed than you guys. You see, the difference is we know what we're reading is all propaganda, so we try to piece together the truth from all the sources and from what isn't said. You in the west don't realize you're reading propaganda."
I've been thinking about this more and more the last few years seeing how media bubbles have polarized, fragmented, and destabilized everyone and everything. God help us when cheap ubiquitous deepfakes industrialize the dissemination of perfectly-tailored engineered narratives.
Will they? People interact with these things because they are giving the brain what it wants, not what it might need. How many people would flock to a verified minimal bias news site? How many people would embrace so many hard truths and throw off their comforting lies? How many people could even admit to themselves they were being lied to and had formed their identity around those lies?
Do people want authentic now? The evidence says no.
If I can interact with bots that emulate humans with such a degree of realism, what do I care? You could be a bot; the whole of HN could be bots. I don't really care who wrote the text if I can get something from it. I mean, I have no idea who you are, and I don't even read usernames when reading posts here on HN.
At its core this seems like a moderation issue: if someone writes bots that just post low-quality nonsense, ban them. But if bots are merely wrong or not super eloquent, I can point you to Reddit and Twitter right now and you can see a lot of that same low-quality nonsense, all posted by actual humans. In fact, you can go outside and speak to real people and most of it is nonsense (me included).
Universal cynicism and nihilism may function that way. But that was not the attitude of the person in the description, so I'm not sure how that's relevant.
I'm skeptical that this can be done effectively
The level of control/conformity on canonical Western media was such that, for most topics of daily news, thinking about the bias of the reporter was not a first-order concern.
For some topics (let's say, hot-button US-vs-USSR things, or race issues in the US), the bias of the source was of course important, anywhere.
But for, say, reporting inflation, unemployment, or the wheat harvest, whether NBC news or the Washington Post was biased wasn't critical in the same way it would have been in the USSR.
Basically, my argument is that the difference in degree is still a worthwhile difference.
On the flip side, successful startups that aren't fully social networks but do require some authenticity verification have already been proven: Nextdoor and Blind, for example.
I think the biggest issue is that scaling to a Facebook-style, Reddit-style, or Twitter-style "full-world" social network implies colliding people who have no other relationship or interaction but are linked through a topic or shared interest.
And, in my opinion, when you hit a certain level of scale, the verification almost becomes pointless: there are enough loud, angry, trollish people out there that I don't think it matters whether they're verified or not. You can't moderate away toxicity in discussions that include literally a million participants.
I think you need both verification and some way to keep each user's subnetworks small enough that they aren't toxic or chilling. But then you lose that addictive feed of endless content that draws people to Reddit or Facebook or Instagram. Tough problem.
I think you have completely misread the situation. The "fakification" of social media is already happening. Much if not most engagement is already driven by bots or by fabricated "influencers" and more people are using these platforms more often, not less.
I’ve been wondering whether teachers who grew up on the other side of the curtain put a similar emphasis on the topic of propaganda, especially after social media uncovered so much gullibility in the general public, along with a trust in anything written down somewhere that I find very difficult to understand, often without even looking at the source. Political effects of eastern German brain drain aside, one important difference between people in the former western and eastern parts of Germany, up until today, is how much they trust media and institutions like the church.
I think the critical threshold for most people will be when bots start impersonating people they know in person. At that point, the value of the social networks will evaporate.
That said, there are clearly some social networks where you absolutely want to verify authenticity. Take for example, dating websites. Fake profiles _TODAY_ are a huge problem for those sites. If you have too many fake profiles, then paying users just log off and never come back. Same for LinkedIn. How many recruiters are going to pay for access to that network if 30% of the profiles are fake?
It seems like crowd-sourced moderation is probably the only thing that will work at scale. I've always wondered why Reddit doesn't rank comments by default according to someone's overall reputation inside of a subreddit and then by the relative merits of the comment on a particular subject. Getting the weighting right would be hard, but it seems like that would be the best way to dissuade low quality comments and outright trolling.
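As a rough sketch of that weighting idea, here's one way it could look; the reputation numbers, blend weight, and all names are made up for illustration, not anything Reddit actually does:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    score: int  # net votes on this particular comment

# Hypothetical per-subreddit reputation: net karma each user has earned
# inside this one subreddit, not sitewide.
reputation = {"alice": 420, "bob": 15, "carol": -30}

def rank_key(comment: Comment, rep_weight: float = 0.3) -> float:
    """Blend the comment's own votes with the author's standing in the subreddit."""
    rep = max(reputation.get(comment.author, 0), 0)  # negative rep just counts as zero
    return comment.score + rep_weight * rep

comments = [Comment("bob", 12), Comment("alice", 5), Comment("carol", 20)]
ranked = sorted(comments, key=rank_key, reverse=True)
```

With these made-up numbers, alice's modest comment outranks carol's higher-voted one because of her track record in the subreddit, which is exactly the "getting the weighting right is hard" part: too much reputation weight entrenches incumbents, too little and it does nothing.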
The cliche "if you're not paying for it, you're the product" is just the tech nerd's version of "if you don't know who the fish at the table is, you're the fish."
Folks behind the iron curtain got used to that mentality over a few decades in a time when information flowed slowly through newspapers, radio, and early TV... we're now being forced to reckon with these tricks over the course of a few years while moving at the speed of industrialized data collection, microtargeting, and engineered dopamine bursts that maximize engagement.
People living in the cold war era were at least mentally inoculated against these tricks -- in the US we've had no preparation for it. The ease with which we've turned against each other for the easy popcorn comfort of the conspiracy theory or outrage du jour is mind boggling.
The real difference is that those in the east were predisposed to be suspicious, whereas in the west that disposition or curiosity is not a thing.
You can add levels.fyi to that list, as they now use actual offer letters to build their data set.
I don't share your optimism. Significant portions of the population believe the Earth is 6000 years old or is flat. Not sure why their critical thinking skills would suddenly improve at an opportune time.
The US government does authentication in real life via Social Security numbers. Of course, those are not very secure; a government-operated SSO or auth API for third-party applications would be a logical next step.
It would guarantee uniqueness and authenticity of users. Even better, if this were an inter-governmental program, it would deter government meddling: a state issuing too many tokens for fake accounts would arouse suspicion.
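A minimal sketch of how such issuance and verification could work. Everything here (the key, the token format, the function names) is a hypothetical illustration, not any real government API, and a real system would use asymmetric signatures rather than a shared key:

```python
import hashlib
import hmac

# Hypothetical scheme: the issuer derives a service-scoped pseudonym from the
# citizen ID, so the SSN itself is never shared with the third-party service,
# then signs it so the service can check that the government really issued it.
GOV_KEY = b"demo-signing-key"  # placeholder secret for the sketch

def issue_token(citizen_id: str, service: str) -> str:
    """One stable token per citizen per service: uniqueness without exposure."""
    pseudonym = hashlib.sha256(f"{citizen_id}:{service}".encode()).hexdigest()
    sig = hmac.new(GOV_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return f"{pseudonym}.{sig}"

def verify_token(token: str) -> bool:
    """A relying service checks the signature before trusting the pseudonym."""
    pseudonym, sig = token.split(".")
    expected = hmac.new(GOV_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because the pseudonym is derived deterministically, a citizen signing up twice for the same service produces the same token, which is what would let the service enforce one account per person, and what would make a state minting extra tokens statistically detectable.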
It doesn’t seem like people there are obviously better at media consumption, let alone inoculated?
Ironically, accounts with Twitter's blue check mark are often the accounts most likely to be managed by a social media manager.
A relevant, if flip, solution to the 'bot' issue[0].
I wish those skills were teachable without recreating the full environment...
In my opinion, HN is the gold standard of online communities, and it's being managed pretty well despite having scaled to what it is right now.
I wonder whether more learnings from HN (especially on the moderation front) can be applied to newer social platforms.
Any kind of widely used identity/authentication system would need to be a protocol and not a product of a for-profit corporation. Businesses take on great risks if they use another corporation's products as part of their core operations as that product owner can change the terms of service at any time and pull the rug out from under them. A protocol is necessarily neutral so everyone can use it without risk in the same way they use HTTP.
For identity protocols I think BrightID (https://www.brightid.org/) is becoming more established and works pretty well.
Of course, this also assists in Social Cooling, since controversial statements act a lot like totally false ones in the public eye.
Presiding over steadily improving living standards tends to give leaders staying power in every country. Putin was there for Russia's bounceback from the 90s.
... Which is a good thing. (for the users, at least)
As to reporting unemployment: https://news.ycombinator.com/item?id=24364947
"If you agree with it, it's truth. If you don't agree, it's propaganda. Pretend that it is all propaganda. See what happens on your analysis reports."
Mad magazine used to run "reading between the lines" pieces.
[1] A while ago I learned that The Game of Rat and Dragon is accurate insofar as felines don't just have better reflexes than we do; theirs are among the best.
Now that's authenticity verification.
You know, where they have those opinion pieces always with the same 6 photos (but a different name & occupation) each spouting something humorous?
And curiously, there is some truth hidden within each Onion article.
We only got this problem with users trying to do house cleaning. Most communities are completely fine without authentication, so it certainly isn't necessary.
Not so sure. I'd rather wager that people won't really care whether they interact with real humans or not. Why would it matter? It's not rare for people to relate to and feel emotions for virtual characters in video games, even though they are perfectly aware it's all fake! The same can be said for movies and TV shows. You know it's fake, yet you watch and enjoy. I'm not sure why it would be ANY different for social networks, which are basically just another form of entertainment.
But it's in Pepsi's and Coke's best interest to have you think it's only those two.