When you think of privacy in terms of 'social cooling', or consider things like China's 'social credit' system, I can't help but think we are much closer to the world depicted in the last season of Westworld than we might want to admit.
Sometimes I think that authors who see patterns and make reasonable but dire predictions about where society is going actually end up providing a game plan to career oppressors.
So of course we have something to hide.
My dad is one of those old school guys who thinks law enforcement can do no wrong and nobody needs to hide anything unless they're doing something wrong. Even if that were true, and I do think many law enforcement personnel are trying to do good, the results won't always reflect their intentions. When the sample size of facts is too small, as is often the case with mass collection, it's too easy for your sample to get mixed up with someone else's. Maybe your phone is the only other phone in the area when a murder is committed. That doesn't mean you did it, but it sure makes you look like the only suspect.
I was never able to gain an inch on his argument until I asked him why he has curtains on his living room window. It faces north, so there's no need to block intense sunlight, yet he closes them every night when he's sitting there reading a book or watching TV. Why? He's not doing anything illegal, yet he still doesn't want people watching him. He said he would not be okay with the police standing at his window all night watching him. That's when he finally understood that digital privacy is not just for criminals, but for everyone who wants to exist in a peaceful state rather than a police state.
I'm not doing anything wrong, but I still close the door when I take a dump. The idea that someone wanting privacy means it is nefarious or wrong is ridiculous.
You may need a good life on display rather than just an absence of bad things.
And misleading. Privacy in private interactions (personal or closed groups) is a basic human right. But in public interactions (public spaces or open groups) the concept of privacy is much more problematic. One can argue for less accountability in the name of social progress, another for more accountability to weed out bad actors.
Seems to me that using the word 'privacy' for both of these different concepts is a source of confusion. Perhaps we should limit the term 'privacy' to private interactions and use some other term (like 'non-accountability') for public ones.
To answer your question, people aren't always seen as intrinsically valuable, nor their suffering meaningful. In the wrong context, corporations, congregations, and other populations are only valued for what they produce, like how cows are valued (and raised) for their milk and meat.
Cattle are products on a farm. They have purposes. A few bulls are left for breeding, the rest are gelded. Some cows are for milk. Others are fattened up as much as possible.
But all end up in the slaughterhouse. Any cow that steps out of line and causes problems before that time may find itself culled from the herd.
The purpose of the system is not to make cows happy, or meet cow needs. It's to produce as much economic product as possible.
(... not saying dumps are advanced noise, but this is on the right track. Don't hide the needle. Produce more haystack)
[1] https://www.ftc.gov/system/files/documents/reports/reduced-d...
In our culture we feel deep embarrassment if someone sees us using the toilet, but this is not universal across people and cultures, and honestly, it shouldn't be embarrassing. There's nothing inherently wrong with pooping. We irrationally feel embarrassment when we shouldn't have to.
This argument doesn't show any negative consequences of invasion of privacy. It's also not clear how it extrapolates to situations that don't involve toilets or nudity. If the problem is embarrassment, and people don't feel embarrassed that Facebook collects data, does that make it okay?
Obviously there are other arguments for privacy that do show potential harm. I find these more compelling.
My local Facebook group seethes with angry discussion just below threats of actual violence - and the actual violence was on display only a short time ago, when Back The Blue physically assaulted a Black Lives Matter demonstration (in a smallish city where "BLM" is just earnest liberals, as you'd expect). The miscreants were readily identifiable via Facebook (which hurt their business if nothing else), but they still weren't all that bothered by the situation.
Another thing about the heated local-group arguments is that few people have a good idea how unprivate their situation really is. The paranoia about Bill Gates "microchipping" people is a cartoonish example, but there's a vast group of people very concerned with privacy while having close to no understanding of what it actually involves (or how much they don't have).
If anything, the noxious effect of mass collection is most evident in the micro-marketing of a variety of crazed ideas to those most susceptible to them - and in employers and landlords being able to harass their own employees and tenants over particular things they object to (while letting a lot of things through; business owners have less to worry about).
I'm not arguing, I'm just not sure what you mean.
In contrast, if you have full information, you can construct pricing schemes that fully extract all surplus from the consumer. You can, in essence, get higher prices without losing customers. Many pricing schemes today are trying to use more information to approximate that situation (for example auctions, anything with subscriptions, fixed components, packages etc.). It is why firms like Amazon and Google hire a lot of Economics PhDs and Game Theorists. You will also notice that many products are pushing toward such pricing models. This is not by accident.
So, your contention is half right and half wrong. In the greater scheme of things, full information is often (but not always) efficient for total welfare. However, in such a situation total welfare may also accrue entirely to firms. That means higher profits first, and higher costs for the consumer second.
In effect, you will pay more if you are more known.
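A toy example of that surplus extraction, with made-up willingness-to-pay numbers: under a single uniform price the seller must leave some consumer surplus on the table, while perfect (first-degree) price discrimination, which requires knowing each buyer, captures all of it.

```python
# Toy model: uniform pricing vs. perfect price discrimination.
# The valuations below are invented for illustration only.
valuations = [10, 8, 6, 4, 2]  # each consumer's maximum willingness to pay

def revenue_at(price, vals):
    """At a uniform price p, only consumers with valuation >= p buy."""
    return price * sum(1 for v in vals if v >= price)

# Best the seller can do with a single price for everyone:
best_uniform = max(revenue_at(p, valuations) for p in valuations)

# With full information about each buyer, the seller charges each one
# exactly their valuation, extracting all consumer surplus.
perfect_discrimination = sum(valuations)

print(best_uniform)            # 18 (charge 6; the buyers valuing 10, 8, 6 pay)
print(perfect_discrimination)  # 30 (every buyer pays their full valuation)
```

The gap between 18 and 30 is exactly the surplus that better information about you lets the firm take.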
It then depends on your faith in the fairness of the ownership and distributional properties of our capitalist systems, as well as the efficiency of the markets in question (e.g. competition), whether the increased profits are eventually redistributed to you, the consumer.
It seems to me that in many of the markets in question, even the description of oligopoly would be rather charitable. In that case, latter parts of your post do not seem likely.
Edit: Since you asked for the principles. The first iteration of this you may come across is called price discrimination. At that stage it's not framed in terms of information, but you can make that link in your head quite easily: the ability to set different prices depends, of course, crucially on what you know.
Next, you may hear about auctions or contract theory, where such problems are tackled explicitly. Switching the roles, you may hear about principal-agent problems, where something similar (really the same thing) occurs. For full generality, you may want to read into Mechanism Design. Tilman Börgers has a great book which used to be available free as a PDF, and you can probably still find it. If you are interested in questions such as "What can we say generally about any sort of sales contract?", then this is a good place to start. Needs some math, though.
So instead of an ad blocker, we could have background bots in our browser visiting random URLs and clicking on every ad in sight (of course it would need to mimic human UI input).
I wonder what effect that would have.
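The "produce more haystack" idea could be sketched as a scheduler that picks random decoy pages to visit with human-ish delays. Everything here is hypothetical: the URL pool, the delay range, and `make_noise_schedule` are made up for illustration, not part of any real extension.

```python
import random

# Hypothetical decoy pool -- a real tool would maintain a much larger,
# regularly refreshed list of benign pages and ad landing URLs.
DECOY_URLS = [
    "https://example.com/news",
    "https://example.org/shop",
    "https://example.net/blog",
]

def make_noise_schedule(n, rng=random):
    """Pick n random decoy URLs, each paired with a human-like delay (seconds)."""
    return [(rng.choice(DECOY_URLS), rng.uniform(5.0, 120.0)) for _ in range(n)]

# A real implementation would drive a headless browser through this schedule,
# loading each page and simulating clicks with believable input timing.
schedule = make_noise_schedule(5)
```

The hard part isn't generating the traffic but making it statistically indistinguishable from a human's browsing, otherwise the noise is trivially filtered out.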
It sounds more respectable if you call it an 'intuition pump'. Whether or not it is rational to want to defecate privately, this point may lead some fraction of those whose mind was previously made up to reconsider their position. In those cases, it can be the beginning of a conversation.
But I don't think it's the strong argument in favour of privacy that we want to make, because:
1. We do give people privacy in the bathroom. The debate is over the data social media companies collect. If people aren't generally embarrassed that Facebook collects data about what they post on Facebook, how does it relate to being embarrassed to be seen on the toilet?
2. Do we always have to accommodate irrational feelings? What about people who are easily offended by things that most would consider non-offensive? Is it immoral for a child to dress as a clown on Halloween given that some people have coulrophobia? If you're arguing with someone who believes law enforcement should have access to people's social media and you bring up that stuff posted on social media could be embarrassing, the obvious response is, "Well, too bad. Investigating crimes is more important."
But that is precisely the rational reason. In a free society you want people to act freely. To be able to act freely, it helps tremendously not to be under constant surveillance by authorities, powerful actors, and/or personal and political enemies. If one happens to have the same cultural background or political ideas as those on the other side, and is generally of a carefree nature, that helps one not feel threatened by that surveillance.
The new thing digital surveillance brought is the ability to automate it and to search through things that happened in the past. In communist East Germany the state had to maintain a giant apparatus that would break into your flat and install microphones, with people constantly following you around and listening in on every word you said. The impact this has on a free exchange of ideas is quite obvious, isn't it? These things have become far less resource-intensive in the age of the web.
And if you now say: "Yeah but they were communists" — that is the point. If you are hoping those in power will be respectful because your values (currently) align with theirs; or because your information is (currently) more useful to them when not disclosed to your enemies — then this is a very optimistic view of the world. But things can change, and not all have that sense of optimism.
Not having to think about whether somebody will knock on your door with state police in a decade because of something you wrote online is the reason why privacy exists. Not having to censor yourself because you are afraid those fringe lunatics on the opposite political side will destroy your life is the reason why privacy exists. Not having to censor yourself because your violent husband reads everything you write is the reason why privacy exists.
So maybe you can read this as: Power that sees what you do can (and does) change how you act, even if they don't come after you. Not having them see you is a good way of not having to change.
> But that is precisely the rational reason.
I'm not following your reasoning here. You list several logical reasons why digital privacy is important (it protects us from nefarious governments, it protects us from violent spouses, etc.). What does this have to do with an irrational embarrassment over pooping?
Whether this fear is rational doesn't matter. Whether these intrusions are ever actually carried out, or only ever remain a faint possibility, a story the actors make you believe, doesn't matter.
John Oliver used a similar tactic when speaking about Edward Snowden and the Patriot Act. Instead of framing it in terms of rights, privacy, and so on, he talked about dick pics. It kinda worked? https://www.youtube.com/watch?v=XEVlyP4_11M
When an employer, for instance, is able to request from data aggregation services a breakdown of your entire life, with forced consent from you or none at all, or is able to monitor and analyze your every step during working hours, it's dehumanizing.
Similarly, it doesn't matter whether those with access to data about you have only good intentions. It may be pleasing to have a store know everything you like and need right in the moment, but you should still be able to walk in and out (pseudo-)anonymously when you wish to.
Same with the state. We say not to talk to the police. In trials, the determination of what evidence can be admitted is always an important step. So why should the police, prosecution, intelligence agencies, or any other entity be able to access or collect data about you and evaluate it without due process?
1) social cooling is a long-term, slow-burn, bring-pot-to-boil-so-slowly-the-frogs-don't-notice problem. Pointing out some social heat to discredit it is analogous to people discrediting global warming because they've experienced an unseasonable cold snap in their town.
2) By your own description, there are knowledge gaps inside the "social fire" crowd - they don't understand (potential, future) consequences like housing discrimination, work prospects, etc. I don't think it will take more than one generation for these realities to become common knowledge.
3) Finally, people who consider themselves hopelessly marginalized will be susceptible to 'social fire'. People who don't have anything to lose are prone to this (eg, what factors go into someone's decision to get on board with looting?). More solidly situated members of the public, with reputations (salaries, ongoing business concerns, etc) at stake, are likely to be more careful.
My dad has a Ph.D in Econ from an Ivy League institution, and lives near-ish to a few FAANGs. He's retired but gets headhunter emails from them consistently.
I don't see what's so problematic. If someone is in public, they are exposed and obviously don't have any privacy. Same logic applies to data people publish on the internet. People can attempt to create some privacy for themselves in these contexts but it's not really a violation or invasion if some stranger shows up and witnesses things they weren't supposed to.
It's completely different from someone's house or computer. These are our spaces and we have complete control over them. So someone installing sensors such as microphones and cameras inside our own homes is a massive violation of our rights. Everybody understands this. It's offensive when the state does it even when warranted. So it is also not acceptable for mere corporations to turn on our microphones in order to listen to keywords or some other surveillance capitalism bullshit.
Otherwise your argument becomes "I don't understand what these things are or why people care about them, and therefore perhaps they don't matter."
And that's not a strong argument.
It's not possible to understand the parent comment without knowing what dignity is.
> Otherwise your argument becomes "I don't understand what these things are or why people care about them, and therefore perhaps they don't matter."
What argument? I literally said, "I'm not arguing, I'm just not sure what you mean." I was just asking for clarification. I haven't denied anything in the parent comment.
Could we perhaps group together embarrassment, loss of dignity and shame and summarize the point as follows?
"Invasions of privacy cause psychological harm."
Rights to wear clothes. Rights to not speak to anyone they don't want to. Rights against unreasonable search. These are all privacy related, and while we give some up to be a part of society, we retain some as well. Looking at this as black and white (on either side) is an obstacle to finding a sustainable and constructive path forward.
We were 'almost' there 20 years ago. We are firmly near Westworld (everything outside of androids).