For example, imagine someone convinced by the argument "nothing to hide, nothing to fear". Would this example convince them that they do in fact have something to fear? "You might think twice about contacting or meeting people (exercising your freedom of association) who you think might become “persons of interest” to the state." I do not think so; after all, the average Joe does not know such people.
The solution, in my experience when talking to sceptical people who are not convinced of the risks, is to talk about money. Imagine someone with the kind of knowledge we are talking about with mass surveillance. And imagine this person could inform your insurance company. Do you still think that you have nothing to hide? One then only has to show that data is never "safe" and can always be "leaked" to make a very simple, everyday case for why it is not in my (average Joe's) interest to be continuously monitored.
Another good example is the use of shredders at home: perhaps you should suggest people leave their sensitive documents in a box on the pavement outside rather than shredding them.
I'm really still waiting to hear a convincing argument as to why I have something to hide, ideally something practical as opposed to hypothetical or philosophical.
In short, my argument goes something like this:
1) It is possible to manipulate or influence people. This extends to their memories, perceptions, emotions, actions, even complex beliefs. And it can be more or less direct. Humans think of themselves as special snowflakes, but we are actually quite simple.
2) The degree to which one can manipulate or influence other people depends on
___a) the effort and intelligence one expends on it,
___b) the degree to which one has knowledge about the target, and on
___c) how close one is to the target (i.e. are you "under their skin", in their house, or 10 km away; what are your intervention options).
3) As social animals, we have always been subject to the influence of other people. Usually, this influence has been local, fuzzy, costly, relatively obvious to the target and "controllable" (in the sense that knowledge about the target was strongly negatively correlated with (generalized) distance; if the target became suspicious of you, they could simply cut you off or increase the distance).
4) Technology and science are currently changing the rules of the game, in profound, basic ways:
___a) We learn more and more about how to influence people, both by physical means (regulating temperature, lighting or noise conditions; psychotropic substances or, you know, food; changing the color of a button or playing with the timeline of events; etc.) and psychological means (e.g. using the right words or framing to elicit a certain response or evoke a certain emotion; exploiting properties of the social graph; etc.). This knowledge is, of course, still very imperfect but it is also cumulative.
___b) More and more of our interaction with the external or social world becomes mediated by technology ⇒ the options to intervene in the life of someone multiply as technology becomes a more integral part of life. As a result, it becomes very cheap to make targeted interventions in someone's life. Example: Today, ranking of search results or filtering of news; tomorrow, entire articles machine-written for you (personally). Automated homes. The mind boggles at the possibilities of augmented reality and/or immersive experiences. Farther out: Optogenetics.
___c) Deep and very detailed information about people can be collected in real-time and stored cheaply (no memory decay). The more ingrained the tech, the more detailed the data. For example, real-time monitoring of blood sugar and, in the future, perhaps even stress hormone levels.
___d) Physical distance becomes meaningless.
5) Due to the tendency towards natural monopolies in the sector, all this information and power accrues in very few hands ⇒ strong and unprecedented centralization of both fine-grained knowledge about individuals and the means to intervene in their world without regard to distance or cost.
It is not hard to see that, to indulge in some hyperbole, "mind control" of a large population undermines traditional means of checking power. Who cares about elections if I can control whom people like? Why bother with competitive markets if I can make people want whatever I have to give (and make them pay reservation prices)? No more need for violent suppression of dissent because I can detect and change inconvenient ideas surgically.
To be clear, I am not saying we are already living in a mind-controlled society.
What I am saying is that collecting data (or rather letting someone collect data) about us is an integral part of this scenario. If data became more compartmentalized and limited, say, this whole thing wouldn't work (or would be far less effective).
In fact, because technology and science progress anyway, how we handle our data may be the only way we can influence the course of events in this respect.
At least to my mind, this is the real issue of privacy. Alas, I seem to be alone so far. It's really hard for me to see why this is not totally obvious to everybody. I should finally write that essay I've been meaning to write for the longest time. Then someone can at least attack my argument. Sometimes, in your weaker moments, the fact that nobody seems to see what you see can make you question your own sanity...
_______________________
PS: I am also not claiming that Larry Page or Mark Zuckerberg are rubbing their hands gleefully ("hihihi") at the prospect of world domination. I think concrete persons are incidental to this scenario. Heck, the one who ends up controlling, in this scenario, might not even be human. It just doesn't matter who. If it is technologically possible, it will be done. Loss or neglect of privacy makes it possible.
http://www.nytimes.com/2015/03/08/magazine/the-loser-edit-th...
I do this frequently, and while it can be a bit awkward when dealing with marketing or PR types, as long as you are polite about it things work out. And anyone pestering you with repeated requests for data or an explanation can receive a less polite response.
What you might or might not need to hide cannot be reliably determined in advance. It is not a constant; it is a variable, and you don't get to pick which way it goes. Consider the plight of the gay Russian blogger using LiveJournal, which was later sold to a Russian company.
"Please strip yourself to underpants. Yes, this very moment. We (society) need to make sure you don't have any concealed weapons or explosives. I will then assure these other people that are totally unrelated that you don't pose any threat to them. What? Your privacy? Safety is more important than privacy, right? And you do have nothing to hide, so why do you fear stripping?"
No need. Orwell already wrote it.
I just don't get why you think you are alone.
I would hope I'm not but it sure feels that way.
Maybe I'm not reading the right stuff or talking to the wrong people.
Can you point me to some (contemporary) arguments along the line I propose?
And what if one company does this - what will other companies do? Will they keep the same price for that group as for everyone else? Then that group will leave for the company that's cheaper for them, right? Leaving the other companies with the higher-risk customers, right? So, they will have to pay out more for damages, right? Now, will they just go bankrupt? Or will they increase premiums to cover the costs?
No one says that insurance companies aren't already making inferences from data. It's just that the more data is available to them and the more powerful computers and algorithms get, the better they will be able to model risks. And individual companies won't be able to ignore that, even if they want to. And it's the exact opposite of what insurance is intended to do: it's intended to distribute risk. The more exactly insurance companies are able to model risks, the more insurance will become unaffordable to those who need it, and the cheaper it will become for those who don't.
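To put made-up numbers on that, here is a minimal sketch of the dynamic; the claim costs, group sizes, and switching behaviour are all invented purely for illustration:

    # Purely illustrative: invented claim costs and group sizes.
    # Insurer B charges one pooled premium; a competitor with better data
    # offers low-risk drivers something close to their true expected cost.
    low_cost, high_cost = 200.0, 1800.0   # expected annual claims per driver
    n_low, n_high = 80, 20                # B's initial customer pool

    for year in range(3):
        pooled_premium = (n_low * low_cost + n_high * high_cost) / (n_low + n_high)
        print(f"Year {year}: pooled premium = {pooled_premium:.0f}")
        # Low-risk drivers who can get a cheaper, individually priced policy
        # elsewhere leave; assume half of them switch each year.
        if pooled_premium > low_cost:
            n_low //= 2
    # Output: 520, 733, 1000 -- the pool left behind gets riskier, so the
    # pooled premium drifts toward the high-risk cost.

In the limit of fully individualized pricing, each group pays roughly its own expected cost: risk stops being distributed, which is the "unaffordable for those who need it" point above.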
Personally, I would prefer if car insurers could price discriminate more based on data. I think this would lower rates for me personally, both in the short term because I try to cultivate safe driving habits, and in the long term because it would create an incentive for everybody to try to drive more safely.
I was thinking more about something along the lines of Chomsky, McLuhan, and Orwell updated to the current day. But we'll see…
Well, that very much depends on how you look at it. It might be imaginary in so far as regulation in the US possibly prevents those consequences, which is great. One might also see it as evidence that collection of personal data has risks--and that regulation might be one way to deal with those risks, at least in some cases. After all, this kind of regulation is in effect a prohibition on collecting certain kinds of personal data, even if the collection in itself is permissible, as companies won't collect data when they can't use it for anything anyway.
> Personally, I would prefer if car insurers could price discriminate more based on data. I think this would lower rates for me personally, both in the short term because I try to cultivate safe driving habits, and in the long term because it would create an incentive for everybody to try to drive more safely.
Are you sure that it would? Remember that for the insurer, it doesn't matter whether they calculate your risk (and thus your premium) correctly; what matters is that they aren't worse at calculating risks than the competition (i.e., the competition can't outcompete them on price or leave them with a non-representative sample for their risk model), and that on average their criteria match reality (i.e., they don't take on risks that are actually larger than they can pay for). Even if you in fact do drive more safely than the average driver (as in: at the end of your life, you will have had fewer and less severe accidents), their predictive models might group you into a different category, because the characteristics of your driving behaviour that they use to categorize you are correlated with high-risk drivers. If insurers don't know how to (economically) measure why your driving behaviour is safe, it doesn't matter whether it actually is.
Also, the incentive itself can be a problem, precisely because the risk models employed by insurers tend not to be an exact representation of reality. If you have an incentive structure that does not align with reality, the incentive can end up promoting harmful behaviour. For example, one obvious proxy for safe driving habits could be the absence of sudden deceleration. It's easy to measure, and generally, if you pay attention to traffic and drive with foresight, you will not need to brake suddenly as often as a reckless driver. So it's probably true both that incentivising people not to brake suddenly would, as one consequence, have people driving with more foresight, which should reduce accidents, and that people who don't brake suddenly are generally a lower risk for the insurer than those who do. However, this proxy cannot distinguish whether you braked suddenly because you didn't pay attention, or because someone else didn't pay attention and surprised you. In the latter case, the right thing to do to avoid an accident might be to brake as hard as you can. But that will be seen by your insurer as risky driving behaviour (which, most of the time, it is) that comes with a higher premium, so you have created an incentive for the driver to let an avoidable accident happen. Note that the driver in question won't think about this for an hour before deciding what to do; it's a gut reaction that might well be influenced by having internalized "braking hard costs money".
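A minimal sketch of how such a proxy can misfire; the surcharge scheme and numbers are entirely made up for illustration, no real insurer's pricing is implied:

    # Toy proxy-based premium adjustment (invented rates, for illustration only).
    def premium_from_telemetry(base_premium, hard_brake_events_per_1000km):
        # The proxy: more hard-braking events -> assumed riskier driver.
        surcharge_per_event = 15.0
        return base_premium + surcharge_per_event * hard_brake_events_per_1000km

    # Driver A pays attention and brakes hard twice to avoid other people's
    # mistakes; driver B is inattentive but happens never to brake hard.
    print(premium_from_telemetry(500.0, 2))  # 530.0 -> scored "riskier"
    print(premium_from_telemetry(500.0, 0))  # 500.0 -> scored "safer"
    # The proxy cannot tell *why* the braking happened, so the attentive
    # driver is the one who pays more.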
And also: what if you actually are a really good driver, but you enjoy braking hard? What if you brake hard just for the fun of it, in situations where it's completely harmless? Is it fair that you have to pay higher premiums for that? Incentives that work via proxy measurements of the actual risks tend to force adherence to a standard of behaviour. I find it frightening that insurance companies might get to dictate "safe behaviour" where the specific behaviour is not actually necessary for safety: it just happens to be easily distinguishable from risky behaviour, so behaving differently costs you money, simply because it's difficult to figure out that your behaviour is not actually risky.