zlacker

[return to "Why privacy is important, and having “nothing to hide” is irrelevant"]
1. tobbyb+Bl 2016-01-06 07:41:06
>>syness+(OP)
I think the tech crowd is in denial about their role in surveillance.

We expect professionals to behave ethically. Doctors, and companies working on genetics and cloning, for instance, are expected to behave ethically and have constraints placed on their work, with consequences for those who behave unethically.

Yet we have millions of software engineers building a surveillance society with no sense of ethics, constraints, or consequences.

What we have instead are anachronistic discussions of things like privacy, oddly disconnected from 300 years of accumulated wisdom on surveillance, privacy, free speech, and liberty. These discussions pretend the obvious is not obvious and defer the need for ethical behavior and introspection. And this from a group that has routinely postured with extreme zeal for freedom and liberty since the early 90s, yet has produced exactly one Snowden.

That's a pretty bad record by any standard, and it indicates an urgent need for self-reflection, industry bodies, standards, whistleblower protection, and a wider discussion that inserts context, ethics, and history into the debate.

The point about privacy is not you; no one cares what you, individually, are doing, so an individual perspective here has little value. The point is building the infrastructure and the ability to track what everyone in a society is doing, and to preempt any threat to entrenched interests and the status quo. An individual may not need or value privacy, but a healthy society definitely needs it.

2. jerf+oG 2016-01-06 13:57:29
>>tobbyb+Bl
"Yet we have millions of software engineers working on building a surveillance society with no sense of ethics, constraints or consequences."

No, we don't.

We have probably a few hundred doing hard-core surveillance. We have another few thousand functioning as enablers by making social media and ad networks really attractive. We have a whole lot of non-engineers insisting on placing ads and tracking on their websites.

And then there's the great bulk of software engineers who have nothing to do with it, and nothing they do will stop it.

If 50% of doctors decided to stop doing something, it would get noticed. If 99% of software engineers took enormously strong stands against surveillance, even at great personal cost, surveillance would continue as if nothing had happened, except that those who work on it might get paid a bit more to make up for the decreased supply.

It may, in that weird 20th/21st-century fashionable-self-loathing way, feel really good to blame the group you're a part of, but basically what you're proposing won't do anything at all. You're imputing to "software engineers" in general abilities they don't collectively have. You have to attack this at the demand level; you will never be able to control the supply. This also matters because if you waste your energy on the supply-side approach, you might decide you've done something about the problem and stop trying, when in fact you've done nothing.

3. kordle+MT 2016-01-06 16:16:31
>>jerf+oG
Every time one of millions of developers commits code to a centralized service, they have a hand in exposing individuals' data to the surveillance society we live in today. Exposure can come from data being publicly available on the site itself, being aggregated, however obfuscated, through the service's APIs, leaking through security holes in the implementation of those services, being stolen by hackers looking for high-value targets holding private data, or being misappropriated by the company itself or its upstream providers. The idea that customers can somehow understand the scope of their exposure to privacy violations is laughable.

One thing that became apparent to me when I began to focus on this issue is that there are countless services providing services to other services, all of which have some degree of access to upstream customer data. For example, if you log to a hosted logging service, some of your customers' data is sent to it. If that service uses AWS, that data is also sent to Amazon. And so on.
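A minimal sketch of that fan-out, in Python. The handler class, endpoint, and payload shape are all hypothetical; the point is only that one log call can ship customer data to a vendor and, transitively, to the vendor's cloud provider:

    import logging
    import requests  # assumption: the vendor ingests logs over HTTPS

    class HostedLogHandler(logging.Handler):
        """Hypothetical handler for a hosted logging service."""
        def emit(self, record):
            # The formatted line, PII included, leaves your infrastructure here.
            # If the vendor runs its ingest API on AWS, Amazon sees it too.
            requests.post("https://ingest.example-logs.invalid/v1/events",
                          json={"line": self.format(record)})

    log = logging.getLogger("app")
    log.addHandler(HostedLogHandler())

    # One innocuous-looking line now exposes an email address to the
    # logging vendor and, transitively, to its upstream providers.
    log.error("payment failed for alice@example.com")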

http://www.stackgeek.com/blog/kordless/post/a-code-of-trust

Arguing that efforts to make things better are pointless is a very dangerous thing to do, assuming we actually want things to be better. Cognitive dissonance is a powerful force, especially when there are startups to be built!

4. TheDon+xV 2016-01-06 16:32:08
>>kordle+MT
That is not what's typically meant by surveillance.

Sure, it turns out that using centralized web services has helped the government with things such as PRISM, but that doesn't mean we should blame the people behind those development practices rather than the government.

Prior to PRISM, pretty much any reasonable person would have assumed that the blobs you store in S3 weren't going to be looked at by anyone or, worst case, that metadata would be seen by AWS employees debugging something.

What we have done is make things a ton better for developers; we can build things more quickly and easily, which empowers society and humanity. The fact that this has incidentally contributed to a surveillance society, through no intent of the developers and in a way no one would reasonably have expected, does not make the developers culpable.

It is true that customer data is entrusted to a lot of services-of-services nowadays, but do you want to go back to the stone age, where the only people who can store anything must run their own hardware with their own databases and so on?

The right thing to do here is to call for better use of encryption where possible and, for surveillance issues, to rein in the unreasonable government programs that make this practice result in such problems.
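For the encryption half of that, a minimal sketch of client-side "encrypt before you upload" storage, so the provider only ever holds ciphertext. The bucket and object names are placeholders, and key management is hand-waved:

    import boto3
    from cryptography.fernet import Fernet

    # The key never leaves your side; storing it safely is the hard part.
    key = Fernet.generate_key()
    f = Fernet(key)

    ciphertext = f.encrypt(b"customer record the provider has no need to read")

    # S3 (and anyone compelling S3) sees only an opaque blob.
    s3 = boto3.client("s3")
    s3.put_object(Bucket="my-app-data", Key="records/42", Body=ciphertext)

    # Round trip: fetch and decrypt locally.
    blob = s3.get_object(Bucket="my-app-data", Key="records/42")["Body"].read()
    assert f.decrypt(blob) == b"customer record the provider has no need to read"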

5. kordle+442 2016-01-07 03:31:10
>>TheDon+xV
> It is true that customer data is trusted with a lot of services-of-services nowadays, but do you want to go back to the stone age where the only people who can store anything must run their own hardware with their own databases and so on?

This statement is logically in conflict with itself. It argues that diminished trust in how data is handled can be rationalized by the savings in time and cost of running the application's infrastructure. The conflict arises when you start assuming the data's trust requirements are acceptable for a given customer. The fact is, you can't speak for my trust requirements, which is exactly what is being discussed in the link.

I get to say what trust levels I want for my software and data. Not being able to use the software because I can't trust it is an unacceptable proposition, so I challenge us to build something better than what we have today, and to do so without rationalizing why we aren't building it.
