If you're using your company's network, then they have every right to monitor all of the activity on it. They're trying to protect trade secrets, future plans, customer data, employee records, etc. from attackers who would use that information to do harm to the company, its customers, and its employees. If you don't want your employer to know what you're doing, then don't use the company computer or company network to do it. And while you may think that you're too tech savvy to fall prey to malware 1) not everyone at your company is, and 2) no amount of savvy will protect you from all malware, especially ones that gain a foothold through an unpatched exploit. And there's also that whole other can of worms: malicious employees.
At least from my view, it's not so much that I don't want my company to know what I'm doing, as that I don't trust their software to securely MITM all of my traffic. This thread doesn't fill me with confidence about the competency of these corporate MITM proxies. And the recent Cloudflare news doesn't help either -- they're effectively the world's largest MITM proxy, and even they couldn't avoid leaking a huge amount of "secure" traffic.
There are surely sectors where it's necessary for a company to MITM all traffic, but I think most companies will do better security-wise by not messing with TLS. It's just too hard to get right.
From a privacy perspective, it doesn't really matter whether the monitoring happens centrally or not.
In the cases where I've seen strict filtering, laptops were forced through VPN connections to HQ, where the gateway then decides which parts of the internal and external networks they are allowed to access.
Most of the solutions I have seen for devices outside the corporate perimeter are some combination of an enforced VPN and an internet-accessible authenticated proxy.
Now, because engineers are so bad at saying 'no' to the people who want SSL MITM, it's apparently become a regulatory requirement. SSL MITM might let you passively surveil your employees' Facebook Messenger conversations, but it still doesn't protect you against a malicious employee who is tech-savvy (or against malware written with SSL MITM proxies in mind). They could just put the information they want to smuggle out of the network into an encrypted .zip. They could even do something creative like using steganography to hide it in family photos that they upload to Facebook. The only real solution to this is to lock down the devices that people access the network on, not the network itself.
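To make the steganography point concrete, here's a minimal sketch (Python, purely illustrative) of the classic least-significant-bit trick, assuming a lossless carrier. A real photo-sharing site recompresses uploads, so a serious tool would need a more robust encoding; this just shows how little code the basic idea takes.

```python
# Hypothetical sketch: LSB steganography over raw pixel bytes.
# A real tool would read/write an actual image format; here the
# "image" is just a bytearray of pixel values.

def embed(pixels: bytearray, secret: bytes) -> bytearray:
    """Hide each bit of `secret` in the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(bits) <= len(out), "carrier too small"
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit   # each pixel byte changes by at most 1
    return out

def extract(pixels: bytes, length: int) -> bytes:
    """Recover `length` bytes from the LSBs of the pixel stream."""
    secret = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        secret.append(byte)
    return bytes(secret)

cover = bytearray(range(256)) * 4            # stand-in for pixel data
stego = embed(cover, b"trade secret")
assert extract(stego, 12) == b"trade secret"
```

The carrier is visually indistinguishable from the original, and nothing about the upload looks encrypted, which is exactly why network-level inspection can't catch it.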
It isn't a question of whether they're allowed to do it, it's a question of whether they should do it.
It's ineffective against insider exfiltration of data unless you're also doing body cavity searches for USB sticks, and if you're at that point then the sensitive network should not be connected to the internet at all.
And it's similarly ineffective against malware because TLS is not the only form of encryption. What difference does it make if someone uploads a file using opaque TLS vs. uploading an opaque encrypted file using MITM'd TLS? Banning encrypted files, even if you could actually detect them, doesn't work because they're often required for regulatory compliance.
It isn't worth the security cost. The situation in the article is bad enough, but consider what happens if the MITM appliance itself gets compromised, when it holds a root private key trusted by all your devices and has read-and-modify access to all the traffic.
If you're using your company's network, then they have every right to monitor all of the activity on it.
This is tantamount to steaming open and resealing the envelopes of all physical mail. Have some god damn ethics, I'd sooner quit than snoop traffic in this manner.
How so?
1. Connect to Corp Wifi
2. git clone companyapp.git
3. Connect to Employee Personal Wifi
4. Email tgz'ed companyapp
?
We opted to disable usb mass storage since cavity searches seemed a little much
I would never trust a company device, or company network, with anything I consider sensitive. Use your own device and keep it on cellular.
Also, though I don't like it, employers in the US do have the right to open mail addressed to you personally if it is delivered to the office.
There's a good argument that it's unethical too. There are many ways where your company has to trust you instead of pervasively monitoring your doings and communications, and this should fall in the same area.
This is missing the point. Someone could plug a SATA drive directly into the motherboard, or otherwise compromise their work computer to disable the restrictions, or take pictures of documents with a camera, or bring their own computer on-site, or bring a line-of-sight wireless network device on-site, or send the data over the internet as an encrypted file or via ssh or using steganography and so on.
The point is that preventing data exfiltration is not a trivial task, and if you're at all serious about it then the network containing the secrets is not connected to the internet. And if it's less serious than that then it can't justify a high-risk TLS MITM device.
Exactly this is what I don't get. Since these abominations are becoming ubiquitous, surely malware writers are starting to work on workarounds? And in this case, it's as easy as setting up an SSH tunnel and running your malware traffic through that, which is at most a few days of work for a massive ROI?
This isn't true. The TLS protocol is not a philosophy; it does not have an opinion on who you should trust as a root certificate authority. If you trust a particular root, it is wholly within the design of TLS to allow connections that are monitored by whoever controls that root authority. Who is trusted is up to you.
>They may or may not mention that if they do it wrong it will degrade the security of everything on the network.
Right, that's why you don't do it wrong. This same argument applies for any monitoring technology, like cameras. An insecure camera system actually helps a would-be intruder by giving them valuable information. So if you install cameras, you'd better do it right.
As for your list of ways such a system could be circumvented, I don't understand the logic of it. So because there are ways around a security measure, you shouldn't use the security measure at all? There is no security panacea, just a wide range of imperfect measures to be deployed based on your threat model and resources. And luckily, most bad guys are pretty incompetent. But to address some examples you give, and show how not all is lost:
- A large encrypted zip file is sent out of the network. Depending on what your concerns are, that could be a red flag and warrant further analysis of that machine's/user's activity.
- Software trying to circumvent your firewall/IDS is definitely a red flag. You might even block such detected attempts by default, and just maintain a whitelist for traffic that should be allowed regardless (e.g. for approved apps that use pinned certificates for updates).
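As a sketch of what flagging a "large encrypted zip file" could actually mean in practice: encrypted and well-compressed data has near-maximal byte entropy, so a monitoring system might score uploads by Shannon entropy. The 7.5 bits/byte threshold below is an assumption for illustration, not a real product's setting.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8.0 for encrypted/compressed data, far lower for text."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Hypothetical threshold: English text sits around 4-5 bits/byte,
    # ciphertext and good compression approach 8.
    return len(data) > 0 and shannon_entropy(data) > threshold

assert looks_encrypted(os.urandom(65536))        # stand-in for ciphertext
assert not looks_encrypted(b"quarterly sales figures attached " * 200)
```

The obvious caveat is that legitimate compressed formats (zip, jpeg, video) trip the same detector, so this produces alerts to investigate rather than connections to block.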
> This isn't true. The TLS protocol is not a philosophy; [...]
Well, the TLS specification [1] says as the first sentence of the introduction:
"The primary goal of the TLS protocol is to provide privacy and data integrity between two communicating applications."
I think, if something is "the primary goal of the TLS protocol", it can be said that TLS is designed to do it.
Yes, if one is determined enough, they will find a way to steal data.
> It isn't worth the security cost.
That's up for the company to decide... and apparently they have decided that it is worth the cost, regardless of what zrm, random person on the Internet, thinks.
I dunno. I know plenty of people who might want to work on an Excel spreadsheet at home over the weekend and so might e-mail it to their personal e-mail account. They would almost certainly reconsider, however, if it required copying that spreadsheet to a flash drive that they then had to hide in their ass.
In that case, it sounds like the device is effective after all.
Suppose you're a college dorm network. Then you can't justify TLS MITM because the risk of your MITM device actively creating a security hole that leads to all the students' bank passwords being stolen is greater than any benefit from centrally monitoring the traffic in that environment.
Suppose you're a highly classified government research lab. Then you can't justify TLS MITM because the bad guys are skilled foreign government agents and you need to isolate the network from the internet.
And there is no happy medium, because the risk and cost of having all your TLS-secured data compromised scales with the target value. The higher the target value, the higher the risk created by the MITM proxy, all the way up to the point that you can justify isolating the network from the internet.
What you do while at work should not be personal, and thus cannot be snooped upon.
If you need to send a personal paper letter, you would go to the post office, not send it using the company's stamps, right?
Like the sibling comment said, this goes against the wording of the TLS specification, but I also think this is looking at the issue from the wrong perspective: from the perspective of the network admin rather than the user. The user does not trust the MITM proxy's fake root. Let's say you set up a corporate network and rather than just whitelisting the external IPs you trust, you give your users the freedom to browse the internet but you pipe everything through a BlueCoat proxy. Your users will take advantage of this freedom to do things like, say, checking their bank balance. When the user connects to the banking website, they will initialize a TLS session, the purpose of which is to keep their communication with their bank confidential. The user will assume their communication is confidential because of the green padlock in their address bar and the bank will assume their communication is confidential because it is happening over TLS. TLS MITM violates these assumptions. If the bank knew that a third party could see the plaintext of the communication, they probably would not allow the connection. If I ran a high-security website, I'd probably look for clues like the X-BlueCoat-Via HTTP header and just drop the connection if I found any.
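A minimal sketch of that server-side check might look like the following. Only X-BlueCoat-Via comes from the scenario above; treating its mere presence as grounds for refusal is the hypothetical policy being illustrated.

```python
# Hypothetical server-side check: refuse requests carrying headers known to be
# injected by interception proxies. The fingerprint list is illustrative.

PROXY_FINGERPRINTS = {"x-bluecoat-via"}

def should_drop(headers: dict) -> bool:
    """Return True if any known interception-proxy header is present."""
    return any(name.lower() in PROXY_FINGERPRINTS for name in headers)

assert should_drop({"Host": "bank.example", "X-BlueCoat-Via": "abc123"})
assert not should_drop({"Host": "bank.example", "User-Agent": "Mozilla/5.0"})
```

Of course, a proxy that strips its own headers defeats this, so it catches only the default-configured appliances, not a determined interceptor.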
> As for your list of ways such a system could be circumvented, I don't understand the logic of it. So because there are ways around a security measure, you shouldn't use the security measure at all?
In some cases, yeah. There are a lot of security measures out there that are just implemented to tick some boxes and don't provide much practical value. If they don't provide much value, but they actively interfere with real security measures (for example, by delaying the rollout of TLS 1.3) or they're just another point of failure and additional attack surface (bad proxies can leak confidential data, cf. Cloudflare), they should be removed. I'll admit most bad guys are incompetent, but it's dangerous to assume they all are, because that gives the competent ones a lot of power, and someone who is competent enough to know that a network uses a TLS MITM proxy will just add another layer of encryption. (Or, like some other comments are suggesting, they'll just test your physical security instead and try to take the data out on a flash drive.)
Or plain HTTP POSTs with encrypted content. If the proxy rejects stuff that looks encrypted, encode the binary files against a suitably sized list of words and post them as nonsensical rants to a suitable user-created sub-reddit.
Or e-mails made using the same mechanism.
If you want low-latency two-way communication, doing this can be a bit hard, but you basically have no way of stopping even a generic way of passing data out unless you whitelist only a tiny set of trusted sites and reject all other network traffic (such as DNS lookups). And keep in mind you can't just lock down client traffic out of the network - you would also need to lock down your servers and filter things like DNS - a DNS-based approach will work even through intermediary recursive resolvers (malware-infected desktop => trusted corporate recursive resolver => internet), unless they filter out requests for domains they don't trust.
But basically, if you allow data out, it's almost trivial to find ways to pass data out unless the channel is extremely locked down.
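The "nonsensical rants" encoding mentioned above is only a few lines of code. A sketch, with a generated stand-in vocabulary (a real tool would use actual dictionary words so the output reads like text):

```python
# Hypothetical sketch: encode arbitrary bytes as a sequence of words, one word
# per byte, so the payload passes filters that block binary or high-entropy
# content. The 256-word vocabulary here is generated, not a real wordlist.

WORDS = [f"word{i:03d}" for i in range(256)]   # stand-in vocabulary
INDEX = {w: i for i, w in enumerate(WORDS)}

def encode(data: bytes) -> str:
    return " ".join(WORDS[b] for b in data)

def decode(text: str) -> bytes:
    return bytes(INDEX[w] for w in text.split())

blob = b"\x00\xffsecret"
assert decode(encode(blob)) == blob
```

The resulting "rant" is low-entropy plain text, so it sails past any filter that only looks for encrypted-looking content.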
For those who don't know, there are even full IP proxies that use DNS [1], but you can hack up a primitive one with a shell script by setting up a nameserver for a domain, turning on full query logging, and using a shell script that splits your file up, encodes it into valid DNS labels, and requests [some encoded segment].[yourdomain]. Now your file will be sitting in pieces in your DNS query log, and all you need is a simple script to reassemble it.
Best of all is that it works even if it passes through intermediary DNS servers, such as a corporate proxy, unless it's heavily filtered (e.g. whitelisting domains) or too rate limited to be useful.
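The encoding half of that hack can be sketched without sending any actual DNS traffic (here in Python rather than shell; exfil.example.com stands in for the attacker-controlled domain). This only builds the names a script would look up; reassembly just reverses the base32 chunking from the query log.

```python
import base64

# Sketch of the DNS-exfiltration encoding described above. No queries are sent;
# this only constructs query names. DNS labels are limited to 63 characters,
# so each chunk of 30 bytes becomes 48 base32 characters, safely under that.

DOMAIN = "exfil.example.com"
CHUNK = 30

def to_queries(data: bytes) -> list:
    names = []
    for seq, i in enumerate(range(0, len(data), CHUNK)):
        label = base64.b32encode(data[i:i + CHUNK]).decode().rstrip("=").lower()
        names.append(f"{seq}-{label}.{DOMAIN}")   # seq prefix preserves order
    return names

def from_queries(names: list) -> bytes:
    out = b""
    for name in sorted(names, key=lambda n: int(n.split("-", 1)[0])):
        label = name.split("-", 1)[1].split(".", 1)[0].upper()
        label += "=" * (-len(label) % 8)          # restore base32 padding
        out += base64.b32decode(label)
    return out

data = b"the secret file contents" * 10
assert from_queries(to_queries(data)) == data
```

On the receiving side, the nameserver's query log contains these names in pieces, and a script sorts by the sequence prefix and base32-decodes, exactly as the comment describes.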
I don't think so. Since when is it legal for anyone to circumvent encryption systems?
Is it legal for your ISP to do this on "their network"? Actually, I bet you think that's OK too.
(Note: I would also never trust a company device or company network, and I keep my personal devices completely separate from the company network for this reason. But I consider this a workaround for a deplorable situation, rather than just the way things are.)
This FindLaw article http://employment.findlaw.com/workplace-privacy/privacy-in-t... agrees that employers have a right to monitor communications from their devices on their networks, especially when this policy has been clearly laid out and agreed to by employees. Expectation of privacy is a major deciding factor in US law.
I'm not sure of the legality of an ISP doing this. I would hope it's illegal, but ISPs are weirdly regulated compared to, say, phone companies.
I would argue against such an approach if there are alternatives, but if the organization's leaders were set on it, I would engage with the process and make sure that it did not evolve into more unethical practices, such as logging all traffic contents or the above banking example.