In this case, it doesn't sound like they're reverting it because of overall breakage, but rather because it breaks the tool that would otherwise be used to control TLS 1.3 trials and other configuration. Firefox had a similar issue, where they temporarily used more conservative settings for their updater than for the browser itself, to ensure that people could always obtain updates that might improve the situation.
Good grief! From David Benjamin's final comment:
> Note these issues are always bugs in the middlebox products. TLS version negotiation is backwards compatible, so a correctly-implemented TLS-terminating proxy should not require changes to work in a TLS-1.3-capable ecosystem. It can simply speak TLS 1.2 at both the client <-> proxy and proxy <-> server TLS connections. That these products broke is an indication of defects in their TLS implementations.
It's understandable that I've never heard of BlueCoat: clearly this product's success is based more on selling to executives than on quality, and it has been some time since I worked in an organization that had executives to sell to.
It sounds like a perfectly reasonable behaviour if the goal is to "fail closed", to provide more security in a fashion similar to a whitelist.
If it sees that it's TLS, it should attempt a protocol downgrade.
I don't remember the exact details but I recall reading that TLS has a mechanism to prevent version downgrades, precisely to defend against such "attacks", so the connection would not succeed in that case either.
The way it is supposed to work is as follows: there is a protocol negotiation when the connection is established (which is obviously unencrypted), which contains the TLS versions supported. If a MITM proxy does not understand a version, it can just change these bytes to force the hosts to negotiate at a lower version.
So the only reason BlueCoat fails is that the authors failed to implement a forced version downgrade.
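A minimal sketch of the difference between correct version negotiation and the version-intolerant behaviour being described (illustrative Python, not any vendor's actual code):

```python
# Version numbers are the TLS wire values: 0x0303 = TLS 1.2, 0x0304 = TLS 1.3.
TLS_1_2 = 0x0303
TLS_1_3 = 0x0304

def negotiate(client_max, server_supported):
    """Correct behaviour: pick the highest version both sides support."""
    candidates = [v for v in server_supported if v <= client_max]
    if not candidates:
        raise ConnectionError("no common TLS version")
    return max(candidates)

def intolerant_negotiate(client_max, server_supported):
    """Buggy 'version intolerant' behaviour: abort as soon as the peer
    advertises a version newer than anything we recognise."""
    if client_max > max(server_supported):
        raise ConnectionError("unknown version, dropping connection")
    return negotiate(client_max, server_supported)

# A proxy that only knows TLS 1.2 should still interoperate with a
# TLS-1.3-capable client by simply agreeing on 1.2:
assert negotiate(TLS_1_3, {TLS_1_2}) == TLS_1_2
```

The intolerant variant fails exactly where the correct one succeeds: given a 1.3-capable peer and a 1.2-only implementation, it drops the connection instead of settling on 1.2.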
This reminds me of firewalls that weaken security by filtering unrecognized HTTP headers: https://news.ycombinator.com/item?id=12655180
If merely advertising 1.3 while still advertising older versions causes blue coat to break, it has a bug in TLS version negotiation.
There is no downgrade, whitelist, or failing closed here. Each end says what it supports, and BlueCoat blows up the connection if it sees that the other end supports a newer version. It should say "oh, we both support 1.2, let's use that." And apparently it's done this before, so there's even less of an excuse for it.
[1] https://en.wikipedia.org/wiki/Blue_Coat_Systems#Use_by_repre...
Quite a pain to work in such environments.
In the case of a security appliance -- such as this -- it should, in my opinion, "fail closed".
https://jhalderm.com/pub/papers/interception-ndss17.pdf
How do you fix this when you're naught but a humble employee? Well, a friend of mine worked at a fairly large tech company where a salesguy for these boxes had convinced the CTO they had to have them. Every tech person "on the floor" hated the idea, so before the boxes were installed they conspired in their free time to write some scripts that ran lots of legitimate HTTPS traffic, effectively DDoSing the boxes and bringing the company's internet to a crawl for the day; Google would take ten seconds to open. Then, obviously, everyone (including the non-tech people) started calling the IT helpdesk complaining that the internet was broken. The MITM box salesguy then had to come up with a revised solution, costing 20x more than his first offer, and that was the end of that.
If you already are suffering under MITM boxes, a similar strategy with a slow ramp-up in traffic might work.
The RFC (which if you're implementing TLS, you should have open at all times) explicitly calls out exactly this behavior:
> Note: some server implementations are known to implement version negotiation incorrectly. For example, there are buggy TLS 1.0 servers that simply close the connection when the client offers a version newer than TLS 1.0.
The quality of this vendor's implementation is extremely suspect.
Even protocol state (equivalents of TCP FIN/SYN/etc) is encrypted, to ensure that middleboxes don't get ideas about what the protocol is 'supposed' to do - ideas which make it hard to change the protocol in the future.
This isn't "failing closed", and this isn't a whitelist. TLS allows you to whitelist certain versions of the protocol during the initial negotiation at the start of the connection; that is the opportunity for either end to state which version of the protocol it would like. It is not permissible in the protocol to close the connection as Blue Coat is doing.
This isn't a downgrade attack, either: both server and client are free to choose their protocol version at the beginning. The client & server will later verify that the actual protocol in use is the one they intended; this is what prevents downgrades.
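The downgrade protection mentioned here can be sketched roughly: both ends MAC the handshake transcript as each actually saw it, including the version lists as originally sent, so a middlebox that rewrites the advertised versions causes a mismatch and the handshake fails. A toy illustration of the idea, not the real TLS key schedule:

```python
import hashlib
import hmac

def finished_mac(key: bytes, transcript: bytes) -> bytes:
    # Toy stand-in for the TLS Finished message: a MAC over every
    # handshake byte as one side actually sent or received it.
    return hmac.new(key, transcript, hashlib.sha256).digest()

key = b"session-key-derived-from-handshake"

client_hello = b"versions: 1.3 1.2"   # what the client really sent
tampered_hello = b"versions: 1.2"     # what a meddling proxy forwarded

# Client MACs what it sent; server MACs what it received. If a middlebox
# stripped TLS 1.3 from the list, the two MACs disagree, the handshake
# aborts, and the forced downgrade is detected.
assert finished_mac(key, client_hello) != finished_mac(key, tampered_hello)
```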
That's the simplest explanation, though, so that's probably what happened. Oh well.
In corporate environments, the last thing that changes is the thing that gets blamed. BlueCoat was not upgraded, Chrome was, and now things are broken? Not their fault.
It then simply inspects a connection it doesn't understand and 'fails closed' by preventing that connection.
Which holds trusted secret keys and which, in its normal unremarkable operation, intercepts, parses, reconstructs, decrypts, re-encrypts, forwards, and optionally logs both confidential and attacker-controlled traffic? And is also known to be used for nationwide bulk internet censorship by regimes often called 'oppressive'?
Why, doesn't it just.
Please consider, very carefully, the ethics and equities issues one might face with any interesting findings here.
This isn't just a fireable offense. Especially given the tendency for computer-related criminal laws to be overly vague, it's entirely possible you could be charged with a crime if you are intentionally trying to DoS your employer's network.
TBH, for most techies I don't think opposition to MITM boxes comes down to "I don't want them to catch me looking at cat photos" but more along the lines of "this will actually reduce security as much as it improves it, and the companies providing these products are also aiding repressive regimes and human rights violations across the globe". Personally, I would find it unethical for the company I work for to buy these products.
Incidentally, "Blue Coat ProxySG 6642" was the only middlebox to get an "A" from the study referenced above. Apparently they didn't test for 1.3...
There are hundreds of thousands of organizations that need inspection and caching and proxying of internal www traffic. Designing all protocols to disallow or frustrate this disregards the real needs of users and organizations.
Further still, if a protocol can't be designed to be implemented easily, or to tolerate implementation bugs and missing features, it's a crap protocol or application. Middleware will always be necessary, and encryption really shouldn't change the requirements of how middleware needs to work with a protocol.
In truth though, if you start treating your employees like the enemy, it's just a never-ending uphill battle, especially if your employees are comp-sci folks. You could tunnel SSH over HTTP or even DNS if you cared enough.
Then in firefox (or other), the socks proxy is on localhost port 4242.
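The elided setup is presumably OpenSSH's dynamic port forwarding; a sketch, assuming a reachable machine outside the corporate network (`user@outside.example` is a placeholder):

```shell
# Open a local SOCKS proxy on port 4242; everything sent to it is
# tunnelled through the SSH connection. -N skips running a remote shell.
ssh -N -D 4242 user@outside.example
```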
Then leave the company in protest or convince it not to buy them. DDoSing the company's network is somehow not unethical, I guess?
My point is that actually helping this particular vendor, for example, may not be everyone's cup of tea.
It's pretty entertaining to read these Stack Overflow questions about using SSL from 7 years ago: http://stackoverflow.com/questions/2177159/should-all-sites-...
Similarly, good security people know that port filtering is a losing game unless you are willing to restrict everything to a known-safe whitelist – the malware authors do work full-time on tunneling techniques, after all – and may be focusing their efforts on endpoint protection or better isolation between users/groups.
I need my personal email to do my work. It needs to stay secure from even my own employer. Period.
IOW, it's completely fair to argue that users might not have a universal right to encryption, but it's just as legitimate to argue that browser vendors have no obligation to enable the trivial circumvention of encryption. If the software doesn't work for your needs, then stop using the software.
The middleware should require effort to install, and it should be obvious when it is active. Otherwise, companies which have no business MITM'ing the traffic -- such as ISPs and free wifi providers -- will start to do it just because it's so simple.
For example, Google may require an MDM app on Android devices used to access corporate data. This app ensures that the device has the right policy (screen lock, encryption), and I think it may also check for malware apps. This is how it should be: if you need corporate control over devices, install a special application on them; it will be more efficient and it will do more.
While Chrome previously implemented a "TLS fallback" to work around buggy endpoints*, that was a much larger issue and difficult to fix, whereas these middlebox issues affect a much smaller portion of users, and we're hopeful that the affected middlebox vendors can fix their software in a more timely manner.
* TLS 1.3 moves the version negotiation into an extension, which means that old buggy servers will only ever know about TLS 1.2 and below for negotiation purposes and won't break in a new manner with TLS 1.3.
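The extension-based negotiation can be sketched like this (a simplified model of the idea, not real wire parsing; the function and field names are illustrative):

```python
TLS_1_2 = 0x0303
TLS_1_3 = 0x0304

def server_select(legacy_version, extensions, server_versions):
    """Simplified model of TLS 1.3's extension-based version selection.

    A TLS 1.3 ClientHello pins its legacy version field at TLS 1.2 and
    advertises 1.3 only inside a supported_versions extension, which
    pre-1.3 servers skip over as an unknown extension.
    """
    offered = extensions.get("supported_versions")
    if offered is not None and TLS_1_3 in server_versions:
        common = [v for v in offered if v in server_versions]
        if common:
            return max(common)
    # Old server: it never sees anything newer than the legacy field,
    # so it can't choke on an unfamiliar version number.
    return min(legacy_version, max(server_versions))

hello_ext = {"supported_versions": [TLS_1_3, TLS_1_2]}

# A TLS-1.2-only server negotiates 1.2 from a 1.3-capable hello:
assert server_select(TLS_1_2, hello_ext, {TLS_1_2}) == TLS_1_2
# A 1.3-capable server finds 1.3 inside the extension:
assert server_select(TLS_1_2, hello_ext, {TLS_1_3, TLS_1_2}) == TLS_1_3
```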
Nobody made that argument. But browser makers have an obligation to keep the world wide web usable. If it's not usable, say goodbye to dot com companies selling services to businesses, which aside from advertising revenue (and the hopes and dreams of venture capitalists) is the only way they survive.
The only reasonable alternative if you start locking out legitimate business use cases of traffic inspection is to abandon the web and start making proprietary native applications and protocols like back in the old days. This is bad for users and bad for business.
It's not like it's even hard to support these use cases while maintaining user security! Browsers just totally suck at interfacing with a dynamic user role. Better UX and a more flexible protocol would solve this, but nobody wants to make browsers easier to use (more the opposite)
Collective action (strikes, "work slowly protests" etc.) as a protest against company policy has a long precedent of a) being protected by law and b) being much more effective than a single employee quitting, while simultaneously reducing the downside for employees (in L_\infty norm).
Edit: the old Keynes quote comes to mind: "if you owe the bank $100 you have a problem, but if you owe the bank $100 million the bank has a problem" -- if 1 of the company's devs commits a "fireable offense", he/she has a problem, but if 100 of them do, the company has a problem.
In any case you are left with no SSH, or with somebody watching your SSH and having control over your ability to tunnel.
The best you can do with these boxes is make a sub tunnel over one of the protocols that they do allow through, you just can't rely on the primary encryption provided by the protocol that the middle box is executing MITM on. If somebody actually looks at the traffic they will see that you are not transferring plain text at the middle box, so that might raise some eyebrows.
Everything you listed is information that the company already has access to. Why isn't it sufficient for there to be access controls by policy, the same way the company protects other sensitive information from unauthorized access within the company?
For instance if your policies are too restrictive people will use their smartphones more and more to access the internet. Then some will start doing work stuff on their smartphones and you lose all control. What do you do then? Forbid smartphones within the company? Fire everybody you catch using one? It's just an arms race at this point.
Sane security measures and some pedagogy go a long way. Easier said than done though, it's a tough compromise to make.
As a regular user, I can't just use a captive portal to get free wifi, because any site I go to has HTTPS, so they all break and I can't accept the god damn HTTP accept page unless I can conjure up a valid domain that has no HTTPS like I'm Svengali. Now all the OSes have special checks to see if there's a captive portal because the browsers couldn't be troubled to build a function for it, even though it would improve their security and usability at the same time.
Captive portals are not the enemy. Shitty UX and a bad attitude toward the needs of real users is. Locking browsers/protocols down more is just doubling down on this mentality.
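The OS-level check mentioned above usually amounts to fetching a fixed probe URL whose response is known in advance; anything else coming back means the network intercepted it. A sketch of the logic (expected body and status handling are illustrative, modelled on Android-style connectivity checks):

```python
# A transparent network returns the probe unmodified; a captive portal
# typically answers with a redirect or its own HTML login page.
EXPECTED_BODY = b"success"

def looks_captive(status: int, body: bytes) -> bool:
    """Decide, from a plain-HTTP probe response, whether we are
    behind a captive portal."""
    return not (status == 200 and body.strip() == EXPECTED_BODY)

assert looks_captive(302, b"") is True                  # redirected to login
assert looks_captive(200, b"<html>Sign in</html>") is True
assert looks_captive(200, b"success") is False          # real internet
```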
Then continue to use the encryption that exists today. After all, your concern is for future standards that make encryption stronger.
> And fork a browser? Are you nuts?
A lone user forking a browser would be nuts. A company that's already willing to pay through the nose for MITM proxies can afford to fund a minor browser fork. Indeed, if this use case is as important as you suspect, then you ought to start a company that sells customized browsers for exactly this purpose. Think about what site you're on; where's your entrepreneurial spirit? :)
> But browser makers have an obligation to keep the world wide web usable.
Usable for whom? Between users (who need strong encryption), websites (who need strong encryption), and corporate intranets (who need to snoop), whose needs ought to be prioritized?
> abandon the web and start making proprietary native applications
The web emerged from a world where all applications were native and proprietary, I don't think any browser vendor is losing sleep over this possibility.
> Browsers just totally suck at interfacing with a dynamic user role.
Again, sounds like there's demand for a new browser then. :)
> nobody wants to make browsers easier to use (more the opposite)
Why is that?
Every non-Microsoft browser vendor used to cry themselves to sleep at night from days fighting against vendor lock-ins and corruption of standards. They certainly care if it all goes south.
I suppose people don't want easier browsers because they imagine they are easy enough and can't imagine something better. At least I hope that's the reason, and not that they fear change, or are indifferent to the needs of people other than themselves and prefer to design for that alone.
There's no way in hell I'm crazy enough to make a browser, though. I'd rather run for elected office, or eat an entire Volkswagen Golf.
Were you writing a forum post? It was lost because it got submitted to the captive portal. Were you in a dynamic app? It crashed because it got HTML instead of JSON. Does your page reload ads in iframes? Your page position, and the page itself, are now lost, and you're on the captive portal page. Did you have a native app that did HTTP requests and cached them? Congrats, you have invalid data in your cache now.
And I have seen captive portals which were broken. How about you get redirected to login page every time you go to your favorite website, because the redirect page got cached somehow?
Good riddance. Yes, browsers should include better captive portal support like Android does, possibly triggered by SSL certificate mismatch errors, but even the current situation, where I have to type "aaa.com" by hand all the time, is great.
While for TLS, client certificates are unfortunately not a solution against MITM due to their awful user experience and privacy concerns, for SSH, public key authentication has a good user experience and is very common.
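For SSH, the part that actually detects a MITM box is the client pinning each server's host key and refusing to proceed if it ever changes. A toy sketch of that trust-on-first-use check (real clients store keys in ~/.ssh/known_hosts; this in-memory dict is just for illustration):

```python
import hashlib

# Toy known_hosts store: hostname -> pinned key fingerprint.
known_hosts: dict = {}

def fingerprint(host_key: bytes) -> str:
    return hashlib.sha256(host_key).hexdigest()

def check_host(host: str, presented_key: bytes) -> bool:
    """Trust-on-first-use: pin the key on first contact, then demand
    the same key on every later connection."""
    fp = fingerprint(presented_key)
    if host not in known_hosts:
        known_hosts[host] = fp      # first contact: trust and pin
        return True
    return known_hosts[host] == fp  # later: must match the pin

assert check_host("server.example", b"real-key")      # first contact, pinned
assert check_host("server.example", b"real-key")      # same key, OK
assert not check_host("server.example", b"mitm-key")  # key changed: MITM alarm
```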