I'd like to see a formal container security grade that works like:
1) Curate a list of all known (container) exploits
2) Run each exploit in environments of increasing security: permissions-based, jail, Docker, and a full emulator
3) The percentage of exploits prevented would be the environment's score, from 0 to 100% (a rough scoring sketch is below)
Under this scheme, I'd expect naive attempts at containerization with permissions and jails to score around 0%, while Docker might be above 50% and Microsandbox could potentially reach 100%. This might satisfy some of our intuition around questions like "why not just use a jail?". The containers could also run on a site on the open web as honeypots, with cash or crypto prizes for pwning them, to "prove" which containers achieve 100%.
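To make the grading concrete, here's a minimal scoring sketch; the environments, exploit names, and pass/fail results are made-up placeholders, not measurements:

    # Hypothetical results: True = the exploit was prevented in that environment.
    results = {
        "permissions-only": {"exploit-A": False, "exploit-B": False, "exploit-C": False},
        "jail":             {"exploit-A": False, "exploit-B": True,  "exploit-C": False},
        "docker":           {"exploit-A": True,  "exploit-B": True,  "exploit-C": False},
        "emulator":         {"exploit-A": True,  "exploit-B": True,  "exploit-C": True},
    }

    def score(prevented):
        """Grade = percentage of curated exploits the environment prevented."""
        return 100.0 * sum(prevented.values()) / len(prevented)

    for env, prevented in results.items():
        print(f"{env:>16}: {score(prevented):5.1f}%")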
We might also need to redefine what "secure" means, since attacks like Rowhammer and Spectre arguably make nearly all conventional and cloud computing insecure. Or maybe it's a moving target, like how 64-bit encryption might once have been considered secure but now we need 128-bit or higher.
Edit: the motivation behind this would be to find a container that's 100% secure without emulation, for the performance and cost-savings benefits, as well as to gain insight into how to secure operating systems by containerizing their various services.
The only way to make Linux containers a meaningful sandbox is to drastically restrict the syscall API surface available to the sandboxee, which quickly reduces their value. They're no longer a "generic platform that you can throw any workload onto" but a bespoke thing that has to be tuned and reconfigured for every use case.
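To illustrate how bespoke that restriction gets in practice, here is a sketch of a default-deny syscall filter, assuming the libseccomp Python bindings (the seccomp module) are installed; the allow-list below is an arbitrary placeholder and would have to be re-derived for every workload:

    import seccomp

    # Default-deny: any syscall not explicitly allowed fails with EPERM (errno 1).
    # (seccomp.KILL would be stricter; ERRNO keeps the demo observable.)
    f = seccomp.SyscallFilter(defaction=seccomp.ERRNO(1))

    # A deliberately tiny, workload-specific allow-list (placeholder set).
    for call in ("read", "write", "brk", "mmap", "munmap",
                 "exit", "exit_group", "rt_sigreturn"):
        f.add_rule(seccomp.ALLOW, call)

    f.load()
    # From here on, anything that needs an unlisted syscall fails,
    # which is exactly the per-use-case tuning problem described above.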
This is why you need virtualization. Until we have a properly hardened and memory-safe OS, it's the only way. And even if we do build such an OS, it's unclear to me whether it will be faster than running MicroVMs on a Linux host.
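For a rough sense of what the MicroVM route involves, here's a sketch that configures and boots one guest through Firecracker's API socket; the socket path, kernel image, and rootfs file are placeholders, and it assumes a firecracker process was started separately with --api-sock pointing at that path:

    import http.client, json, socket

    API_SOCK = "/tmp/firecracker.socket"   # placeholder; matches --api-sock

    class UnixConn(http.client.HTTPConnection):
        """HTTP over the Firecracker API unix socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    def put(resource, body):
        conn = UnixConn(API_SOCK)
        conn.request("PUT", resource, json.dumps(body),
                     {"Content-Type": "application/json"})
        resp = conn.getresponse()
        assert resp.status in (200, 204), resp.read()
        conn.close()

    # Minimal guest: 1 vCPU, 128 MiB, a kernel, and a read-only rootfs.
    put("/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
    put("/boot-source", {"kernel_image_path": "vmlinux",        # placeholder
                         "boot_args": "console=ttyS0 reboot=k panic=1"})
    put("/drives/rootfs", {"drive_id": "rootfs",
                           "path_on_host": "rootfs.ext4",       # placeholder
                           "is_root_device": True, "is_read_only": True})
    put("/actions", {"action_type": "InstanceStart"})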
The only meaningful difference is that Linux containers aim to partition Linux kernel services, a shared-by-default/default-allow environment that was never designed for, and has never achieved, meaningful security. The number of vulnerabilities resulting from "whoopsie, we forgot to partition shared service 123" would be hilarious if it weren't a complete lapse of security engineering in a product people are convinced is adequate for security-critical applications.
Present a vulnerability assessment demonstrating that a team of 10 with 3 years' time (~$10-30M, comparable to many commercially motivated single-victim attacks these days) can find no vulnerabilities in your deployment, or a formal proof of security and correctness. Otherwise we should stick with the default assumption that software is easily hacked, rather than the extraordinary claim that demands extraordinary evidence.
There are VMMs (e.g. pKVM in upstream Linux) with a small SLoC count that are isolated by silicon support for nested virtualization. This can be found on recent Google Pixel phones/tablets, with strong isolation of the untrusted Debian Arm Linux "Terminal" VM.
A similar architecture was shipped a decade ago by Bromium and is now on millions of HP business laptops, including hypervisor isolation of firmware: "Hypervisor Security : Lessons Learned — Ian Pratt, Bromium — Platform Security Summit 2018", https://www.youtube.com/watch?v=bNVe2y34dnM
Christian Slater, HP cybersecurity ("Wolf") edutainment on nested virt hypervisor in printers, https://www.youtube.com/watch?v=DjMSq3n3Gqs
Is there any guarantee that this "silicon support" is any safer than the software? Once we break the software abstraction down far enough it's all just configuring hardware. Conversely, once you start baking significant complexity into hardware (such as strong security boundaries) it would seem like hardware would be subject to exactly the same bugs as software would, except it will be hard to update of course.
Safety and security claims are only meaningful in the context of threat models. As described in the Xen/uXen/AX video and the pKVM and AWS Nitro security talks, one goal is to reduce the size, function and complexity of open-source code running at the highest processor privilege levels [1], minimizing dependence on closed firmware/SMM/TrustZone. Nitro moved some functions (e.g. I/O virtualization) to separate processors, e.g. SmartNIC/DPU. Apple used an Arm T2 secure enclave processor for encryption and some I/O paths when their main processor was still x86. OCP Caliptra RoT requires OSS firmware signed by both the OEM and the hyperscaler customer. It's a never-ending process of reducing attack surface, prioritized by business context.
> hardware would be subject to exactly the same bugs as software would, except it will be hard to update of course
Some "hardware" functions can be updated via microcode, which has been used to mitigate speculative execution vulnerabilities, at the cost of performance.
[1] https://en.wikipedia.org/wiki/Protection_ring
[2] https://en.wikipedia.org/wiki/Transient_execution_CPU_vulner...