The only way to make Linux containers a meaningful sandbox is to drastically restrict the syscall API surface available to the sandboxee, which quickly reduces its value. It's no longer a "generic platform that you can throw any workload onto" but instead a bespoke thing that needs to be tuned and reconfigured for every usecase.
This is why you need virtualization. Until we have a properly hardened and memory safe OS, it's the only way. And if we do build such an OS it's unclear to me whether it will be faster than running MicroVMs on a Linux host.
The only meaningful difference is that Linux containers aim to partition Linux kernel services: a shared-by-default, default-allow environment that was never designed for meaningful security and has never achieved it. The number of vulnerabilities of the form "whoopsie, we forgot to partition shared service 123" would be hilarious if it were not a complete lapse of security engineering in a product people are convinced is adequate for security-critical applications.
Present a vulnerability assessment demonstrating that a team of 10 with 3 years' time (~$10-30M, comparable to many commercially motivated single-victim attacks these days) can find no vulnerabilities in your deployment, or a formal proof of security and correctness. Otherwise we should stick with the default assumption that software is easily hacked, rather than the extraordinary claim that demands extraordinary evidence.
For example there is Kata Containers.
This can be used with regular `podman` just by changing the container runtime, so there's not even a need for any extra tooling.
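For anyone who wants to try it, the wiring is roughly this; the runtime binary path is an assumption that depends on how Kata is packaged for your distro:

```
# /etc/containers/containers.conf (or ~/.config/containers/containers.conf),
# registering Kata as an extra OCI runtime; the path is illustrative:
[engine.runtimes]
kata = ["/usr/bin/kata-runtime"]
```

After that, something like `podman run --runtime kata --rm alpine uname -r` should print the guest kernel version rather than the host's, which is a quick sanity check that the workload really landed in a MicroVM.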
In theory you could shove the container runtime into something like k8s
Depends, I guess, as Android has had quite a bit of success with seccomp-bpf and its Android-specific flavour of SELinux [0].
> Until we have a properly hardened and memory safe OS ... faster than running MicroVMs on a Linux host.
Andy Tanenbaum might say, Micro Kernels would do just as well.
Seccomp, capabilities, SELinux, AppArmor, etc. can help harden containers, but most of the popular containers don't even drop root for services. I was one of the people who tried to get Docker/Moby etc. to let you disable the privileged flag... which they refused to do.
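For reference, this is the kind of tightening most images never bother with; these are standard `docker run` flags, with the UID and image name as placeholders:

```
# Run the service as a non-root user, with no capabilities, no setuid
# re-escalation, and a read-only root filesystem:
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  example-image
```

Even with all of that applied, it's still a shared kernel underneath, which is the point being made here.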
While some CRIs make this easier, any agent that can spin up a container should be considered a super user.
With the docker --privileged flag I could read the host's root volume or even install EFI BIOS files, just using mknod etc. after walking /sys to find the major/minor numbers.
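To make the /sys trick concrete: block devices advertise their major:minor pair under /sys, and mknod(2) only needs those two integers to recreate the device node anywhere, which is why --privileged (which keeps CAP_MKNOD and disables the device cgroup filter) is game over. A small sketch of the arithmetic; the 8:0 numbers are the conventional ones for /dev/sda and purely illustrative:

```python
import os

# On a typical host, /sys/class/block/sda/dev would contain "8:0";
# 8 is the conventional major number for the first SCSI/SATA disk.
major, minor = 8, 0

# mknod(2) takes these packed into a single dev_t:
dev = os.makedev(major, minor)
print(os.major(dev), os.minor(dev))  # round-trips back to 8 0

# Inside a privileged container, this line (needs CAP_MKNOD, so it is
# commented out here) would recreate the host disk as a readable node:
#   os.mknod("/tmp/host_disk", 0o600 | 0o60000, dev)  # S_IFBLK
# Reading /tmp/host_disk then dumps the host's raw root volume.
```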
Namespaces are useful in a comprehensive security plan, but as you mentioned, they are not jails.
It is true that both VMs and containers have attack surfaces, but the size of the attack surface on containers is much larger.
There are VMMs (e.g. pKVM in upstream Linux) with small codebases that are isolated by silicon support for nested virtualization. This ships on recent Google Pixel phones/tablets, with strong isolation of the untrusted Debian Arm Linux "Terminal" VM.
A similar architecture was shipped a decade ago by Bromium and is now on millions of HP business laptops, including hypervisor isolation of firmware: "Hypervisor Security : Lessons Learned — Ian Pratt, Bromium — Platform Security Summit 2018", https://www.youtube.com/watch?v=bNVe2y34dnM
Christian Slater, HP cybersecurity ("Wolf") edutainment on nested virt hypervisor in printers, https://www.youtube.com/watch?v=DjMSq3n3Gqs
You can create security boundaries around (and even within!) the VMM. You can make it so an escape into the VMM process has only minimal value, by sandboxing the VMM aggressively.
Plus you can absolutely escape the model of C++ emulating devices. Ideally I think VMMs should do almost nothing but manage VF passthroughs. Of course then we shift a lot of the problem onto the inevitably completely broken device firmware but again there are more ways to mitigate that than kernel bugs.
Exactly. Android pulls this off by being extremely constrained. It's dramatically less flexible than an OCI runtime. If you wanna run a random unenlightened workload on it you're probably gonna have a hard time.
> Micro Kernels would do just as well.
Yea this goes in the right direction. In the end a lot of kernel work I look at is basically about trying to retrofit benefits of microkernels onto Linux.
Saying "we should just use an actual microkernel" is a bit like "Russia and Ukraine should just make peace" IMO though.
True, by "container" I really meant "shared-kernel container".
> In theory you could shove the container runtime into something like k8s
Yeah this is actually supported by k8s.
Whether that means it's actually reasonable to run completely untrusted workloads on your own cluster is another question. But it definitely seems like a really good defense-in-depth feature.
Is there any guarantee that this "silicon support" is any safer than the software? Once we break the software abstraction down far enough it's all just configuring hardware. Conversely, once you start baking significant complexity into hardware (such as strong security boundaries) it would seem like hardware would be subject to exactly the same bugs as software would, except it will be hard to update of course.
Intuitively there are differences. The Linux kernel is fucking huge, and anything that could bake the "shared resources" down to less than the entire kernel would be easier to verify, but that would also be true for an entirely software based abstraction inside the kernel.
In a way it's the whole micro kernel discussion again.
If you escape into a VMM you can do whatever the VMM can do. You can build a system where it can not do very much more than the VM guest itself. By the time the guest boots the process containing the vCPU threads has already lost all its interesting privileges and has no credentials of value.
Similar with device passthrough. It's not very interesting if the device you're passing through ultimately has unchecked access to PCIe, but if you have a proper IOMMU set up it should be possible to build a system where pwning the device firmware is just a small step rather than an immediate escalation to root-equivalent. (I should say, I don't know if this system actually exists today, I just know it's possible.)
With a VMM escape your next step is usually to exploit the kernel. But if you sandbox the VMM properly there is very limited kernel attack surface available to it.
So yeah you're right it's similar to the microkernel discussion. You could develop these properties for a shared-kernel container runtime... By making it a microkernel.
It's just that that isn't a path with any next steps in the real world. The road from Docker to a secure VM platform is rich with reasonable incremental steps forward (virtualization is an essential step, but it's still just one of many). The road from Docker to a microkernel is... Rewrite your entire platform and every workload!
Safety and security claims are only meaningful in the context of threat models. As described in the Xen/uXen/AX video, pKVM and AWS Nitro security talks, one goal is to reduce the size, function and complexity of open-source code running at the highest processor privilege levels [1], minimizing dependency on closed firmware/SMM/TrustZone. Nitro moved some functions (e.g. I/O virtualization) to separate processors, e.g. SmartNIC/DPU. Apple used an Arm T2 secure enclave processor for encryption and some I/O paths, when their main processor was still x86. OCP Caliptra RoT requires OSS firmware signed by both the OEM and hyperscaler customer. It's a never-ending process of reducing attack surface, prioritized by business context.
> hardware would be subject to exactly the same bugs as software would, except it will be hard to update of course
Some "hardware" functions can be updated via microcode, which has been used to mitigate speculative execution vulnerabilities, at the cost of performance.
[1] https://en.wikipedia.org/wiki/Protection_ring
[2] https://en.wikipedia.org/wiki/Transient_execution_CPU_vulner...
It appears we find ourselves at the Theory/Praxis intersection once again.
> The road from Docker to a secure VM platform is rich with reasonable incremental steps forward
The reason it seems so reasonable is that it's well trodden. There were an infinity of VM platforms before Docker, and they were all discarded for pretty well-known engineering reasons, mostly to do with performance but also with being difficult for developers to reason about. I have no doubt that there's still dialogue worth having between those two approaches, but cgroups isn't a "failed" VM security boundary any more than Linux is a failed microkernel. It never aimed to be a VM-like security boundary.