"x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit.
You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes."
Unless the physicists find some way to breathe new life into Moore's law, I suspect we will gradually move back to the mainframe approach of implementing more and more of the low-level stuff directly in hardware.
Theo using lots of clever words to call someone stupid isn't a refutation of this. Even if both layers have holes, an attacker now has to chain an exploit through each of them, so the fact that there's more than one layer does, in fact, suggest the composition is more secure.
Security guys have been going on about "defense-in-depth" for decades, and it all still looks like a trash fire to me.
From a systems perspective, you don't make things more robust by adding more layers that can break. You make them more robust by simplifying the system down to something manageable, then managing it.
You call it a security layer. I call it an extra attack surface.
Hypervisors do not add security by themselves. But they make it possible to implement security by isolation cheaply.
Cheaply means: 1) preserving backward compatibility with apps & drivers, and 2) drastically reducing the attack surface thanks to smaller APIs. (Note that the HVM hypercall API isn't very big. Mostly physical memory ops, vCPU ops, physdev stuff, evchans and sched-related stuff.[2])
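For a sense of scale, here's roughly what that surface looks like: a sketch using the hypercall numbers from Xen's public xen.h header, covering the ops listed above plus a couple more that guests use in practice (an illustrative subset I pulled out, not the complete list):

    /* A large part of the hypercall surface an HVM guest touches, per
     * Xen's public ABI header (xen.h). Compare this handful of entry
     * points with the hundreds of syscalls a conventional kernel exposes. */
    #define __HYPERVISOR_memory_op         12  /* physical memory ops */
    #define __HYPERVISOR_xen_version       17  /* version/feature queries */
    #define __HYPERVISOR_grant_table_op    20  /* shared-memory grants */
    #define __HYPERVISOR_vcpu_op           24  /* vCPU ops */
    #define __HYPERVISOR_sched_op          29  /* sched-related stuff */
    #define __HYPERVISOR_event_channel_op  32  /* evchans */
    #define __HYPERVISOR_physdev_op        33  /* physdev stuff */
    #define __HYPERVISOR_hvm_op            34  /* HVM parameter ops */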
[1] : https://twitter.com/rootkovska/status/843031083398692866
I'd say she is well aware of the limitations of her product.
The POWER series is a possible contender, but then you see the glory that is IBM behind it, and you know that no one wants to deal with that on the larger time scale (swap Intel for IBM? Why do that?)
SPARC is all but dead. MIPS is effectively dead. I've heard good things about RISC-V, though the question is: who will want to produce a non-differentiated CPU when others can do this as well ... that is, you can't really extend RISC-V unless you break the ISA.
Then there are the toolchain issues.
Having experienced the Calxeda failure as a partner, and having realized that the ARM marketing claims of a low-power Intel replacement were complete nonsense[1], I am not all that interested in climbing back on that particular heavily hyped horse.
[1] https://scalability.org/2013/12/the-evolving-market-for-hpc-... search for ARM.
ARM32 has "ARM hell" with multiple extensions, but they've mostly fixed this in ARM64. You can definitely have compatible ABIs at the level users care about, namely application binaries.
Multiple vendors means price competition too.
IMHO the major stumbling block for ARM64 vs. X64 is that X64 has so much existing market share. An installed user base is very powerful, and everyone knows X64 will work, so why take the chance? Hardware is cheaper than IT person-hours.
If someone fielded an X64-competitive ARM64 multi-core chip at a competitive price point it would probably get some traction.
Even back in the 70s, guys like Minsky and Kay knew that bending the man to accommodate the machine was not the way to go about it. x86 is even worse than that because we're bending the man to a half-baked machine that is the result of a collection of historical missteps committed by guys who were geniuses at chemistry and physics, but amateurs at computing.
Then to add insult to injury, in the early days of the PC, IBM dragged their feet and tried to hobble the thing enough so that it wouldn't eat into their mainframe sales. I believe that was part of why Microsoft parted ways with them.
In light of current circumstances, Rutkowska has developed a solution that's arguably more than just "reasonably" secure.
Hopefully the next platform shift will also iron out that ugly little wrinkle of centralized control.
For a very long time, Theo subscribed to the philosophy that the way to get a secure OS was to keep it as simple as POSIX and historical BSD would allow (and no simpler) while eradicating all the bugs. Eradicating bugs is obviously a good thing, but the track record of that strategy in the real world has not been great.
That's obviously changed over the last 5 years or so, but you should be careful about reflecting de Raadt's cynicism from a decade ago into modern discussions.
Qubes is surely a better bet than vanilla OpenBSD.
Is there a concrete reason you believe that or just a gut feeling?
It's pretty obvious to anyone, really. Even if you assume that OpenBSD had 0 bugs, it doesn't protect you if someone exploits your browser, or some application that you have, which may have a lot of bugs. In contrast, you can create two isolated OpenBSD VMs in Qubes: one where you do your banking-related activities (and set up the firewall to only allow connections to your bank, for example), and the other where you do your browsing, so that even if someone pwns your browser in the second VM, they won't be able to steal your banking login credentials - unless they have a Xen exploit, of course.
https://en.wikipedia.org/wiki/Address_space_layout_randomiza...
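If you want to see what that page describes on your own machine, here's a minimal C demo (my own sketch, nothing Qubes- or Xen-specific): with ASLR enabled, the printed addresses change on every run.

    /* Minimal ASLR demo. Build as a position-independent executable,
     * e.g. `cc -fPIE -pie aslr.c -o aslr`, then run it several times:
     * the stack, heap, and code addresses should all move between runs. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int stack_var;                /* lives on the stack */
        void *heap_ptr = malloc(16);  /* lives on the heap */
        printf("stack: %p\n", (void *)&stack_var);
        printf("heap:  %p\n", heap_ptr);
        printf("code:  %p\n", (void *)main);
        free(heap_ptr);
        return 0;
    }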
At the same time we treat the underlying hardware as inviolable because of "costs", which are probably just a drop in the bucket compared to the damage wrought by still using hardware that takes a life's work for a Linus Torvalds or a Matt Dillon to program, and even then there's still doubt about what they missed.
I just get the creeping feeling that we've got the economics backward, and that maybe it's time to do "code review" on the underlying architecture instead of investing in more bandages.
There are architectural components to our security problems (we still run systems with 1980s security models) and that needs to change.
By the way, I have no idea what "prince of bandages" means.
https://www.acsac.org/2005/papers/Snow.pdf
"Given today's common hardware and software architectural paradigms, operating systems security is a major primitive for secure systems - you will not succeed without it. The area is so important that it needs all the emphasis it can get. It's the current 'black hole' of security.
The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is sharing. In the security realm, the one word synopsis is separation: keeping the bad guys away from the good guys' stuff!
So today, making a computer secure requires imposing a "separation paradigm" on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels (i.e. side channels). We really need to focus on making a secure computer, not on making a computer secure -- the point of view changes your beginning assumptions and requirements."
Examples of those doing that in the high-security field:
Memory-safety at CPU level (a toy software sketch of the idea follows this list):
https://www.cs.rutgers.edu/~santosh.nagarakatte/softbound/
Language/type-based security at CPU level:
http://www.crash-safe.org/papers.html
Capability-security at hardware level:
https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
Security via crypto additions to CPU ops:
https://theses.lib.vt.edu/theses/available/etd-10112006-2048...
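To make the first item concrete, here's a toy C caricature of the fat-pointer idea behind SoftBound/CHERI-style designs (my own sketch, not code from those projects; the real designs keep the metadata and do the checks in hardware, invisibly to the programmer):

    #include <stdio.h>
    #include <stdlib.h>

    /* Carry base and bound alongside every pointer. */
    typedef struct {
        char *addr;   /* the raw pointer */
        char *base;   /* lowest valid address */
        char *bound;  /* one past the highest valid address */
    } fat_ptr;

    static fat_ptr fat_malloc(size_t n) {
        char *p = calloc(1, n);
        if (!p) abort();
        return (fat_ptr){ .addr = p, .base = p, .bound = p + n };
    }

    /* Every dereference is checked: out-of-bounds traps instead of
     * silently corrupting a neighbor's memory. */
    static char fat_load(fat_ptr p, size_t i) {
        if (p.addr + i < p.base || p.addr + i >= p.bound) {
            fprintf(stderr, "bounds violation\n");
            abort();
        }
        return p.addr[i];
    }

    int main(void) {
        fat_ptr buf = fat_malloc(8);
        fat_load(buf, 7);  /* last valid byte: fine */
        fat_load(buf, 8);  /* one past the end: aborts */
        return 0;
    }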
Of course, gotta do a fault-tolerant architecture since broken components break the model. Several ways to do that, each involving stronger assurances of hardware plus multiple units doing the same thing with voters comparing output (a toy voter is sketched after these links):
http://www.hpl.hp.com/techreports/tandem/TR-85.7.pdf
https://www.rockwellcollins.com/-/media/Files/Unsecure/Produ...
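A toy voter, to show the shape of the idea (my own illustration, not code from either paper; real designs like Tandem's do this in lockstepped hardware):

    #include <stdio.h>
    #include <stdint.h>

    /* Triple modular redundancy: three units compute the same answer,
     * and the voter takes the majority, so one faulty unit can't
     * silently corrupt the result. */
    static uint32_t vote3(uint32_t a, uint32_t b, uint32_t c) {
        if (a == b || a == c) return a;  /* a agrees with someone */
        if (b == c) return b;            /* a was the odd one out */
        fprintf(stderr, "no majority: all three units disagree\n");
        return a;  /* a real system would fail-stop here */
    }

    int main(void) {
        /* Pretend the second unit took a fault and returned a wrong value. */
        printf("voted result: %u\n", vote3(42, 41, 42));  /* prints 42 */
        return 0;
    }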
So, there's designing high-assurance, secure hardware in a nutshell. The best design will require a combo of these. There will still be RF-based leaks and interference, though. Many of us are just recommending avoiding computers for secrets in favor of paper, well-paid people, and traditional controls on that. Compare the recent leaks to what Ellsberg and others in that era had to pull off to get that much intel. Night and day.
The above tech is more about protecting stuff that has to be on a computer, or in an organization that refuses to ditch them. That's most of them. It's at least likely to protect the integrity of systems and data enough to reduce the impact external attacks have. That's worth a lot. Interestingly, the same CPU protections can be applied to similar separation kernels, runtimes, and/or OS's to protect everything from mobile to routers to desktops to servers. The only ones I know doing anything like this are Rockwell's AAMP7G, Sandia's SSP, Microsemi's CodeSeal, and some in the smartcard industry. All proprietary or government-only. Someone just has to build them for the rest of us in a way that supports FreeBSD or Linux, like the Watchdog and CHERI teams did.
I'm an embedded guy, so I'm looking from the outside in. Whenever I have to trunk something to the server room, they're usually trying to do just one thing, like e-mail (just as an example).
Of course there's an OS firewall, but you can't trust that, so you have to have another firewall, and that doesn't help so much with DDoS, so there's also Cloudflare, and the firewall doesn't understand e-mail, so there has to be an e-mail pre-filter, and you can't really trust the OS to isolate things, even though that's kind of in its job description, so you have to have a hypervisor, and since some things are too important to trust to the hypervisor, you have an extra box or two, and now that you have a half-dozen different systems in play, there has to be some form of monitoring service. I have seen almost every layer of this melt down in one way or another and take the rest of the chain down with it, and that isn't even my job.
I just think if we had saner hardware, where we could write performant-enough code without having to dirty our hands with pointer arithmetic, memory boundaries, manual boxing and tagging, manual memory management / software-based garbage collection, etc., we'd at least be in there with a shot at writing an e-mail server that could be put straight behind Cloudflare, one that would also let the IT guys drop their Prilosec prescriptions and get eight hours of sleep every night.
edit: my main point is that PC architecture is garbage. When I wrote "code review", I meant over the silicon. Both de Raadt and Rutkowska are putting their fingers in the dam. It's heroic, but it's also a waste of two very bright people.