At the same time we treat the underlying hardware as inviolable because of "costs", which are probably just a drop in the bucket compared to the damage wrought by still using hardware that demands the life's work of a Linus Torvalds or a Matt Dillon to program, and even then there's still doubt about what they missed.
I just get the creeping feeling that we've got the economics backward, and that maybe it's time to do "code review" on the underlying architecture instead of investing in more bandages.
There are architectural components to our security problems (we still run systems with 1980s security models), and that needs to change.
By the way, I have no idea what "prince of bandages" means.
https://www.acsac.org/2005/papers/Snow.pdf
"Given today's common hardware and software architectural paradigms, operating systems security is a major primitive for secure systems - you will not succeed without it. The area is so important that it needs all the emphasis it can get. It's the current 'black hole' of security.
The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is sharing. In the security realm, the one word synopsis is separation: keeping the bad guys away from the good guys' stuff!
So today, making a computer secure requires imposing a "separation paradigm" on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels (i.e. side channels). We really need to focus on making a secure computer, not on making a computer secure -- the point of view changes your beginning assumptions and requirements."
Examples of those doing that in the high-security field:
Memory-safety at CPU level:
https://www.cs.rutgers.edu/~santosh.nagarakatte/softbound/
Language/type-based security at CPU level:
http://www.crash-safe.org/papers.html
Capability-security at hardware level:
https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
Security via crypto additions to CPU ops:
https://theses.lib.vt.edu/theses/available/etd-10112006-2048...
Of course, you've gotta do a fault-tolerant architecture, since broken components break the model. There are several ways to do that, each involving stronger assurances of hardware plus multiple units doing the same thing with voters comparing output:
http://www.hpl.hp.com/techreports/tandem/TR-85.7.pdf
https://www.rockwellcollins.com/-/media/Files/Unsecure/Produ...
So, there's designing high-assurance, secure hardware in a nutshell. The best design will require a combo of these. There will still be RF-based leaks and interference, though. Many of us are just recommending avoiding computers for secrets in favor of paper, well-paid people, and traditional controls on that. Compare the recent leaks to what Ellsberg and others in that era had to pull off to get that much intel. Night and day.
The tech above is more about protecting stuff that has to be on a computer, or in an organization that refuses to ditch computers; that's most of them. At the least, it's likely to protect the integrity of systems and data and so reduce the impact external attacks have. That's worth a lot. Interestingly, the same CPU protections can be applied to separation kernels, runtimes, and/or OS's to protect everything from mobile to routers to desktops to servers. The only ones I know of doing anything like this are Rockwell's AAMP7G, Sandia's SSP, Microsemi's CodeSeal, and some in the smartcard industry, all proprietary or government-only. Someone just has to build them for the rest of us in a way that supports FreeBSD or Linux, like the Watchdog and CHERI teams did.
I'm an embedded guy, so I'm looking from the outside in. Whenever I have to trunk something to the server room, they're usually trying to do just one thing, like e-mail (just as an example).
Of course there's an OS firewall, but you can't trust that, so you have to have another firewall; and that doesn't help much with DDoS, so there's also Cloudflare; and the firewall doesn't understand e-mail, so there has to be an e-mail pre-filter; and you can't really trust the OS to isolate things, even though that's kind of in its job description, so you have to have a hypervisor; and since some things are too important to trust to the hypervisor, you have an extra box or two; and now that you have a half-dozen different systems in play, there has to be some form of monitoring service. I have seen almost every layer of this melt down in one way or another and take the rest of the chain down with it, and that isn't even my job.
I just think that if we had saner hardware, where we could write performant-enough code without having to dirty our hands with pointer arithmetic, memory boundaries, manual boxing and tagging, and manual memory management or software-based garbage collection, we'd at least have a shot at writing an e-mail server that could be put straight behind Cloudflare, and that would also let the IT guys drop their Prilosec prescriptions and get eight hours of sleep every night.
edit: my main point is that PC architecture is garbage. When I wrote "code review", I meant over the silicon. Both De Raadt and Rutkowska are putting their fingers in the dam. It's heroic, but it's also a waste of two very bright people.