zlacker

[return to "Reasonably Secure Computing in the Decentralized World"]
1. jstewa+B6[view] [source] 2017-10-27 09:53:18
>>Dyslex+(OP)
Classic Theo:

"x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit.

You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes."

https://marc.info/?l=openbsd-misc&m=119318909016582

2. tptace+hH[view] [source] 2017-10-27 15:21:04
>>jstewa+B6
You want the rest of the list of architectural security features Theo also doesn't believe in? It's pretty long.

For a very long time, Theo subscribed to the philosophy that the way to get a secure OS was to keep it as simple as POSIX and historical BSD would allow (and no simpler) while eradicating all the bugs. Eradicating bugs is obviously a good thing, but the track record of that strategy in the real world has not been great.

That's obviously changed over the last 5 years or so, but you should be careful reflecting DeRaadt cynicism from a decade ago into modern discussions.

Qubes is surely a better bet than vanilla OpenBSD.

3. jstewa+E41[view] [source] 2017-10-27 17:55:26
>>tptace+hH
We've had all the king's horses and all the king's men, working around the clock, decade after decade, applying layer upon layer of tweaks and countermeasures, and all we have to show for it is a sort of papier-mâché wad that no one fully trusts or understands. Fix one flaw, introduce two.

At the same time, we treat the underlying hardware as inviolable because of "costs" -- costs that are probably a drop in the bucket compared to the damage wrought by continuing to use hardware that takes a life's work for a Linus Torvalds or a Matt Dillon to program, and even then there's still doubt about what they missed.

I just get the creeping feeling that we've got the economics backward, and that maybe it's time to do "code review" on the underlying architecture instead of investing in more bandages.

4. nickps+ni1[view] [source] 2017-10-27 19:37:38
>>jstewa+E41
Representing the high-assurance security view, Brian Snow probably said it best in his 2005 ACSAC presentation:

https://www.acsac.org/2005/papers/Snow.pdf

"Given today's common hardware and software architectural paradigms, operating systems security is a major primitive for secure systems - you will not succeed without it. The area is so important that it needs all the emphasis it can get. It's the current 'black hole' of security.

The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is sharing. In the security realm, the one word synopsis is separation: keeping the bad guys away from the good guys' stuff!

So today, making a computer secure requires imposing a "separation paradigm" on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels (i.e. side channels). We really need to focus on making a secure computer, not on making a computer secure -- the point of view changes your beginning assumptions and requirements."

Examples of those doing that in the high-security field:

Memory-safety at CPU level:

https://www.cs.rutgers.edu/~santosh.nagarakatte/softbound/
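The core idea there, roughly: every pointer travels with base/bound metadata for its allocation, and a compiler pass inserts a check before each access. A minimal software sketch of that idea (all names hypothetical, not SoftBound's actual API):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical "fat pointer": the raw pointer carries the
   base/bound metadata of the allocation it was derived from. */
typedef struct {
    uint8_t *ptr;
    uint8_t *base;
    uint8_t *bound;   /* one past the last valid byte */
} fatptr;

/* 1 if accessing n bytes at p.ptr stays inside [base, bound). */
static int fat_in_bounds(fatptr p, size_t n) {
    return p.ptr >= p.base && n <= (size_t)(p.bound - p.ptr);
}

/* Checked read: the check a compiler pass would insert before
   every load. Returns 0 (a trap, in a real system) on violation. */
static int fat_read(fatptr p, uint8_t *out) {
    if (!fat_in_bounds(p, 1)) return 0;
    *out = *p.ptr;
    return 1;
}
```

Pointer arithmetic updates only `ptr`; `base`/`bound` stay fixed, so an out-of-bounds dereference is caught no matter how the pointer got there.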

Language/type-based security at CPU level:

http://www.crash-safe.org/papers.html
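The tag-based flavor of this: every machine word carries a type tag, and the hardware checks tags before each operation instead of trusting the instruction stream. A toy sketch of that mechanism (names and tag set are invented for illustration):

```c
#include <stdint.h>

/* Toy tagged architecture: each word carries a type tag that
   the "hardware" checks on every operation. */
typedef enum { TAG_INT, TAG_PTR } tag_t;

typedef struct {
    tag_t tag;
    uint64_t bits;
} word;

/* Typed ALU add: defined only on two integers; anything else
   is a hardware type fault (modeled here as an error flag). */
static int tagged_add(word a, word b, word *out) {
    if (a.tag != TAG_INT || b.tag != TAG_INT) return 0;  /* fault */
    out->tag = TAG_INT;
    out->bits = a.bits + b.bits;
    return 1;
}
```

The point is that a pointer forged out of integer arithmetic never acquires the `TAG_PTR` tag, so whole classes of confusion attacks fault in hardware rather than succeed silently.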

Capability-security at hardware level:

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
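CHERI's distinctive property is monotonicity: code can derive a narrower capability (smaller range, fewer permissions) from one it holds, but never a broader one, and every load/store is checked against the capability used. A toy model of that invariant (struct layout and names are mine, not CHERI's encoding):

```c
#include <stddef.h>
#include <stdint.h>

#define CAP_READ  1u
#define CAP_WRITE 2u

/* Toy capability: an address range plus permission bits, with a
   validity tag the way CHERI tags capability words. */
typedef struct {
    uintptr_t base, len;
    unsigned perms;
    int valid;
} cap;

/* Monotonicity: a derived capability may only shrink the range
   and drop permissions. Violations clear the tag. */
static cap cap_restrict(cap c, uintptr_t off, uintptr_t len, unsigned perms) {
    cap out = {0};
    if (!c.valid || off > c.len || len > c.len - off || (perms & ~c.perms))
        return out;              /* invalid (tag cleared) */
    out.base = c.base + off;
    out.len = len;
    out.perms = perms;
    out.valid = 1;
    return out;
}

/* The check the hardware performs on every load/store. */
static int cap_allows(cap c, uintptr_t addr, size_t n, unsigned need) {
    return c.valid && (need & ~c.perms) == 0 &&
           addr >= c.base && n <= c.len && addr - c.base <= c.len - n;
}
```

Hand a callee only a read-only sub-range and there is no sequence of operations by which it gets write access back.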

Security via crypto additions to CPU ops:

https://theses.lib.vt.edu/theses/available/etd-10112006-2048...

Of course, you've also gotta do a fault-tolerant architecture, since broken components break the model. There are several ways to do that, each involving stronger assurances of hardware plus multiple units doing the same thing with voters comparing output:

http://www.hpl.hp.com/techreports/tandem/TR-85.7.pdf

https://www.rockwellcollins.com/-/media/Files/Unsecure/Produ...
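The voter idea in those designs is simple enough to sketch: run N replicas, pass through the majority value, and mask (or flag) the faulty unit. A minimal triple-modular-redundancy voter (names hypothetical):

```c
#include <stdint.h>

/* Triple modular redundancy: three replicas compute the same
   output; the voter passes through the majority value, masking
   a single faulty unit. */
typedef struct {
    uint32_t value;   /* majority output */
    int ok;           /* 0 if no two replicas agree */
} vote_result;

static vote_result tmr_vote(uint32_t a, uint32_t b, uint32_t c) {
    vote_result r = {0, 0};
    if (a == b || a == c) { r.value = a; r.ok = 1; }
    else if (b == c)      { r.value = b; r.ok = 1; }
    return r;   /* all three disagree: signal a fault, don't guess */
}
```

The hard part in real systems (NonStop, the Rockwell work) isn't the vote itself but keeping replicas in lockstep and making the voter itself trustworthy.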

So, there's designing high-assurance, secure hardware in a nutshell. The best design will require a combo of these. There will still be RF-based leaks and interference, though. Many of us are just recommending avoiding computers for secrets in favor of paper, well-paid people, and traditional controls on that. Compare the recent leaks to what Ellsberg and others in that era had to pull off to get that much intel. Night and day.

The above tech is more about protecting stuff that has to be on a computer, or in an organization that refuses to ditch them. That's most of them. At the least, it's likely to protect the integrity of systems and data, reducing the impact external attacks have. That's worth a lot. Interestingly, the same CPU protections can be applied to separation kernels, runtimes, and/or OS's to protect everything from mobile to routers to desktops to servers. The only ones I know of doing anything like this are Rockwell's AAMP7G, Sandia's SSP, Microsemi's CodeSeal, and some in the smartcard industry. All proprietary or government-only. Someone just has to build them for the rest of us in a way that supports FreeBSD or Linux, like the Watchdog and CHERI teams did.
