The inconvenient and somewhat embarrassing truth for us – the malware experts – is that there is no reliable method to determine that a given system is not compromised.
An old trick that researchers and implementors should breathe more life into. Hardware companies prefer mutable storage for financial reasons: customers aren't big on replacing hardware. But security-focused customers might go with pluggable ROMs, as long as the ROMs don't have to be swapped often. Hence the appeal of correct-by-construction approaches that produce few to no defects.
I ignored the (negative) hype and looked into TPMs recently, and I encourage others to do the same.[0] They look like excellent solutions, with the important exception that two of the three key hierarchies, the platform hierarchy and the endorsement hierarchy, appear to be fundamentally under the vendor's control and not mine (the latter can be disabled, as I understand it, but its functionality is then lost). It's surprising that enterprise IT would tolerate that - I'm not sure I will - but perhaps they can have the manufacturer deploy the corporation's keys in the roots of those hierarchies.
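For what it's worth, the disable knob here is the TPM2_HierarchyControl command. A minimal sketch of what that looks like through tpm2-tss's Esys API - an untested outline based on my reading of the spec, not a recommendation:

```c
/* Sketch: clear ehEnable so nothing rooted in the endorsement
 * hierarchy works until the TPM is next restarted. Assumes tpm2-tss
 * is installed; build with -ltss2-esys. Empty platform authorization
 * is assumed below, which real firmware will usually not allow. */
#include <stdio.h>
#include <tss2/tss2_esys.h>

int main(void)
{
    ESYS_CONTEXT *ctx = NULL;
    TSS2_RC rc = Esys_Initialize(&ctx, NULL, NULL); /* default TCTI */
    if (rc != TSS2_RC_SUCCESS) {
        fprintf(stderr, "Esys_Initialize failed: 0x%x\n", rc);
        return 1;
    }

    /* TPM2_HierarchyControl, authorized with the platform hierarchy,
     * toggling the endorsement hierarchy off. */
    rc = Esys_HierarchyControl(ctx,
                               ESYS_TR_RH_PLATFORM,    /* authHandle   */
                               ESYS_TR_PASSWORD,       /* auth session */
                               ESYS_TR_NONE, ESYS_TR_NONE,
                               ESYS_TR_RH_ENDORSEMENT, /* enable flag  */
                               TPM2_NO);               /* state: off   */
    if (rc != TSS2_RC_SUCCESS)
        fprintf(stderr, "Esys_HierarchyControl failed: 0x%x\n", rc);

    Esys_Finalize(&ctx);
    return rc == TSS2_RC_SUCCESS ? 0 : 1;
}
```

As I understand the spec, the enable flag comes back on at the next TPM restart, so this has to be reapplied every boot - and while it's off, anything keyed to the endorsement key (remote attestation included) is lost, which is exactly the trade-off mentioned above.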
Also, a TPM's security as a trust anchor depends on its implementation. TPMs look good in theory, but I have no idea whether the various vendors actually implement them well.
[0] By far the best source I found is "A Practical Guide to TPM 2.0: Using the Trusted Platform Module in the New Age of Security" by Arthur and Challener. It's also recommended by the Trusted Computing Group, the authors of the TPM specification.
Joanna Rutkowska, the lead developer of Qubes OS, has an article about this and is probably working hard on an implementation for x86 laptops:
https://blog.invisiblethings.org/2015/12/23/state_harmful.ht...
True. But never underestimate how common memory corruption bugs are. It's fucking embarrassing just how common they are. Look at the Project Zero tracker. Just the first page of the newest issues: "double-free", "out-of-bounds write", "use-after-poison", "use-after-free", "kernel double free", "kernel memory corruption due to off-by-one", "kernel heap overflow", "kernel uaf due to double-release", "heap-buffer-overflow"… And it's these bugs that often lead to the scariest situation for regular users, "I just visited a web page and my browser got pwned".
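To make the class concrete, here is about the smallest use-after-free you can write in C; all of the names are made up for illustration:

```c
/* A minimal use-after-free, the class of bug filling the Project Zero
 * tracker. Names are illustrative, not from any real codebase. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *session = malloc(64);
    strcpy(session, "user=guest");
    free(session);               /* the object's lifetime ends here... */

    char *attacker = malloc(64); /* ...but the allocator may hand the
                                    same chunk to the very next caller */
    strcpy(attacker, "user=admin");

    /* Dangling pointer: `session` may now alias attacker-controlled
       data, so any privilege check reading through it is corrupted. */
    return session[5] == 'a';    /* undefined behaviour */
}
```

Every item on that list is a variation on this theme: a pointer or index that outlives, or steps outside, the memory it was supposed to refer to.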
We solved this problem in the 90s. Try to keep up.
sigh
This is wrong. A computer's behaviour, even if it is allowed to access "true randomness", can be determined in finitely many steps. Sure, the upper bound on the number of steps is infeasibly large, but it is not unbounded.
Practically there might be no difference if you assume there is no limit, but excluding the possibility seems unjustified.
This is the reason I bought a Librem 13 laptop from them - they were already certified to work well with Qubes.
You can almost get a pass for being condescending ("try to keep up") if you know what you are talking about, but being both condescending AND wrong just makes you look foolish.
For example, a backdoor implanted in the disk firmware would be virtually undetectable for the vast majority of users.