I get the issue with Pluton, but the TPM is just a dedicated, certified secure key and random number generator that does a better job than a CPU doing it in software, and it's also a secure enclave for storing your encryption keys. Would you rather store the keys in memory, where they can be easily grabbed by malicious tools like Mimikatz? Macs have had the same feature for years in the T2 chip.
It's the exact system that enables wireless payment and other strong security features on your phone.
So having a TPM on PCs and using it for its intended purpose is a boon for everyone's security, so I don't see the issue, just FUD.
So in the worst case, if your attestation server is very strict, any new binary installed on your machine will prevent it from booting or satisfying the attestation. This is the main concern that TPM enables.
That is a bit misleading. The TPM is a passive device; it cannot verify any state. It is the OS that measures the system (in Linux via the IMA subsystem). And it is the Linux kernel that, if you have a TPM, can produce a process where a 3rd party can be sure that the measurements are "true" and "legit" (via PCR#10 extension).
As you state later, it is this 3rd party that asserts (verifies) whether you are in a state considered OK or not.
Maybe I am too simplistic, but I do not see the evil in the TPM here, but only in the 3rd party policy.
TPM can be abused but, as a developer, I am happy that we can use the TPM for good and fair goals in open source projects.
It is the user who can decide whether to use the TPM or not, and it should be noted that the TCG specification states that the TPM can be disabled and cleared by the user at any moment.
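For anyone unfamiliar with the mechanism being discussed: a minimal sketch of the PCR "extend" operation (assuming a SHA-256 PCR bank; the event names here are hypothetical, and real IMA measurements hash file contents plus template data):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR extend: new_value = SHA-256(old_value || measurement).
    A PCR can never be set directly, only extended, so the final
    value commits to the entire ordered sequence of measurements."""
    return hashlib.sha256(pcr + measurement).digest()

# PCR#10 starts zeroed at boot (assuming the SHA-256 bank).
pcr10 = bytes(32)

# The OS (e.g. Linux IMA) measures each item and extends the PCR with it.
for event in [b"kernel-image", b"initrd", b"/usr/bin/some-binary"]:
    measurement = hashlib.sha256(event).digest()
    pcr10 = pcr_extend(pcr10, measurement)

print(pcr10.hex())  # the value a TPM quote would later sign
```

The point of the construction is that the TPM only ever applies this one-way extend; the OS does the measuring, which is exactly the passive/active split described above.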
The evil is that the "Trusted" in "Trusted Computing" and "Trusted Platform Module (TPM)" means that one deeply distrusts the user (who might tamper with the system), but instead the trust lies in the computing (trusted computing) or TPM. In other words: Trusted Computing and TPM means a disempowerment of the user.
Sure Infineon can probably get my data, but that's far beyond the scope of my threat model.
As long as the system is open to putting your own keys on there I'm fine with it.
Sure, there are theoretical attacks on memory, but they are far less relevant for security than the penalties I have to accept with TPM being widely established.
Not that there aren't different means, but TPM also creates unique hashes of your system which only reinforces the problems around fingerprinting.
> It's the exact system that enables wireless payment and other strong security features on your phone.
Phones suck as computing devices on every conceivable metric and are heavily locked down. And it is not true that you need a TPM chip to make secure transfers; I constantly do business transactions on my PC just fine.
As long as software that uses the TPM cannot detect whether you tampered with the TPM or not, it is in principle all right.
But as I wrote: this is exactly the opposite of what trusted computing was invented for, namely making the machine trustworthy (for the companies that control the TPM/trusted computing) because the user is distrusted.
I would rather argue that it converges to "you become more and more morally obliged to learn about hacking (and perhaps become a less and less law-abiding citizen) if you buy a computer and use it".
You're thinking of SGX enclaves not TPM.
> TPM also creates unique hashes of your system
It doesn't. Your system creates hashes and appends them to lists signed by the TPM. And the point of those hashes is not to be unique, but to verifiably match known values.
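That "matching known values" property can be illustrated with a small sketch (assuming a SHA-256 PCR bank; the event names are hypothetical): a verifier replays the measurement log, and two machines running identical software produce the identical PCR value, so the value identifies a software state, not a machine.

```python
import hashlib

def replay_log(event_log):
    """Recompute the expected PCR value by replaying a measurement log."""
    pcr = bytes(32)  # SHA-256 PCR bank starts zeroed at boot
    for event in event_log:
        pcr = hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()
    return pcr

# Two machines booting the same software stack:
machine_a = replay_log([b"kernel-image", b"initrd", b"/usr/bin/editor"])
machine_b = replay_log([b"kernel-image", b"initrd", b"/usr/bin/editor"])
print(machine_a == machine_b)  # identical stacks -> identical PCR value

# A machine with one differing binary lands on a different value entirely:
machine_c = replay_log([b"kernel-image", b"initrd", b"/usr/bin/tampered"])
print(machine_a == machine_c)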
Yea, maybe we shouldn't live in the US, or other authoritarian nations, but few of us have options like that.