That is a bit misleading. The TPM is a passive device; it cannot verify any state. It is the OS that measures the system (in Linux, via the IMA subsystem). And it is the Linux kernel that, if you have a TPM, can provide a process by which a third party can be sure that the measurements are "true" and "legit" (via PCR #10 extension).
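To make the mechanism concrete, here is a minimal sketch of the PCR extend operation that IMA relies on. The file names and event log below are made up purely for illustration; the only real detail is the extend rule itself, which per the TCG spec is new_PCR = H(old_PCR || measurement_digest):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM2 extend rule: new PCR value = SHA-256(old PCR value || measurement digest).
    # PCRs can only be extended, never set, which is what makes the log tamper-evident.
    return hashlib.sha256(pcr + measurement).digest()

# The SHA-256 PCR bank starts out as 32 zero bytes at boot.
pcr10 = bytes(32)

# Hypothetical IMA event log: digests of things the kernel measured (illustrative only).
event_log = [
    hashlib.sha256(b"boot_aggregate").digest(),
    hashlib.sha256(b"/usr/bin/bash").digest(),
]

for digest in event_log:
    pcr10 = pcr_extend(pcr10, digest)

# A remote verifier replays the event log it was given and checks that it
# reproduces the PCR #10 value the TPM signed in its quote.
replayed = bytes(32)
for digest in event_log:
    replayed = pcr_extend(replayed, digest)

assert replayed == pcr10
```

The point of the sketch: the TPM never judges anything here. It only accumulates digests and signs the result; whether the replayed log is "OK" is entirely the verifier's policy.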
As you state later, it is this third party that asserts (verifies) whether you are in a state considered OK or not.
Maybe I am being too simplistic, but I do not see the evil in the TPM here, only in the third party's policy.
The TPM can be abused, but as a developer I am happy that we can use it for good and fair goals in open source projects.
It is the user who decides whether to use the TPM or not, and it should be noted that the TCG specification states that the TPM can be disabled and cleared by the user at any moment.
The evil is that the "Trusted" in "Trusted Computing" and "Trusted Platform Module (TPM)" means that one deeply distrusts the user (who might tamper with the system); the trust instead lies in the computing (trusted computing) or the TPM. In other words: Trusted Computing and the TPM mean a disempowerment of the user.
Sure Infineon can probably get my data, but that's far beyond the scope of my threat model.
As long as the system is open to putting your own keys on there I'm fine with it.
As long as software that uses the TPM cannot detect whether you tampered with the TPM or not, it is in principle all right.
But as I wrote above: this is exactly the opposite of what trusted computing was invented for: making the machine trustable (for the companies that have control over the TPM/trusted computing), because the user is distrusted.
I would rather argue that it converges to "you become more and more morally obliged to learn about hacking (and perhaps become a less and less law-abiding citizen) if you buy a computer and use it".
Yeah, maybe we shouldn't live in the US or other authoritarian nations, but few of us have options like that.