Hardware-based attestation of the running software is an important security feature, especially in a world where data leaks and identity theft are rampant. Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third-party vendor. Wouldn't you prefer that this data only be decryptable by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
If the vendor wants to install some self-built OS that they trust on their computer and not update it for 5 years, that's their business, but I may not want to trust their computer to have access to my personal data.
Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised). This is useful for more than just DRM.
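To make that concrete, here's a deliberately toy sketch of the flow in Python. Everything is simulated: an HMAC with a shared key stands in for a real TPM's asymmetric quote signature, and none of these names come from an actual attestation API.

```python
"""Toy model of attestation-gated key release (hypothetical names, not a
real API). A real TPM signs quotes with a non-extractable asymmetric key;
the shared-key HMAC here just stands in for that."""
import hashlib
import hmac
import os

DEVICE_KEY = b"simulated non-extractable TPM key"

# The boot measurement digest of the one OS image the data owner trusts.
APPROVED_PCR_DIGEST = hashlib.sha256(b"clean OS image, fully patched").digest()


def tpm_quote(nonce: bytes, pcr_digest: bytes) -> bytes:
    """Remote machine's side: bind its current measurements to a fresh nonce."""
    return hmac.new(DEVICE_KEY, nonce + pcr_digest, hashlib.sha256).digest()


def release_data_key(nonce: bytes, pcr_digest: bytes, quote: bytes):
    """Data owner's side: hand over the decryption key only if the quote is
    genuine AND the measurements match the approved image."""
    expected = hmac.new(DEVICE_KEY, nonce + pcr_digest, hashlib.sha256).digest()
    if not hmac.compare_digest(quote, expected):
        return None  # forged or replayed quote
    if pcr_digest != APPROVED_PCR_DIGEST:
        return None  # machine booted something the data owner doesn't trust
    return os.urandom(32)  # the (simulated) data encryption key


nonce = os.urandom(16)
ok = release_data_key(nonce, APPROVED_PCR_DIGEST,
                      tpm_quote(nonce, APPROVED_PCR_DIGEST))
bad = hashlib.sha256(b"5-year-old self-built OS").digest()
rejected = release_data_key(nonce, bad, tpm_quote(nonce, bad))
assert ok is not None and rejected is None
```

The point is just that the key-release decision keys off measurements the remote machine can't forge without the device key, which is exactly the property the vendor's "self-built OS they trust" fails.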
> I cannot say how much freedom it will take. Arguably, some of the new features will be “good.” Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat). Being able to potentially play 4K Blu-ray Discs on your PC again would be convenient.
However, I'm more worried about the questions the increased deployment of this technology will bring, such as whether Linux users will be doomed to a CAPTCHA onslaught for running "untrusted" devices, or worse. These are important questions that, unless raised, risk us just "going with the flow" until it is way too late.
I understand the mechanics in a "lies to children" way but who exactly is attesting what? Let's face it: MS isn't going to compensate me for a perceived flaw in ... why am I even finishing this sentence?
I recently bought some TPM 2.0 boards for my work VMware hosts so I could switch on Secure Boot and "attestation" for the OS. They are R630s, which have a TPM 1.2 built in, but a 2.0 jobbie costs about £16.
I've ticked a box or three on a sheet but I'm not too sure I have significantly enhanced the security of my VMware cluster.
Entities (ab)using remote attestation in order of 'screws over those below them':
Government > Cyber criminal groups > Large organizations > Normal people.
Do you want to live in a world where a large corp can dictate which $VERSION of $APPROVED_SOFTWARE you should be running? I think fundamentally it's just not the direction we should be going. I don't actually doubt that proper remote attestation will eventually be possible, but until then it will be possible to bypass it in countless ways. Eventually you'd probably end up with only a single software stack, assumed to be flawlessly secure.
I think, luckily, this will severely limit the usability of technology that works this way. Developing for this stack will be a pain, and the machine will have all sorts of super annoying limitations: can't use that display, the driver is not vetted; can't use that USB webcam, it might have DMA; etc. That will hopefully harm the uptake of such technologies.
As so often in tech, remote attestation is, in your case, a technical fix for a social problem. If the problem is sharing sensitive data with institutions you don't trust, then you need to build that trust, or transform the institutions so that they can be trusted. Transparency, laws, oversight, that type of stuff.
No.
Contrarian unpopular opinion: You cannot own data except what resides on your own property. Once you give someone a copy, it is theirs to do with as they wish. They may tell you what they will and will not do, but it is entirely on you to trust them.
...and that's the peril of things like remote attestation and other "zero trust" crap. They replace the nuanced meaning of trust that holds society together (and has literally done so since the beginning of life) with absolutes enforced by an unrelenting machine controlled by some faceless bureaucracy which is also partly under the command of the government. There should already be enough dystopian sci-fi to convince everyone why that is a really bad idea.
We've already seen shades of this in banking. After chips were added to credit cards, people started having their chargebacks denied because "our records show the card was physically present" (even if the charge originated in another country).
How long until companies try to deny responsibility for data leaks because "our records show Windows was fully up-to-date and secure"?
You can mandate whatever remote attestation you want, and they'll follow whatever security practices they damn well feel like, and you can't do a damn thing about it. So, you've given up your ability to run software that doesn't spy on you, and they're operating business as usual, because they don't have a single goddamn reason to care what you think remote attestation means in the real world.
This is about way more than just not being able to watch movies in 4K that you could also pirate. This is about turning people who don't have "trusted computing" devices that track their every behaviour into societal outcasts.
Let's say I'd like mandatory disclosure on shenanigans like that, so I can avoid this healthcare provider.
Yeah, because transferring that data into another machine is an impossible task.
That's the stupidest argument I've heard today...
Yes, dear Windows, you're running on a dual-core Xeon Gold 6326 with i440BX chipset. Don't ask how this is possible, just trust me...
Am I wrong about the effectiveness of this? I'll readily admit I don't understand most of the underlying tech here.
as defined by whom? Some government organization (which one)?
This will end up making everything more ossified and less secure.
But also, once that is in place, various organizations and governments will be able to force you to use whatever spyware they want in order for your attestation to go through.
That’s a huge caveat.
You also cannot verify your trust is deserved, and that it will continue to be deserved, because such a system by its very nature must be opaque to untrusted parties (which means you).
Good luck finding a provider that doesn't ship your sensitive medical data out to an EMR company though.
Quick edit to answer my own question: In my home state, paper prescriptions are only legal in a few situations (if it's for an animal, glasses, justifiable emergencies). However, in some parts of the country they're still possible. Even if I had a choice, I'd prefer the convenience of sending the data digitally; once you actually fill the paper prescription, CVS or whoever is still gonna be able to glean sensitive medical info, so you're just delaying the inevitable.
Partially. For online attestation you'd be missing the most important part: the vendor-signed keypair that is insanely hard to extract from the device.
No, you can attest to a completely open source system. Nobody's actually doing that, but it's possible. The private keys have to be secret and non-extractable, but that's it.
No, because this still doesn't mean my data is secure. A human can still go into the third party vendor's system and see my data, and if that human wants to steal it, they can. No amount of remote attestation will prevent that.
> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines
Oh, really? So every time my health care provider wants to send my data to a third party, a remote attestation confirmation box will pop up on my phone so I can say yes or no, or ask more questions about the third party vendor's computers?
Ultimately the problem here is trust, and trust is a social problem, and as the saying goes, you can't use technology to solve a social problem. But you sure can pretend to be using technology to "solve" a problem in order to get people to give up more and more control over their devices.
This is why consumer protection laws are more important than any technical means of financial security. Having a super duper hardware wallet to store your cryptocurrency doesn't negate the irreversible nature of transactions.
Raw data is even harder to secure than money. Money in a banking system can be clawed back or frozen. Data can't be un-leaked.
The caveat is that the security only extends to the kernel image, so for my use case I embed the initrd in the kernel image and keep all the filesystems and swap on a dm-crypt volume.
I also have to unseal and reseal when performing upgrades of the initramfs and above, but I'm fine with that.
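For anyone wondering why the reseal is needed: sealed secrets are bound to exact PCR values, and a PCR is just a hash chain over everything measured at boot. A rough stdlib-only illustration (the component names are made up):

```python
# A TPM PCR extend is new = SHA256(old || digest(measurement)), and sealed
# secrets are bound to exact PCR values. Embedding the initrd in the kernel
# makes kernel+initrd one measured blob; change either and the chain (and
# thus unsealing) breaks, hence resealing after an initramfs upgrade.
import hashlib


def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()


pcr = bytes(32)  # PCRs start at zero on reset
for component in [b"firmware", b"bootloader", b"kernel+embedded-initrd v1"]:
    pcr = extend(pcr, component)
sealed_to = pcr  # the dm-crypt key is sealed against this value

# After upgrading the initramfs, the measured blob differs...
pcr = bytes(32)
for component in [b"firmware", b"bootloader", b"kernel+embedded-initrd v2"]:
    pcr = extend(pcr, component)
assert pcr != sealed_to  # ...so the old sealed key no longer unseals
```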
But in a future world it's not hard to imagine the vendor software running in some sort of SGX-like environment that is very difficult to manually extract the data from.
I trust myself more than I trust anyone or anything else. It's as simple as that. I don't even slightly trust Microsoft, Google, or Apple.
Your logic is built on an invalid premise that these companies can, in fact, be trusted.
> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised).
This is exactly what I want to avoid. It's my device. It should only ever serve me, not anyone else, including its manufacturer and/or OS developer. It should not execute a single instruction that isn't in service of helping me achieve something.
Also, the concept of ownership can simply not be applied to something that does not obey the physical conservation law, i.e. can be copied perfectly and indefinitely.
Good luck getting your x86-64 Windows kernel + Chrome JavaScript exploit chain to run on my big-endian ARM64 running Linux and Firefox.
(Also, the existence of competition like that regularly forces all the alternatives to improve.)
I'd prefer it not be run on a computer which has already been compromised with a UEFI rootkit, which is what trusted computing has gotten us so far.
It's totally fine if it's used to empower and protect us, normal people. If we can use this with our own keys to cryptographically prove that our own systems haven't been tampered with, it's not evil, it's amazing technology that empowers us.
What we really don't need is billion dollar corporations using cryptography to enforce their rule over their little extra-legal digital fiefdoms where they own users and sell access to them to other corporations or create artificial scarcity out of infinite bits. Such things should be straight up illegal. I don't care how much money it costs them, they shouldn't be able to do it.
The problem is we have chip makers like Intel and AMD catering to the billion dollar corporation's use case instead of ours. They come up with technology like IME and SGX. They sell us chips that are essentially factory-pwned by mega-corps. My own computer will not obey me if it's not in some copyright owner's interest to do so and I can't override their control due to their own cryptographic roadblocks. Putting up these roadblocks to user freedom should be illegal.
While it might be theoretically possible to assert the whole network is up to date, the hospitals will definitely fail that check. There's all sorts of equipment the hospitals can't update for various reasons, such as aging radiology machines.
It's much, much worse with mobile devices. You can re-lock the bootloader on a Pixel with your custom key, but you still can't touch TrustZone and you'll still get a warning on boot that it's not running an "official" OS build.
Completely agree. These outdated notions of information ownership are destroying free computing as we know it. Everything "hacker" stands for is completely antithetical to such notions.
I once read about the hardware tricks DRM dongles use in the silicon itself. Doesn't sound like a $40 job :^)
That being said, extending it to everyone in a way that curtails individual control of computing devices creates an environment that is dangerous in many ways. I don't want to live in a world where only "approved" software is allowed on my computer or something. This can go wrong really quickly, and a lot of the application of attestation technology for consumers is really just about removing their freedoms.
The place where the government should step in, IMO, is not to ban CPU vendors from implementing this, but to pass anti-discrimination laws, i.e. to ban companies from requiring remote attestation to unlock some specific feature. Companies could maybe endorse it, or be allowed to warn you, but they should still allow full access regardless.
For the B2B setting there are obvious dangers of monopoly abuse; here the government just needs to enforce existing laws. Microsoft dropping the requirement that the third-party signing key has to be trusted is, IMO, a major antitrust violation.
I have a rooted Android phone and I had to spend effort spoofing attestation in order to even launch some of my games which don't even have multiplayer. Allow me to be the one to tell you that I do not appreciate it.
I don't even care enough to cheat at these games but if I wanted to cheat it would be merely an exercise of my computer freedom which nobody has any business denying.
Get the government to regulate the corporations requiring it. Classify any attestation requirement as discrimination or something. They're excluding people without good reason.
I'd rather be able to access it without google or microsoft sticking their nose in.
I'd rather be able to combine it with my other data in whatever ways I see fit.
I'd rather be able to back it up in whatever way I see fit.
I'd rather be able to open it on a device that doesn't have a backdoor provided by the US government.
Because it's not Microsoft's or Qualcomm's data, it's mine.
Who needs espionage or lobbying when you have an undetectable root shell on every computer in the country?
The only reason such hardware is secure is because the resources required to hack it are large.
Basically, a sane system would be: two parties exchange their own TPM keys, which they generated on-device themselves. They agree on a common set of measurements they will use with their TPMs to determine whether they believe the systems are running normally. They then exchange data. (Rough sketch at the end of this comment.)
What's happening instead: a large company uses its market position to bake in its own security keys, which the user can't access or change. They then use their market position to demand your system be configured a specific way that they control. Everyone else falls in line because they're a big player and manufacturing TPMs is complicated. They have full control of the process.
The essential difference is that rather than two individuals establishing trust and agreeing to protocols for it, secured with the aid of technology, one larger party seizes control by coercion, pretends it'll never do wrong, and allows people to "trust" each other as mediated by its own definition. Trust between individuals ceases to exist, because it's trust only so long as you're not betrayed by the middle-man.
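Here's the promised sketch of the owner-controlled version, with ordinary ECDSA keys (via the `cryptography` package) standing in for TPM-resident keys and a dict standing in for PCR values. The measurement digests are placeholders, and a real implementation would go through a TSS library rather than these made-up wrappers.

```python
import json
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The measurement set both owners agreed on (digests are placeholders).
AGREED_MEASUREMENTS = json.dumps(
    {"kernel": "sha256:<kernel digest>", "rootfs": "sha256:<rootfs digest>"}
).encode()


class Machine:
    def __init__(self):
        # Generated on-device by its owner; only the public half ever leaves.
        self._key = ec.generate_private_key(ec.SECP256R1())
        self.public_key = self._key.public_key()

    def quote(self, nonce: bytes) -> bytes:
        """Sign the current measurements bound to the peer's fresh nonce."""
        return self._key.sign(AGREED_MEASUREMENTS + nonce,
                              ec.ECDSA(hashes.SHA256()))


def peer_looks_normal(peer_pub, nonce: bytes, signature: bytes) -> bool:
    try:
        peer_pub.verify(signature, AGREED_MEASUREMENTS + nonce,
                        ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


# 1. The two owners exchange public keys out of band; no manufacturer involved.
alice, bob = Machine(), Machine()
# 2. Each challenges the other with a fresh nonce...
nonce = os.urandom(16)
# 3. ...and only exchanges data if the peer attests to the agreed state.
assert peer_looks_normal(bob.public_key, nonce, bob.quote(nonce))
```

Note there's no third party anywhere in that flow: the roots of trust are the two owner-generated keys, and the "approved configuration" is whatever the two parties negotiated.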
Weirdly enough, this is actually a big goddamn problem if you work for any organization that's doing government or security work, because the actual processes of those places tend to hinge on whether or not they believe large corporate providers doing things like this are doing them well enough, or can be trusted enough, to be a part of the process. So even if you're notionally part of "the system", it doesn't actually make anything easier. In an ideal world, open-source security components would enable COTS systems to be used by defense and government departments with surety, because they'd be built from an end-user trust and empowerment perspective.
So even the notional beneficiaries tend to have problems, because a security assessment ends up at "well, we just have to trust Microsoft not to screw up", and while the head of the NSA might be able to call them up and get access, a random state-level government department trying to handle healthcare or traffic data or whatever cannot.
If vendors were plain about it, "attestation" wouldn't be a big deal: you do not own the devices, we do, and you lease them from us, maybe for a one-time fee.
But companies know it won't actually fly if you're plain about it, ESPECIALLY with large corporations and governments, who will outright refuse to buy your services or equipment for many key things if they are not the ultimate controllers of the machines, for multiple reasons.
but if I want, I can still create my own arbitrary security requirements and enforce them via software/audits
Try doing that to your bank or whatever other large company you interact with...
Of course, that only works until they start rejecting external TPM chips, and accepting only the built-in "firmware" TPMs found in more recent CPUs.
This is a pretty bad example. The attack vector is rarely, if ever, the technical way the encrypted file is received or where it is decrypted. The attack vector is what happens after it's decrypted. You've given an encrypted file to a computer you've confirmed knows how to decrypt it "securely" (whatever that means). And after that, the clean OS image with all the latest security patches still allows, by design, the decrypted data to be used (read, manipulated, whatever it is you sent it to them for in the first place) and sent someplace else or copied to removable media.
The current landscape of CAPTCHA technology is pretty bleak. It's pretty easy to use ML to learn and solve the early first-gen CAPTCHAs that just used crossed-out words. Google reCAPTCHA relies primarily on user data, obfuscation, and browser fingerprinting to filter out bots, but that only works because of (possibly misplaced) trust in Google. It falls back to an image recognition challenge (which hCaptcha uses exclusively) if you don't have a good data profile - which can also be solved by automated means.
I don't see desktop Linux being shut out of the Internet as fully untrusted, if only because Google won't let it happen. They banned Windows workstations internally over a decade ago, and they are institutionally reliant upon Linux and macOS. What will almost certainly happen is that Linux will be relegated to forwarding attestation responses between Pluton, some annoying blob in Google Chrome, and any web service that does not want to be flooded with bots in our new hellscape of post-scarcity automation.
But if you're not sure whether the system booted cleanly, then it might be compromised. If it's compromised, couldn't your tools simply lie about the codes generated by both the TPM and the Yubikey so that they always match?
You almost certainly use software that calls their server at some point. Hope you will enjoy their vision of security. I'm moving into the woods if they can define how my _personal_ computer behaves.
The governments know this all too well; that's why they've been trying to ban cryptography, and it was (and I believe still is in many cases) classified as a munition.
As defined by the user.
RA doesn't care what software you run. In fact RA is better supported by Linux than any other OS! And, although the discussion in this thread is about RA of entire machines, that's actually pretty old school. Modern RA is all about attesting the tiniest slice of code possible, hence the "enclave" terminology. The surrounding OS and infrastructure doesn't get attested because it can be blinded with encryption. This is beneficial for both sides. I don't actually necessarily care how you configure your OS or even if it's up to date with security patches, if the security model treats the entire OS as an adversary, which is how Intel SGX works. You just attest the code inside the enclave and I send/receive encrypted messages with it.
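To illustrate the shape of that (these are not real SGX SDK calls; the field names are borrowed from SGX, everything else is made up):

```python
# Conceptual shape of enclave-level RA as described above: the verifier
# checks only the enclave's code measurement (MRENCLAVE in SGX terms) plus
# a hash binding the enclave's session key into the quote; the surrounding
# OS is simply treated as hostile. Real SGX quotes are verified through
# Intel's EPID/DCAP infrastructure; this just shows which fields matter.
import hashlib
from dataclasses import dataclass

EXPECTED_MRENCLAVE = hashlib.sha256(b"audited enclave build v1.2").digest()


@dataclass
class Quote:  # tiny stand-in for a real SGX quote structure
    mrenclave: bytes    # measurement of the enclave code only, not the OS
    report_data: bytes  # enclave-chosen bytes, e.g. hash of its session key


def accept_enclave(quote: Quote, enclave_session_pubkey: bytes) -> bool:
    # (A real verifier would first check the quote's signature chain.)
    if quote.mrenclave != EXPECTED_MRENCLAVE:
        return False  # not the code we audited; OS patch level is irrelevant
    # Bind the attested code to the key we'll exchange encrypted messages with.
    return quote.report_data == hashlib.sha256(enclave_session_pubkey).digest()


pubkey = b"enclave ephemeral public key"
good = Quote(EXPECTED_MRENCLAVE, hashlib.sha256(pubkey).digest())
assert accept_enclave(good, pubkey)
```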
There's nothing that keeps a medical provider from going old school.
Unless I'm completely overlooking something... It may have snuck in with ACA.
Do you realize how daft and unrealistic your assertion is?
Tell ya what. You get Broadcom, Intel, AMD, Nvidia, etc... to go full transparent, and we'll talk.
And in fact, if your provider is doing ePrescribing, odds are they are helping support a monopoly by Surescripts, which has cornered the market with anti-competitive business practices!
DEA still issues serialized paper prescription pads.
https://www.ftc.gov/news-events/news/press-releases/2019/04/...
Every time an ePrescription goes over the wire, this one weird company based out of Virginia is likely shotgunning your personal info, as collected by PBMs/health insurers, between all parties involved (with the obligatory copy for themselves, probably "anonymized" and repackaged for monetizable exposure to research groups), with contractual terms requiring that people in the network not make arrangements with anyone else for the service.
As a common victim of the perniciousness of this arrangement, I'm more than familiar with how this nonsense goes.
The manufacturer then signs the public portion of that TPM key, enabling everyone to assert that said key was generated inside their hardware (and thus couldn't be used by an emulator).
You yourself could also sign the public portion of the TPM key, or even generate a new one and sign it, but that wouldn't affect the perverse incentive generated by the manufacturer's assertion. It would just enable you to assert that you trust the TPM key is internal to the TPM without trusting the manufacturer's records.
We're dealing with something like the dual of software signing here.
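A toy illustration of that, using plain ECDSA via the `cryptography` package in place of the actual EK-certificate machinery. The mechanism is identical whether the manufacturer or the owner does the endorsing, which is rather the point:

```python
# The structure being described: the TPM generates an endorsement keypair
# internally, and someone vouches for the public half with a signature.
# Whether that someone is the manufacturer or you only changes whose public
# key verifiers must trust; plain ECDSA stands in for the real EK certs.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

ecdsa = ec.ECDSA(hashes.SHA256())

# Inside the TPM: keypair generated on-chip, private half never leaves.
tpm_ek = ec.generate_private_key(ec.SECP256R1())
ek_pub_bytes = tpm_ek.public_key().public_bytes(
    Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

# The manufacturer's endorsement of the public half... or equally, your own:
manufacturer = ec.generate_private_key(ec.SECP256R1())
owner = ec.generate_private_key(ec.SECP256R1())
mfr_endorsement = manufacturer.sign(ek_pub_bytes, ecdsa)
own_endorsement = owner.sign(ek_pub_bytes, ecdsa)


def trusts(trusted_root_pub, endorsement: bytes) -> bool:
    try:
        trusted_root_pub.verify(endorsement, ek_pub_bytes, ecdsa)
        return True
    except InvalidSignature:
        return False


# Same mechanism either way; the politics is in which root key services demand.
assert trusts(manufacturer.public_key(), mfr_endorsement)
assert trusts(owner.public_key(), own_endorsement)
```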
If it's just about limiting access, Cloudflare imposes a similar limitation on the number of accesses you can make to a website via remote attestation. I think once remote attestation becomes more prevalent, it might become useful in the ad business too, e.g. to prevent you from using ad blockers, or similar things.
The basic guiding principle in force since HIPAA in 1996 is that patients, not providers, control access to their medical records regardless of whether those are stored on paper or in an EHR. If the patient authorizes sharing those records with another healthcare organization then the provider can charge a small fee for that service but they can't introduce additional spurious technical requirements on the receiving system.
In the past you could use your FOSS client to communicate with your ICQ-, AIM-, and MSN Messenger-using friends. Today, not using the official client will likely get you banned from the network.