Google's anti-rooting regime (SafetyNet) has been painful to experience. I'm not sure what's coming with the new Play Integrity API, but it's hard to hold out hope that users will see any wins here, or anywhere else.
This is also how OS vendors, app distributors, and platforms will ensure that they get their 30%+ cut of all revenue generated using their products.
Similarly, this is how OS providers will ensure that apps built for their platforms can't run on other operating systems. You already can't run SafetyNet-enabled Android apps on other platforms, despite Android app support existing on Windows and Linux.
It would be a conspiracy theory to say they were created by a three-letter government agency, but if I were running one of those three-letter agencies, this is exactly the kind of company I'd set up and control. People just give them their TLS keys lol
If you use a VPN or just like browsing in private mode, it will make your life as difficult as possible by making you solve multiple CAPTCHAs. And even then, it will sometimes not let you through.
If you're running a website, please stop using Cloudflare.
Hardware-based attestation of the running software is an important security feature, especially in a world where data leaks and identity theft are rampant. Say I'm a healthcare provider about to send sensitive medical data to a third-party vendor. Wouldn't you prefer that this data be decryptable only by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
If the vendor wants to install some self-built OS that they trust on their computer and not update it for 5 years, that's their business, but I may not want to trust their computer to have access to my personal data.
Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised). This is useful for more than just DRM.
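To make that concrete, here's a minimal sketch of the idea in Python. All names and "golden" values are hypothetical, and the plain dict comparison stands in for what a real TPM does with PCR-bound sealing:

```python
import hashlib

# Hypothetical "golden" boot measurements for an approved, fully patched OS.
APPROVED_PCRS = {
    0: hashlib.sha256(b"firmware-v1.2").hexdigest(),
    4: hashlib.sha256(b"bootloader-v3.1").hexdigest(),
    7: hashlib.sha256(b"os-image-2023-07-patched").hexdigest(),
}

def release_data_key(reported_pcrs: dict, key: bytes):
    """Hand over the decryption key only if the vendor's machine proves it
    booted the approved configuration (stand-in for TPM sealing)."""
    return key if reported_pcrs == APPROVED_PCRS else None

# A vendor box still on a five-year-old unpatched image fails the check:
stale = dict(APPROVED_PCRS)
stale[7] = hashlib.sha256(b"os-image-2018-unpatched").hexdigest()
assert release_data_key(stale, b"medical-record-key") is None
assert release_data_key(APPROVED_PCRS, b"medical-record-key") is not None
```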
> I cannot say how much freedom it will take. Arguably, some of the new features will be “good.” Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat). Being able to potentially play 4K Blu-ray Discs on your PC again would be convenient.
However, I'm more worried about the questions that increased deployment of this technology will bring, such as: will Linux users, being on untrusted devices, be doomed to a CAPTCHA onslaught, or worse? These are important questions that, unless raised, risk us just "going with the flow" until it is way too late.
I understand the mechanics in a "lies to children" way but who exactly is attesting what? Let's face it: MS isn't going to compensate me for a perceived flaw in ... why am I even finishing this sentence?
I recently bought some TPM 2.0 boards for my work VMware hosts so I could switch on secure boot and "attestation" for the OS. They are R630s, which have a TPM 1.2 built in, but a 2.0 jobbie costs about £16.
I've ticked a box or three on a sheet but I'm not too sure I have significantly enhanced the security of my VMware cluster.
Entities (ab)using remote attestation in order of 'screws over those below them':
Government > Cyber criminal groups > Large organizations > Normal people.
Do you want to live in a world where a large corp can dictate which $VERSION of $APPROVED_SOFTWARE you should be running? I think fundamentally it's just not the direction we should be going. I don't actually doubt that proper remote attestation eventually would be possible, but before then it will be possible to bypass it in countless ways. Probably eventually you'd end up with only a single software stack, assumed to be flawlessly secure.
I think, luckily, this will severely limit the usability of the technology that can work in this way. Developing for this stack will be a pain, the machine will have all sorts of super annoying limitations: can't use that display the driver is not vetted, can't use that USB webcam it might have DMA, etc. That will hopefully harm the uptake of such technologies.
As so often in tech, remote attestation is, in your case, a technical fix for a social problem. If the problem is sharing sensitive data with institutions you don't trust, then you need to build that trust, or transform the institutions so that they can be trusted. Transparency, laws, oversight, that type of stuff.
No.
Contrarian, unpopular opinion: you cannot own data except what resides on your own property. Once you give someone a copy, it is theirs to do with as they wish. They may tell you what they will and will not do, but it is entirely on you to trust them.
...and that's the peril of things like remote attestation and other "zero trust" crap. They replace the nuanced meaning of trust that holds society together (and has literally done so since the beginning of life) with absolutes enforced by an unrelenting machine controlled by some faceless bureaucracy which is also partly under the command of the government. There should already be enough dystopian sci-fi to convince everyone why that is a really bad idea.
We cannot do this now because user-hostile vendors have locked the functionality away from us. I think this is a perfect microcosm for the whole tradeoff: lock away the rest of your userspace and we might let you watch the new Batman DVD you already bought.
If you somehow try to work around this and use your "attested" machine and user ID to do it (because websites will require attestation and your script can't have it, but maybe it can run under your user account, for example), monitoring systems will soon block your account for "suspicious activity", and it will be next to impossible to reinstate because Google and Microsoft don't provide any human support, unless you are some million-plus-follower influencer on Instagram and manage to start a ruckus on social media.
The outlook is quite bleak :-(
We've already seen shades of this in banking. After chips were added to credit cards, people started having their chargebacks denied because "our records show the card was physically present" (even if the charge originated in another country).
How long until companies try to deny responsibility for data leaks because "our records show Windows was fully up-to-date and secure"?
We would benefit from a better public discussion of what "security" encompasses. Else, we risk conflating "what MS wants me to do with my computer" with "preventing hackers from stealing my credit card number".
Imagine a world where you could submit personal information to a company, with the technological assurance that this information would not leave that company... and you could verify this with remote attestation of the software running on that company's servers.
You can mandate whatever remote attestation you want, and they'll follow whatever security practices they damn well feel like and you can't do a damn thing about it. So, you've given up your ability to run software that doesn't spy on you, and they're operating business as usual, because they don't have a single goddamn reason to care what you think remote attestation means in the real world.
This is about way more than just watching movies in 4K that you could also pirate. This is about turning people who don't have "trusted computing" devices that track their every behaviour into societal outcasts.
Let's say I'd like mandatory disclosure on shenanigans like that, so I can avoid this healthcare provider.
Yeah, because transferring that data into another machine is an impossible task.
That's the stupidest argument I've heard today...
Yes, dear Windows, you're running on a dual-core Xeon Gold 6326 with i440BX chipset. Don't ask how this is possible, just trust me...
That's a classic "road to hell paved with good intentions". The approaching reality is more like:
Imagine a world where to be allowed to use the Internet you will be mandated to run certain software, which reports your personal information to a company you are obligated to use, and whose use of that information is absolutely something you do not want.
Yes, the problem is indeed "who's using it". Unfortunately you aren't going to be able to decide either, and it will certainly be used against you.
if you are a secondary priority user on some hardware, the way to fix it is to focus on becoming important enough to be prioritized instead of fearing some technology will limit things.
Am I wrong about the effectiveness of this? I'll readily admit I don't understand most of the underlying tech here.
Of course not. Things happen based on what investors and developers want. Users are very much secondary. They're a nuisance. If things did really happen based on some user-base need, would we have had Instagram or Facebook in their current form?
As defined by whom? Some government organization? Which government?
This will end up making everything more ossified and less secure.
But also, once that is in place, various organizations and governments will be able to force you to use whatever spyware they want in order for your attestation to go through.
That’s a huge caveat.
You also cannot verify your trust is deserved, and that it will continue to be deserved, because such a system by its very nature must be opaque to untrusted parties (which means you).
Good luck finding a provider that doesn't ship your sensitive medical data out to an EMR company though.
Whenever I see the "one more step" crap, I just close that tab.
Cloudflare needs to stop existing, and it needs to do so yesterday.
Quick edit to answer my own question: in my home state, paper prescriptions are only legal in a few situations (if it's for an animal, for glasses, or in justifiable emergencies). However, in some parts of the country they're still possible. Even if I had a choice, I prefer the convenience of sending the data digitally: once you actually fill the paper prescription, CVS or whoever is still going to be able to glean sensitive medical info, so you're just delaying the inevitable.
Partially. For online attestation you'd be missing the most important part: the vendor-signed keypair that is insanely hard to extract from the device.
No, you can attest to a completely open source system. Nobody's actually doing that, but it's possible. The private keys have to be secret and non-extractable, but that's it.
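To make that concrete, a toy version in Python, using the `cryptography` package's Ed25519 keys as a stand-in for the TPM's non-extractable attestation key. In real hardware the private half never leaves the chip, and the measured stack here is made up:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-in for the secret, non-extractable device key.
device_key = ed25519.Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

# Measurement of a fully open-source boot chain, e.g. a reproducible build.
measurement = hashlib.sha256(b"coreboot + linux-6.1 + open initramfs").digest()

# The device "quotes" its state by signing the measurement.
quote = device_key.sign(measurement)

# A verifier holding the device's public key checks the quote against the
# measurement it expects from the open-source build. Nothing in this flow
# requires the attested software to be proprietary.
try:
    device_pub.verify(quote, measurement)
    print("attested: device booted the expected open-source stack")
except InvalidSignature:
    print("attestation failed")
```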
No, because this still doesn't mean my data is secure. A human can still go into the third party vendor's system and see my data, and if that human wants to steal it, they can. No amount of remote attestation will prevent that.
> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines
Oh, really? So every time my health care provider wants to send my data to a third party, a remote attestation confirmation box will pop up on my phone so I can say yes or no, or ask more questions about the third party vendor's computers?
Ultimately the problem here is trust, and trust is a social problem, and as the saying goes, you can't use technology to solve a social problem. But you sure can pretend to be using technology to "solve" a problem in order to get people to give up more and more control over their devices.
This is why consumer protection laws are more important than any technical means of financial security. Having a super duper hardware wallet to store your cryptocurrency doesn't negate the irreversible nature of transactions.
Raw data is even harder to secure than money. Money in a banking system can be clawed back or frozen. Data can't be un-leaked.
The caveat is that the security only extends to the kernel image, so for my use case I embed the initrd in the kernel image and have all the filesystems and swap on a dm-crypt volume.
I also have to unseal and reseal when performing upgrades of the initramfs and above, but I'm fine with that.
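For anyone wondering what the unseal/reseal dance is about, here's a rough simulation of the measurement chain in plain Python. The TPM's actual PCR-bound sealing (e.g. what systemd-cryptenroll or clevis automate) is what releases the dm-crypt key against these values:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCRs can only be extended, never set: new = H(old || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def boot_chain(kernel_blob: bytes) -> bytes:
    pcr = b"\x00" * 32                  # PCR resets to zero at power-on
    pcr = pcr_extend(pcr, b"firmware")
    pcr = pcr_extend(pcr, kernel_blob)  # one blob: kernel + embedded initrd
    return pcr

sealed_to = boot_chain(b"vmlinuz-6.1 + initrd")
# After a kernel/initramfs upgrade the chain no longer matches, so the
# dm-crypt key has to be resealed against the new measurements:
assert boot_chain(b"vmlinuz-6.2 + initrd") != sealed_to
```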
But in a future world it's not hard to imagine the vendor software running in some sort of SGX-like environment that is very difficult to manually extract the data from.
I trust myself more than I trust anyone or anything else. It's as simple as that. I don't even slightly trust Microsoft, Google, or Apple.
Your logic is built on an invalid premise that these companies can, in fact, be trusted.
> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised).
This is exactly what I want to avoid. It's my device. It should only ever serve me, not anyone else, including its manufacturer and/or OS developer. It should not execute a single instruction that isn't in service of helping me achieve something.
Also, the concept of ownership can simply not be applied to something that does not obey the physical conservation law, i.e. can be copied perfectly and indefinitely.
Ask that question every time you see the word "security" written. There is no such thing as unqualified security.
- security for who?
- security from who?
- security to what ends?
Much of the time security is a closed system, fixed-sum game. My security means your loss of it.
Even if you run your own proxy and caching, you can’t trust your cloud provider not to DMA your keys unless you’re using trusted computing[0] (which ironically requires remote attestation if a company wants to verify it’s active on their CPU), and then chances are a dedicated three-letter-agency has exploits at the ready if they really need to extract information.
If a company isn’t running their own bare metal, nothing is safe.
0: https://aws.amazon.com/blogs/security/confidential-computing...
Good luck getting your x86-64 Windows kernel + Chrome JavaScript exploit chain to run on my big-endian ARM64 running Linux and Firefox.
(Also, the existence of competition like that regularly forces all the alternatives to improve.)
I use a VPN and private browsing, and the worst I've been subjected to is getting IP/ASN blocked, which to be fair can be implemented without Cloudflare. I've had to fill out CAPTCHAs, but that happens a few times a month at most, and it's never the CAPTCHA loop you mentioned.
I'd prefer it not be run on a computer which has already been compromised with a UEFI rootkit, which is what trusted computing has gotten us so far.
It's totally fine if it's used to empower and protect us, normal people. If we can use this with our own keys to cryptographically prove that our own systems haven't been tampered with, it's not evil, it's amazing technology that empowers us.
What we really don't need is billion dollar corporations using cryptography to enforce their rule over their little extra-legal digital fiefdoms where they own users and sell access to them to other corporations or create artificial scarcity out of infinite bits. Such things should be straight up illegal. I don't care how much money it costs them, they shouldn't be able to do it.
The problem is we have chip makers like Intel and AMD catering to the billion dollar corporation's use case instead of ours. They come up with technology like IME and SGX. They sell us chips that are essentially factory-pwned by mega-corps. My own computer will not obey me if it's not in some copyright owner's interest to do so and I can't override their control due to their own cryptographic roadblocks. Putting up these roadblocks to user freedom should be illegal.
While it might be theoretically possible to assert the whole network is up to date, the hospitals will definitely fail that check. There's all sorts of equipment the hospitals can't update for various reasons, such as aging radiology machines.
It's much, much worse with mobile devices. You can re-lock the bootloader on a Pixel with your custom key, but you still can't touch TrustZone and you'll still get a warning on boot that it's not running an "official" OS build.
Completely agree. These outdated notions of information ownership are destroying free computing as we know it. Everything "hacker" stands for is completely antithetical to such notions.
I once read about the hardware tricks DRM dongles use in the silicon itself. Doesn't sound like a $40 job :^)
That being said, extending it to everyone in a way that curtails individual control of computing devices creates an environment that is dangerous in many ways. I don't want to be in a world where only "approved" software is allowed on my computer or something. This can go wrong really quickly, and a lot of the application of attestation technology for consumers is really just about removing their freedoms.
The place where the government should step in, IMO, is not to ban CPU vendors from implementing this, but to pass anti-discrimination laws, i.e. ban companies from requiring remote attestation to unlock some specific feature. Companies could perhaps endorse attested clients, or be allowed to warn you, but they should still allow full access regardless.
For the B2B setting there are obvious dangers of monopoly abuse, here the government just needs to enforce existing laws. Microsoft dropping the requirement that the signing key for third parties has to be trusted is IMO a major antitrust violation.
I wish that were true. However, I think the movie Tron (1982) sums this up very nicely.
From the movie Tron:
> Dr. Walter Gibbs: That MCP, that's half our problem right there.
> Ed Dillinger: The MCP is the most efficient way of handling what we do! I can't sit here and worry about every little user request that comes in!
> Dr. Walter Gibbs: User requests are what computers are for!
> Ed Dillinger: Doing our business is what computers are for.
We are now moving toward a world where all computers have an "MCP". No, it is not there to solve user problems; it is there to do the business of the corporations that designed it.
I have a rooted Android phone and I had to spend effort spoofing attestation in order to even launch some of my games which don't even have multiplayer. Allow me to be the one to tell you that I do not appreciate it.
I don't even care enough to cheat at these games but if I wanted to cheat it would be merely an exercise of my computer freedom which nobody has any business denying.
> On-premise, open-source, customer-owned remote attestation servers are possible. Avoid outsourcing integrity verification to 3rd-party clouds.
With owner-operated OSS MDM & attestation servers, PCs can have diverse, owner-customized OS and configs, reducing monoculture binary blobs.
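A minimal sketch of what the owner-run verifier boils down to (the measurement strings are made up; OSS projects like Keylime implement the real TPM-quote version of this):

```python
import hashlib

def measure(config: str) -> str:
    return hashlib.sha256(config.encode()).hexdigest()

# Owner-maintained allowlist: as diverse as the owner wants, with no
# vendor or 3rd-party cloud deciding what counts as "trusted".
APPROVED = {
    measure("debian-12 + hardened kernel + custom WM"),
    measure("fedora-38 + selinux enforcing"),
    measure("buildroot image v7, self-built"),
}

def verify(reported: str) -> bool:
    return reported in APPROVED

print(verify(measure("fedora-38 + selinux enforcing")))      # True
print(verify(measure("vendor-mandated monoculture image")))  # False
```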
Get the government to regulate the corporations requiring it. Classify any attestation requirement as discrimination or something. They're excluding people without good reason.
We need to bring the giants down to a level playing field and stop this nonsense.
This is not something that is amenable to market-based solutions, as the market contains bad actors pushing for infringement of my ability to run software.
I'd rather be able to access it without google or microsoft sticking their nose in.
I'd rather be able to combine it with my other data in whatever ways I see fit.
I'd rather be able to back it up in whatever way I see fit.
I'd rather be able to open it on a device that doesn't have a backdoor provided by the US government.
Because it's not Microsoft's or Qualcomm's data, it's mine.
Who needs espionage or lobbying when you have an undetectable root shell on every computer in the country?
This choice of words perfectly captures the arrogance of these copyright corporations. Who are they to dictate how our computers work just to maintain their irrelevant business model? They're the ones who should be playing by our rules, not the other way around. It makes me wish piracy was as bad as they make it out to be, to the point it kills them.
We need this in our corporate client device fleet to counter specific threats. We need this in our servers for the same reason — we do remote attestation today for Linux servers in semi-trusted locations. We’ve conveyed to our vendors that this is a desired capability in next-gen network equipment.
We’re not doing this to control data once it’s on an end-user’s computer. We’re doing it because we have a regulatory (and moral) obligation to protect the data that is entrusted to us.
We're not Intel/AMD/NVIDIA/etc's largest customer, but when we defer orders or shift vendor allocation, it gets mentioned in their quarterly earnings reports. They tend to listen when we ask for features, and all the more so when our peer companies (not to mention governments), which have similar data security requirements, ask for the same thing.
Cloud and Business products is what, ~2/3rds of Microsoft’s revenue at this point? This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.
Outlaw any use of methods by which clients are discriminated against, including using remote attestation to do so. Similar language has been used in the DMCA to similar effect (aka software circumvention).
...which won't be able to interact with any of the walled gardens which will be enabled by these same technologies.
Walled gardens care about including their large customers, so it's not as simple as locking them out. There is also an ongoing EU legislative effort to mandate digital platform interoperability, which will likely apply to attestation.
While there is a need for complex criteria, there is also a need for something simple to base enforcement on.
The simple criteria should be something like having more than some number of customers/users gets you automatic scrutiny and forces you to have things like customer service people along with government required metrics (10K-type things) and some larger number of customers/users forces you to break up.
It's the uncontrolled aggregations of users and data that are the problem.
Great...
Just now that we have Proton, Vulkan, and other technologies allowing us to seamlessly run an incredible number of Windows games, we are going backwards and locking down again. And that's assuming we'll still be able to run other operating systems. Because why stop at remote attestation? Force secure boot, because, hey, no one is using other operating systems anymore since all cloud providers are requiring remote attestation.
The only reason such hardware is secure is because the resources required to hack it are large.
Basically, a sane system would be: two parties exchange their own TPM keys which they generated on device themselves. They agree to a common set of measurements they will use with their TPMs to determine if they believe the systems are running normally. They then exchange data.
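Sketched out in Python (toy Ed25519 keys from the `cryptography` package standing in for on-device TPM keys; the measurement strings are made up, and note there's no vendor anywhere in the loop):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

AGREED_MEASUREMENT = b"mutually-agreed measurement set v1"

class Party:
    def __init__(self, actual_state: bytes):
        self._key = ed25519.Ed25519PrivateKey.generate()  # generated on-device
        self.public_key = self._key.public_key()          # exchanged up front
        self._state = actual_state

    def quote(self) -> bytes:
        return self._key.sign(self._state)

def trusts(peer_pub, peer_quote: bytes) -> bool:
    try:
        peer_pub.verify(peer_quote, AGREED_MEASUREMENT)
        return True
    except InvalidSignature:
        return False

alice = Party(AGREED_MEASUREMENT)
bob = Party(b"quietly modified system")
print(trusts(alice.public_key, alice.quote()))  # True: exchange data
print(trusts(bob.public_key, bob.quote()))      # False: hold back
```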
What's happening instead: a large company uses its market position to bake in its own security keys, which the user can't access or change. They then use their market position to demand your system be configured a specific way that they control. Everyone else submits to them because they're a big player and manufacturing TPMs is complicated. They have full control of the process.
The essential difference is that rather than two individuals establishing trust and agreeing to protocols for it, secured with the aid of technology, one larger party seizes control by coercion, pretends it'll never do wrong, and allows people to "trust" each other only as mediated by its own definition. Trust between individuals ceases to exist, because it's trust only so long as you're not betrayed by the middle-man.
Weirdly enough, this is actually a big goddamn problem if you work for any organization that's doing government or security work, because the actual processes of those places tend to hinge on whether they believe large corporate providers doing things like this are actually doing them well enough, or can be trusted enough, to be a part of the process. So even if you're notionally part of "the system", it doesn't actually make anything easier. In an ideal world, open-source security components would enable COTS systems to be used by defense and government departments with surety, because they'd be built from an end-user trust and empowerment perspective.
So even the notional beneficiaries tend to have problems, because a security assessment ends up at "well, we just have to trust Microsoft not to screw up", and while the head of the NSA might be able to call them up and get access, a random state-level government department trying to handle healthcare or traffic data or whatever cannot.
The same insane regulations that were probably the result of corporate lobbying are now the excuse for these hostile features? WTF?
They've just made it illegal to forbid sideloading, for example. On its face it's going to be illegal for Apple/Android to use attestation to lock down their devices further; indeed, they are now legally required to open up in the EU.
Maybe they would go in one direction in the US and the opposite outside? Seems unlikely to me though…
> Maybe they would go in one direction in the US and the opposite outside? Seems unlikely to me though…
Companies seem to have no problem conforming to particular legal regimes all over the world and not importing/exporting them elsewhere. I believe companies will try everything in their power to protect their cash cows in the US for as long as they possibly can.
If vendors were plain about it, "attestation" wouldn't be a big deal: you do not own the devices, we do, and you lease them from us, maybe for a one-time fee.
But companies know it won't actually fly if you're plain about it, ESPECIALLY with large corporations and governments, who will outright refuse to buy your services or equipment for many key things if they are not the ultimate controllers of the machines, for multiple reasons.
Why? Part of what you said is something I already believe in. I think it's only a matter of time before the international network we enjoyed in the early days of the internet is destroyed by governments and their eternal struggle to impose their own incompatible laws on it. One day the internet will fracture into several regional networks with well delimited and filtered borders.
I for one am glad to have known the internet. It was truly one of the most wonderful creations of humanity.
So in your case, for devices you buy, you set up your corporate TPM key as the root owner, and then you send the device to employees, vendors, etc. The ownership chain is clear and you can send attestation requests. The corp is the owner of the device, and that is fairly obvious.
The issue is that when people and corps buy devices, they do not have effective root. Microsoft, Apple, Google, etc. have the TPM root key, and you as a corporation actually do not have root yourself. They can force you to do things you don't want to do. It makes you more vulnerable, because if it is in MSFT's interest (or they are coerced by the state to do so clandestinely), a lot of threats become possible, and they don't even need an 0day to do so!
If it starts becoming status quo, the freedom to do the things you need to your devices starts going away.
> but if I want, I can still create my own arbitrary security requirements and enforce them via software/audits
Try doing that to your bank or whatever other large company you interact with...
Then they should prove it. I'm sure they have lots of expensive lobbyists under their employ, have them go to the government and tell the politicians the computer industry needs regulation to make it illegal to screw over users by depriving them of their computer freedom. If effective rules make it into law, I will trust their intentions.
> are now the excuse for these hostile features
These features may be hostile if you don't control your own root of trust or if your vendor burns fuses prior to selling a device to you. If you were expecting otherwise, in that context they sold you a defective product.
Those same features are beneficial if you run your own root of trust. They help maintain control over your devices and increase confidence that they have not been coopted by your adversaries.
> - security for who?
Riot Games
> - security from who?
The users of their software.
> - security to what ends?
Ensuring that a device (A) is running Windows, (B) is running unmodified Windows system files, and (C) doesn't have a rootkit installed that replaces syscall behavior.
All of this is an effort to prevent cheats that wallhack/aimbot or otherwise give the player an unfair advantage. At the least, it ensures cheats can't be loaded early enough to hide their influence on the game process from the anti-cheat.
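As a toy illustration of those layered checks (all measurement values hypothetical):

```python
import hashlib

def h(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Hypothetical expected boot log, in load order.
EXPECTED = [h("windows-bootmgr"), h("ntoskrnl, signed"), h("no early drivers")]

def matchmaking_allows(boot_log: list) -> bool:
    """(A) Windows booted, (B) system files unmodified, (C) nothing was
    loaded early enough to rewrite syscall behavior unseen."""
    return boot_log == EXPECTED

clean = [h("windows-bootmgr"), h("ntoskrnl, signed"), h("no early drivers")]
rooted = [h("windows-bootmgr"), h("ntoskrnl, signed"), h("cheat driver")]
print(matchmaking_allows(clean))   # True
print(matchmaking_allows(rooted))  # False: cheat loaded before anti-cheat
```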
While I say "Riot Games" is who benefits, it's all at the request of their users; you can search for "hacker" or "cheats" on r/leagueoflegends and see tons of posts from years ago complaining about cheaters scripting (automatically using abilities in the best possible way) and gaining an unfair advantage. Every post's comments boil down to "Riot really should figure out how to stop these cheaters". It's a cat-and-mouse game, but it'll be a lot easier to catch the mouse once they can safely enable the remote attestation requirement and only lose 0.1% of their players.
On the less moral side, this can also be applied to single-player games to reduce the chances of a game's anti-piracy protections being cracked.
Totally!
The answer isn't to ban the tech, it's to ensure that everyone can set their own root if they so choose.
Want your system to only load firmware that's been signed by Debian to ensure that unfree blobs aren't sneaking in? Great! That's basically what we want too (s/Debian/our own supply chain integrity folks/g but same concept).
Of course, that only works until they start rejecting external TPM chips, and accepting only the built-in "firmware" TPMs found in more recent CPUs.
This is a pretty bad example. The attack vector is rarely, if ever, the technical way the encrypted file is received or where it is decrypted. The attack vector is what happens after it's decrypted. You've given an encrypted file to a computer you've confirmed knows how to decrypt it "securely" (whatever that means). And after that, that clean OS image with all the latest security patches still enables, by design, the decrypted data to be used (read, manipulated, do whatever it is you sent it to them in the first place) and sent someplace else or copied to removable media.
Originally, the ones who "needed" features like this were the big content distributors. Without these features, it's too easy for normal people to extract content and give copies of it to their friends and family.
As a parallel development, another one who "needed" features like this is Microsoft, for a different reason. They were taking reputational damage from malware, and needed a way to prevent malware from running before their operating system kernel (malware loading after the operating system kernel could be contained by the normal security mechanisms like ACLs).
These two development threads had enough in common that they ended up merging together, and those who want to prevent copying content can now point to security as an excuse. And yes, neither of these two groups care if you can't run Linux on your own devices.
> if you are a secondary priority user on some hardware, the way to fix it is to focus on becoming important enough to be prioritized instead of fearing some technology will limit things.
I fully agree that this is our best defense. In fact, the only reason we can still run Linux on our desktops and notebooks is that, when SecureBoot was developed, Linux was already important enough. However, this could only happen because Linux had time to grow and become important enough (while being a "secondary priority user" of the hardware) before things started to become limited. Had SecureBoot come before Linux became important enough, running third party operating systems would not have been allowed, and Linux would not have had a chance to grow and gain importance.
The current landscape of CAPTCHA technology is pretty bleak. It's pretty easy to use ML to learn and solve the early first-gen CAPTCHAs that just used crossed-out words. Google reCAPTCHA relies primarily on user data, obfuscation, and browser fingerprinting to filter out bots, but that only works because of (possibly misplaced) trust in Google. It falls back to an image recognition challenge (which hCaptcha uses exclusively) if you don't have a good data profile - which can also be solved by automated means.
I don't see desktop Linux being fully untrusted off the Internet, if only because Google won't let it happen. They banned Windows workstations internally over a decade ago and they are institutionally reliant upon Linux and macOS. What will almost certainly happen is that Linux will be relegated to forwarding attestation responses between Pluton, some annoying blob in Google Chrome, and any web service that does not want to be flooded with bots in our new hellscape of post-scarcity automation.
Attestation can also be entirely local, e.g. between a device and a USB key with OSS software that is configured by the owner.
RA is a shortcut companies are taking to market, selling privacy and security while not mentioning lock-in, network effects, and DRM. When pressed, they will ultimately still say privacy and security aren't 100%.
That said, could a Linux distro come out with the same thing as these PATs using IMA/EVM/TPM/ME/PSP? Probably (GrapheneOS has some support for RA, but I haven't looked in depth), and as long as Cloudflare had multi-platform support and the implementation still let me keep control, I wouldn't see the problem. Bugs can be exploited by anyone with the know-how, and I see no reason why I should give up control because companies want to take a shortcut instead of designing proper systems.
We've seen Epic already say they won't support Linux because of anti-cheat, so that scapegoat exists, but there are other games that didn't go that route! Porting costs and install-base size are real excuses, among others, but anti-cheat is a BS excuse. No shortcuts by companies and no shortcuts by platforms, but we know lock-in, network effects, and DRM are too good to let go.
Just to be clear: I hate how movies are currently distributed.
Can you please expand on what you verify via remote attestation and against which attack vectors this protects you?
Does this protect you against the usual attack vectors of your employees logging in on phishing sites, downloading malware, running office macros etc? Stealing your data usually does not need any root/kernel access.
But if you're not sure whether the system booted cleanly, then it might be compromised. If it's compromised couldn't your tools simply lie about the codes generated by both the TPM and the Yubikey so that they always match?
It's like putting a camera network and automated tranq drones in every playground so kids don't play tag 'wrong'.
This insanity of trying to conflate complete submission to a third party with trust or security when in reality it provides neither because that party is an adversary is a society-wide mental illness.
If this were true, how would the malware ever get itself to the point where it is loaded before the kernel is?
You almost certainly use software that calls their server at some point. Hope you will enjoy their vision of security. I'm moving into the woods if they can define how my _personal_ computer behaves.
I feel like part of the problem is that Remote Attestation providers get to have their cake and eat it too: they make a theme park, set up boundaries, and charge admission under the premise that it's safer to play in their walled garden than in a public park.
But if a bad actor slips through their gate and picks a few pockets or kidnaps a couple children, the operators get to say "not our problem, our services have no warranty -- read the EULA".
I feel like in the real world, if a park operator explicitly bills itself as "a safe place to play" it's their problem if someone goes on a crime spree on their property -- there is some duty to deliver on the advertised safety promise.
But somehow, in the software world people can control admission, control what you do and somehow have no liability if things still go off the rails. It's just a sucker's game.
Of course, I'd rather not see remote attestation happen, but maybe part of the reason it keeps creeping back is exactly because there is zero legal downside to making security promises that can't be kept, but incredible market advantages if they can sucker enough people to believe in the scheme.
The normal way to do this is to run your static content through CDNs and allow your dynamic content to hit origin.
You're not saved from DDoS, of course, but you'd be surprised at how much sending cookies with static content can cost you in CDN fees; usually people use a separate cookieless domain.
I play some games like Valorant which use Ring 0 anti-cheat mechanisms, and to do this I have a Corsair i300 which I bought basically exclusively for FPS, flight simulators, and other games that I enjoy. I'm actually equally unhappy with corporate-provided Mobile Device Management and "Endpoint Protection" technologies being on personally-owned devices, but one clear solution is to just physically partition your devices by purpose and by what restrictions you're willing to tolerate on them. "But I can't do what I want with the hardware that I own" is a bit misleading: you can, you just might not also have the right to participate in some communities (those that have 'entry requirements' which you no longer meet if you won't install their anti-cheat mechanisms).
Why tolerate Riot Games, why not "play games with a community that has accountability"? It's simple for me: in the extremely limited free time that I have for this activity, my objective is to click <PLAY> and quickly get into a game where my opponents are 'well balanced' (matched against my own abilities) and servers which are not infested with cheaters.
Without any question in my mind, cheaters utterly ruin online multiplayer games. Team Fortress 2 has been a haven of bots and cheats for several years, and Valve is only recently starting to take steps to address it.
I have exactly zero desire to spend time "locating communities with accountability". I want a matchmaking system provided by Riot Games which simply doesn't tolerate cheating, period. I'm willing to be in that community even with its 'entry requirements'. You may not be willing to submit to those entry requirements and that's okay. You should advocate that games support your desire to launch without anti-cheat protections, and restrict you to playing on 'Untrusted Servers' outside the first-party matchmaking community, where you will enjoy no anti-cheat protection, and you can gather freely with your own "communities with accountability".
The governments know this all too well; that's why they've been trying to ban cryptography, and it was (and I believe still is in many cases) classified as a munition.
Informed consent requires that the consenter understand what is happening, know what the implications are, and agree. Riot Games' anti-cheat software doesn't pass the first two, and it's largely irrelevant to the conversation because this use case is a trojan horse anyway.
Community and social graph is a finite resource. I can't just go get another one if you colonise mine.
This is exactly the same argument libertarians have against food safety and labelling regulations. I can't go get baby formula without melamine in it if every brand has it because they price dumped to bankrupt the competition and I don't have a chemistry lab to test for it.
I can't go find another bank if they all switch to requiring attestation. I can't go buy another government. I can't go find a new social graph if everyone on it is on facebook.
Operating systems and CPUs are utilities with natural monopolies, as is communication software. Treating an ecosystem, a community, and a social graph as a fungible good is a blatant lie.
It's not an assumption. It's the reality of the information age. The only way to have complete control over information is to not publish it. Once data's out there, there's virtually no way to control what will be done with it. Creators started from a lost position: they want to publish their works and yet they want to somehow control what happens to "their" data. The level of tyranny necessary to accomplish such an end requires the destruction of free computing as we know it. We're already seeing shades of it today: computers that aren't really ours, they only do some corporation or government's bidding. It's only a step away from such digital copyright enforcement nonsense to far more serious matters like cryptography regulation.
So I'm not the one assuming anything. It's creators who live under this notion that they own their creations. The truth is public domain is the default state. Intellectual property laws are responsible for bending reality and introducing this assumption that you can even own ideas to begin with. That was workable in the age of printing presses but not in the 21st century where everyone has multiple powerful globally networked computers at home. I for one think computers are a far more important innovation than almost everything humanity has ever created and I don't think enabling creators to continue living under such illusions is important enough to cripple the potential of computers. I want society to eventually reach a post scarcity state in the real world, mirroring the digital world. I don't want corporations creating artificial economies where there are none.
All creations are just data, and data is just bits, and bits are just numbers in base two. All intellectual work comes down to humanity discovering a really big unique number. How is it even sane to claim ownership over such a thing?
Earlier this spring, Easyanticheat crashed the Windows 11 Insider kernel and a good deal of games were unplayable for weeks.
The concept of remote attestation isn't somehow safer if it works perfectly, and it isn't clear to me that this is actually impossible to build (within an acceptable and specified liability constraint) as opposed to merely exceedingly difficult. I do relish the schadenfreude, though ;P.
> Of course, I'd rather not see remote attestation happen...
Interestingly, the CEO of MobileCoin told me earlier this year that they were "going deeper on discussions with [you] to design a fully open source enclave specifically for [their] use case" (which, for anyone who doesn't know much about this, currently relies on remote attestation and encrypted RAM from Intel SGX to allow mobile devices to offload privacy-sensitive computations and database lookups to their server). I wrote a long letter to you a few days later in the hope of (after verifying with you whether that was even true or not) convincing you to stop, but then decided I should probably try to talk to Kyle and/or Cory first on my way to you (and even later ended up deciding I was stressed out about too many things at the time to deal with it)... does this mean you actually aren't, and we are all safe? ;P (I guess it could be the case that this special design somehow doesn't involve any form of remote attestation--as while my core issue with their product is their reliance on such, I went back through the entire argument and I didn't use that term with THEM--in which case I'm very curious how that could actually work.)
The premise of personal computing is that my computer works as my agent. For any remote party that I'm interacting with, their sphere of influence ends at the demarcation point of the protocol we interact over. Attempts to dictate what software my computer can run when interacting with them are unjust, and ultimately computationally disenfranchising. Despite the naive references littered throughout this thread to users being able to verify what software companies are running, it will never work out that way, because what remote attestation does is magnify existing power relationships. This is why so many people are trying to fall back to the usual crutch of "Exit", as if going somewhere else could possibly tame the power imbalances.
Practically what will happen is that, for example, online banks (and then web stores, and so on) will demand that you only can use locked down Apple/Windows to do your online banking. This will progress somewhat evenly with all businesses in a sector, because the amount of people not already using proprietary operating systems for their desktop is vanishingly small. Which will destroy your ability to use your regular desktop/laptop with your regular uniformly-administered OS, your nice window manager, your browser tweaks to deal with the annoying bits of their site, your automation scripts to make your life easier etc. Instead you'll be stuck manually driving the proprietary Web TV experience, while they continue to use computers to create endless complexity to decommodify their offerings - computational disenfranchisement.
I'll admit that you might find this argument kind of hollow with respect to games, where you do have a desire to computationally disenfranchise all the other players so it's really a person-on-person game. But applying these niche standards of gaming as a justification for a technology that will warp the entire industry is a terrible idea.
RA is a use-case neutral hardware feature, so it doesn't really make sense to talk about making providers liable for anything. That's an argument for making CPU manufacturers liable for anything that goes wrong with any use of a computer.
The sort of companies that use RA are already exposed to losses if RA breaks, that's why they invest in it to start with. Console makers lose money if cheating is rampant on their platforms for example, because people will stop playing games when they realize they can't win without cheating.
So what you're saying is, let's incentivize these already incentivized people to use RA even more, and moreover, let's strongly incentivize companies that don't use it to start doing so. Because if you think governments will say "oh, you didn't use the best available tech to protect the kids, fair enough no liability" then you're not very experienced with how governments work! They will say "you should have used RA like your competitors, 10x the fine".
That world already exists, it just doesn't get used much. You can do this with Intel SGX and AMD SEV.
The obvious place for this is blocking cloud providers from accessing personal data. For example, it could be used to resolve concerns about using US based services from Europe, because any data uploaded to such a service can be encrypted such that it's only processed in a certain way (this is what RA does).
RA gets demonized by people making the arguments found in the sibling comment, but they end up throwing the baby out with the bathwater. There are tons of privacy, control and decentralization problems that look intractable until you throw RA in the mix, then suddenly solving them becomes easy. Instead of needing teams of cryptographers to invent ad-hoc and app specific protocols for every app (which in reality they never do), you write a client that RAs the server to check that it's running software that won't leak your private information as part of the connect sequence.
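The connect sequence is conceptually as simple as this sketch (all names hypothetical; a real client would verify a signed SGX quote against the enclave's measurement rather than compare a bare hash):

```python
import hashlib

# Measurement of the audited open-source server build that provably does
# not leak what it processes (think of an SGX enclave's MRENCLAVE value).
AUDITED_BUILD = hashlib.sha256(b"privacy-preserving-lookup-server v1.4").hexdigest()

def connect_and_upload(server_attested_build: str, data: bytes) -> bool:
    """Client-side gate: refuse to send anything unless the server attests
    to running exactly the audited binary."""
    if server_attested_build != AUDITED_BUILD:
        return False  # unknown software; keep the data local
    # ...proceed over the encrypted channel established during attestation
    return True

print(connect_and_upload(AUDITED_BUILD, b"private query"))  # True
print(connect_and_upload(hashlib.sha256(b"mystery build").hexdigest(),
                         b"private query"))                 # False
```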
As defined by the user.
RA doesn't care what software you run. In fact RA is better supported by Linux than any other OS! And, although the discussion in this thread is about RA of entire machines, that's actually pretty old school. Modern RA is all about attesting the tiniest slice of code possible, hence the "enclave" terminology. The surrounding OS and infrastructure doesn't get attested because it can be blinded with encryption. This is beneficial for both sides. I don't actually necessarily care how you configure your OS or even if it's up to date with security patches, if the security model treats the entire OS as an adversary, which is how Intel SGX works. You just attest the code inside the enclave and I send/receive encrypted messages with it.
A company, an organization or an individual can have security guards, security procedures, etc. Security can protect the organization from objectively malicious threats, but security can also mean protection from any real or perceived threat to someone's interests.
Security can also protect an organization from the leakage of embarrassing or potentially incriminating information. An authoritarian regime has security to prevent it from being challenged. Security guards at an industrial site might stop activists from getting onto the grounds to gather evidence of harm to the environment or people. Indeed, security staff would stop unauthorized people regardless of those people's intentions.
All of those are examples of security even if other people's legitimate interests were in conflict with it.
Security is for someone, and from someone or something.
Any cheater will probably still do really well against another cheater while a human won’t have a chance. I think this is kind of like shadow banning?
There's nothing that keeps a medical provider from going old school.
Unless I'm completely overlooking something... it may have snuck in with the ACA.
About that: I imagine that the millisecond you can validate via remote attestation that a client has no ad blockers, Cloudflare will add a remote attestation "gateway" (like the one they have now with the CAPTCHA) that will, overnight, give every Cloudflare customer (so half of the internet) the ability to block users who may have ad blockers.
It's simply too juicy of a service for these people.
It's also interesting to see how the game of "telephone" works out when the message comes full circle. Mobilecoin did reach out to me, initially to see if I would write a whitepaper on SGX. After I told them I would be frank about all my opinions, the conversation pivoted to "well, if you could make something that fixed this problem what would it be?". Which I entertained by saying I think the problem may not be solvable, but whatever it was, it had to be open source; and "oh by the way let me tell you about my latest projects, perhaps I could interest you in those". To which it trailed off with a "I'll have my people call your people" and that was that, modulo a podcast I did for them about a month ago which surprisingly didn’t touch on SGX.
So: long story short, no, I'm not creating a solution for them, and I think remote attestation is both a bad idea and not practical. Is it worse than burning some hundreds of terawatt-hours of power per annum to secure a cryptocurrency? That is a harder question to answer: is climate change a bigger problem than remote attestation? The answer is probably obvious to anyone who reads that question, but no two people will agree on what it is.
To your point on RA being not impossible but possibly just exceedingly difficult – you might be right. My take on it is that remote attestation is only "transiently feasible": you can create a system that is unbreakable with the known techniques today; but the very "unbreakability" of such a scheme would cause ever more valuable secrets to be put in such devices, which eventually promotes sufficient investment to uncover an as of yet unknown technique that, once again, breaks the whole model.
Which is why I’m calling out the legal angle, because the next step in the playbook of the corps currently pushing RA is to break that cycle -- by lobbying to make it unlawful to break their attestation toys. Yet, somehow, they still carry no liability themselves for the fact that their toys never worked in the first place. I feel like if they actually bore a consequence for selling technology that was broken, they’d stop trying to peddle it. However, if they can get enough of society to buy into their lie, they’ll have the votes they need to change the laws so that people like you and me could bear the penalty of their failure. With that strategy, they get to decide when the music stops – as well as where they sit.
I'd like to see a return to sanity. Security is fundamentally a problem of dealing with people acting as humans, not of ciphers and code. Technology tends to only delay the manifestation of malintent, while doing little to address the root cause, or worse yet -- hiding the root cause.
Do you realize how daft and unrealistic your assertion is?
Tell ya what. You get Broadcom, Intel, AMD, Nvidia, etc... to go full transparent, and we'll talk.
And in fact, if your provider is doing ePrescribing, odds are they are supporting a monopoly by Surescripts, which has cornered the market with anti-competitive business practices!
DEA still issues serialized paper prescription pads.
https://www.ftc.gov/news-events/news/press-releases/2019/04/...
Every time an ePrescription goes over the wire, this one weird company based out of Virginia is likely shotgunning your personal info, as collected by PBMs/health insurers, between all parties involved (with the obligatory copy for themselves, probably "anonymized" and repackaged for monetizable exposure to research groups), and its contractual terms require that people in the network not make arrangements with anyone else for the service.
As a common victim of the perniciousness of this arrangement, I'm more than familiar with how this nonsense goes.
The manufacturer then signs the public portion of that TPM key, creating the ability for everyone to assert that said key was generated internal to their hardware (and thus couldn't be used by an emulator).
You yourself could also sign the public portion of the TPM key, or even generate a new one and sign it, but that wouldn't affect the perverse incentive generated by the manufacturer's assertion. It would just enable you to assert that you trust the TPM key is internal to the TPM without trusting the manufacturer's records.
We're dealing with something like the dual of software signing here.
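Roughly, with toy Ed25519 keys from the `cryptography` package standing in for the endorsement certificate chain:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

manufacturer_ca = ed25519.Ed25519PrivateKey.generate()
owner_ca = ed25519.Ed25519PrivateKey.generate()

# Key generated inside the TPM; only the public half ever leaves it.
tpm_key = ed25519.Ed25519PrivateKey.generate()
tpm_pub = tpm_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Manufacturer's endorsement: "this key lives in genuine hardware".
mfr_sig = manufacturer_ca.sign(tpm_pub)
# Owner's endorsement: the same assertion, on the owner's own authority.
owner_sig = owner_ca.sign(tpm_pub)

def endorsed_by(ca_pub, sig: bytes) -> bool:
    try:
        ca_pub.verify(sig, tpm_pub)
        return True
    except InvalidSignature:
        return False

# The perverse incentive is in WHOSE endorsement relying parties demand,
# not in the cryptography itself: both checks below pass.
assert endorsed_by(manufacturer_ca.public_key(), mfr_sig)
assert endorsed_by(owner_ca.public_key(), owner_sig)
```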
Ah, yeah: I was really tired when I wrote that last night and the sentence complexity was brutal ;P. I wrote the letter, but it felt weird to send "out of the blue" as we don't ever actually talk; and I wasn't even sure I could trust that anything was going on at all, but had written this sad sad letter (lol) and I was just like "I shouldn't send this; maybe I should first have a meeting with Kyle about it, and maybe Kyle can decide how to approach you", and then I managed to overthink it so hard that I just gave up because I was dealing with something else (and I even wasn't sure if Cory, who also started to get injected into my overly-complex strategy, would agree with me, which made it seem even more difficult).
> [everything else you said]
<3
Never mind that you can't really put a dollar value on personal information to substantiate damages, or even on personal time spent dealing with the fallout from someone else's negligence, which is one of the fundamental problems with our legal system.
(There's also the elephant in the room that one of the main industries clamoring for ever more "security" still continues to insist that widely-published numbers (ssn/acct/etc) are somehow secret.)
>Cloud and Business products is what, ~2/3rds of Microsoft’s revenue at this point? This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.
Except... yes, it is. When your ur-business case was "do computation on someone else's computer, but ensure the operator cannot have full transparent access to their own computer's operational details", you are in the end casting the first stone. Just because I don't have an LLC, Inc., or other legal fiction tied to my name doesn't mean I'm not bound by the same moral imperatives you claim to be. More importantly, I am not willing to sell everyone else's computational freedom up the river for a pile of quick bucks.
Get your collective heads out of your arses. Get back out in the sun. This nonsense is taking every last bit of potential computing ever had and ripping it out of the hands of the lay consumer, unless they dance the blessed dance of upstream.
You do not know best. You think you do. You've gotten where you are without that which you seek to create, and once created, that which you make can and will never be taken back. It creates and magnifies too much power asymmetry.
My god, have you never really stopped to think through the ethical implications? To really walk down the garden path?
Do you not understand how insane that prospect is?
Only if by "entire point of capitalism", you mean the philosophical paradigm that highly centralizing corporations market to gain more power and ultimately undermine the distributed sine qua non of capitalism.
> Saying otherwise is effectively suggesting that companies be forced to make product in a certain way to accommodate your requests.
You're missing market inefficiency and the development of Schelling points based on the incentive for uniformity. In this case specifically, the inability of a company to investigate what I am running on my computer creates the concept of protocols, and keeps each party on a more even footing. Remote attestation changes that dynamic, undermining the Schelling point of protocols and replacing them with take-it-or-leave-it authoritarianism extending further into our lives.
> Gabriel Sieben is a 20-year-old software developer from St. Paul, MN, who enjoys experimenting with computers and loves to share his various technology-related projects. He owns and runs this blog, and is a traditional Catholic.
Well, isn't that something: someone who writes blog posts proselytizing the importance of individual freedoms while also making sure his readers know he's a "traditional Catholic".
The level of cognitive dissonance is impressive.
If it's just about limiting access, Cloudflare could impose a similar limit on the number of accesses you get to a website via remote attestation. I think once remote attestation becomes more prevalent, it might become useful in the ad business too, e.g. to prevent you from using ad blockers, or similar things.
I do not agree with this person's conclusions on any article they've self-promoted on this site, but their self-confidence is unparalleled.
Why can't there be a "local attestation server" equivalent to Let's Encrypt, e.g. offering the Top 10 most-requested OS configurations which are not being addressed by digital overlords?
Cryptographer priests are scarce, but not numerically capped or fully monopolized by digital overlords.
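The verifier side of such a service wouldn't even need much machinery: publish known-good digests for popular configurations and compare quotes against them. A toy sketch in Python (every name and digest here is made up, and a real verifier would also have to check the quote's signature against an attestation key, which is omitted):

    import hashlib
    import hmac

    # Hypothetical allowlist a community-run attestation service might
    # publish: digests of the expected PCR state per OS configuration.
    # (Placeholder values; a real list would carry actual measurements.)
    KNOWN_GOOD_CONFIGS = {
        "debian-12-secureboot": bytes.fromhex("00" * 32),
        "fedora-39-secureboot": bytes.fromhex("11" * 32),
    }

    def verify_quote(pcr_values: list[bytes], claimed_config: str) -> bool:
        """Check whether a device's quoted PCRs match a published config."""
        digest = hashlib.sha256(b"".join(pcr_values)).digest()
        expected = KNOWN_GOOD_CONFIGS.get(claimed_config)
        return expected is not None and hmac.compare_digest(digest, expected)

The hard part isn't the code; it's getting the overlords' services to accept anyone else's root of trust.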
The basic guiding principle in force since HIPAA in 1996 is that patients, not providers, control access to their medical records regardless of whether those are stored on paper or in an EHR. If the patient authorizes sharing those records with another healthcare organization then the provider can charge a small fee for that service but they can't introduce additional spurious technical requirements on the receiving system.
Also, since a lot of different movie streaming services (e.g. Hulu, Disney+) have launched, a lot of content has moved off of Netflix, leading to a higher piracy rate.
Might be overseen by a neutral group, but it was spawned out of them.
And I'm sorry, but no. Absolutely not. If I have to teach someone to do a damn certificate signing request just to, say, get a kernel tweak done, or (nightmare mode) just to run a self-written hello world, because the powers that be have decided that nothing less than perfect non-repudiation of every binary ever built from now on is acceptable (the logical terminus of "apply cryptography to programming until top-down control is realized")... I'm not even completing the thought. This is a bad, bad, bad, bad, bad idea.
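For the uninitiated, the ritual in question looks roughly like this (a hypothetical sketch using Python's cryptography library; the subject name is made up, and in a real lockdown regime you'd still have to get the resulting CSR blessed by whoever holds the root keys):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate a signing key and a certificate signing request for it.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "my hello world binary"),
        ]))
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())

All of that, before a newcomer's hello world is even allowed to run.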
If people feel they're being hampered too much by Windows' DRM, they'll most likely switch over to some RISC-V processor from some Chinese company. I'll bet you that if that happens, Microsoft will flush Pluton down the toilet before you can say "Clippy." It simply isn't worth it to them. Sure, they want to lock in their customers even more, but as soon as those customers start leaving in droves they'll swiftly move to stem the outflow.
In practice, both a good bike lock and remote attestation raise the bar against attacks significantly, without providing 100% security.
If you are in the US, take a look at the recently approved UCC changes for CERs (controllable electronic records, e.g. blockchains and CBDCs), which will now proceed to US state legislatures, https://www.clearygottlieb.com//news-and-insights/publicatio...
This will not work, because the concerns about US-based services are legal ones: they stem from access requirements imposed by the US government, and they cannot be solved by technical restrictions while still complying with those requirements.
In the past you could use your FOSS client to communicate with your ICQ, AIM, or MSN Messenger-using friends. Today, not using the official client will likely get you banned from the network.
It's not "theirs", they have only been granted a limited-time monopoly on it in order to incentivize the initial creation. If they abuse that monopoly, we (i.e. society) CAN take it away.
If only... Copyright monopolists lobbied governments to the point that they extended the duration of that monopoly to multiple lifetimes. About 5 years of copyright protection is more than enough for creators to make their money back and then some, but for some reason these people saw the need for it to last centuries. Copyright is functionally infinite; we'll all probably die before the works we enjoyed enter the public domain.
In effect, we've all been robbed of our public domain rights. The social contract was: we'll all pretend the creator's works are scarce for a while so they can make money, and then it will enter the public domain. These monopolists aren't really keeping up their end of the bargain so why should we keep ours? The second we stop pretending, they're done.
The US gov can walk into any company and demand everything and anything they want while making it illegal for anyone at that company to say a damn thing to anyone about it. This includes taking over parts of that company's facilities and taking a copy of every last bit of data that goes in and out (see room 641A - they've been doing it for ages).
"secure" enclaves can't save us here because the companies who develop them are subject to the same government who can insist on adding backdoors in their products. Even without explicit support of the companies involved we've already seen side-channel attacks that allow access to the data in enclaves.
As for end to end encrypted messengers, it's reasonable to suspect that once they gain enough popularity they will be compromised in some form or another. Signal, for example, had gotten a lot of attention followed by another huge jump in popularity after WhatsApp changed their privacy policy.
Signal also suddenly started collecting and storing sensitive user data in the cloud; they ignored protests from their users about it, were extremely shady in their communications surrounding the move, and have never updated their privacy policy to reflect their new data-collection practices. Does that mean that Signal has been compromised? In my opinion, probably (refusing to update their privacy policy is a huge dead canary), but even if it hasn't, it absolutely means the government can march in and take whatever they want, including data they'd otherwise have to use a backdoor or an exploit to access.
Lawmakers have been trying to ban or control end-to-end encryption for years (see https://www.forbes.com/sites/zakdoffman/2020/06/24/new-warni... or https://www.eff.org/deeplinks/2020/07/new-earn-it-bill-still... or https://www.cnbc.com/2020/10/12/five-eyes-warn-tech-firms-th...), and while they've so far been kept at bay, eventually they'll succeed in sneaking it past us in one form or another.
For now, it's perhaps better in their view to let us think our communications are more secure than they are. (See https://www.zdnet.com/article/australias-encryption-laws-use... and https://gizmodo.com/the-fbis-fake-encrypted-honeypot-phones-...)