We would benefit from a better public discussion of what "security" encompasses. Else, we risk conflating "what MS wants me to do with my computer" with "preventing hackers from stealing my credit card number".
Imagine a world where you could submit personal information to a company, with the technological assurance that this information would not leave that company... and you could verify this with remote attestation of the software running on that company's servers.
That's a classic "road to hell paved with good intentions". The approaching reality is more like:
Imagine a world where, to be allowed to use the Internet, you are mandated to run certain software, which reports your personal information to a company you are obligated to use, and whose use of that information is absolutely something you do not want.
Yes, the problem is indeed "who's using it". Unfortunately you aren't going to be able to decide either, and it will certainly be used against you.
Ask that question every time you see the word "security" written. There is no such thing as bare security.
- security for who?
- security from who?
- security to what ends?
Much of the time, security is a closed-system, fixed-sum game: my security means your loss of it.
> - security for who?
Riot Games
> - security from who?
The users of their software.
> - security to what ends?
Ensuring that a device (A) is running Windows, (B) is running unmodified Windows system files, and (C) doesn't have a rootkit installed that replaces syscall behavior.
All of this is an effort to prevent cheats that wallhack, aimbot, or otherwise give the player an unfair advantage - or at least to ensure the cheats aren't loaded early enough that their anti-cheat is unable to detect their influence on the game process.
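For illustration only, here's a rough sketch of the kind of check this boils down to. The component names, hashes, and plain-dict "report" are all invented for the example; real attestation uses hardware-signed measurements and a far larger policy:

```python
import hashlib

# Invented for illustration: known-good hashes of the components the publisher
# cares about (real attestation uses hardware-signed, PCR-style measurements,
# not plain strings in a dict).
KNOWN_GOOD = {
    "windows_kernel": hashlib.sha256(b"ntoskrnl build 22621.1").hexdigest(),
    "syscall_table":  hashlib.sha256(b"unmodified syscall dispatch").hexdigest(),
}

def verify_device_report(report: dict) -> bool:
    """Game-server side: refuse matchmaking unless every reported measurement
    matches a known-good value - i.e. (A) it's Windows, (B) system files are
    unmodified, (C) nothing hooked the syscall path before the anti-cheat."""
    for component, expected_hash in KNOWN_GOOD.items():
        if report.get(component) != expected_hash:
            return False
    return True

# A clean device reports the expected measurements...
clean = dict(KNOWN_GOOD)
# ...while a device with an early-loading rootkit reports a different one.
rooted = dict(clean, syscall_table=hashlib.sha256(b"hooked dispatch").hexdigest())

print(verify_device_report(clean))   # True  -> allowed into matchmaking
print(verify_device_report(rooted))  # False -> rejected before the game starts
```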
While I say 'Riot Games' is who benefits, it's all at the request of their users; you can search for 'hacker' or 'cheats' on r/leagueoflegends and see tons of posts from years ago complaining about cheaters scripting (automatically using abilities in the best possible way) and gaining an unfair advantage against them. Every post's comments boil down to "Riot really should figure out how to stop these cheaters". It's a cat-and-mouse game, but it'll be a lot easier to catch the mouse once they can safely enable the remote attestation requirement and only lose 0.1% of their players.
On the less moral side, this can also be applied to single-player games to reduce the chances of a game's anti-piracy protections being cracked.
It's like putting a camera network and automated tranq drones in every playground so kids don't play tag 'wrong'.
Trying to conflate complete submission to a third party with trust or security, when in reality it provides neither because that party is an adversary, is insanity - a society-wide mental illness.
I play some games like Valorant which use Ring 0 anti-cheat mechanisms, and to do this I have a Corsair i300 which I bought basically exclusively for FPS, flight simulators, and other games that I enjoy. I'm actually equally unhappy with corporate-provided Mobile Device Management and "Endpoint Protection" technologies being on personally-owned devices, but one clear solution is to just physically partition your devices by purpose and by what restrictions you're willing to tolerate on them. "But I can't do what I want with the hardware that I own" is a bit of a mischaracterization: you can, you just might not also have the right to participate in some communities (those that have 'entry requirements' which you no longer meet if you won't install their anti-cheat mechanisms).
Why tolerate Riot Games, why not "play games with a community that has accountability"? It's simple for me: in the extremely limited free time that I have for this activity, my objective is to click <PLAY> and quickly get into a game where my opponents are 'well balanced' (matched against my own abilities) and the servers are not infested with cheaters.
Without any question in my mind, cheaters utterly ruin online multiplayer games. Team Fortress 2 has been a haven of bots and cheats for several years, and Valve is only recently starting to take steps to address it.
I have exactly zero desire to spend time "locating communities with accountability". I want a matchmaking system provided by Riot Games which simply doesn't tolerate cheating, period. I'm willing to be in that community even with its 'entry requirements'. You may not be willing to submit to those entry requirements and that's okay. You should advocate that games support your desire to launch without anti-cheat protections, and restrict you to playing on 'Untrusted Servers' outside the first-party matchmaking community, where you will enjoy no anti-cheat protection, and you can gather freely with your own "communities with accountability".
Informed consent requires that the consenter understand what is happening, know what the implications are, and agree. Riot Games' anti-cheat software doesn't pass the first two, and is largely irrelevant to the conversation because this use case is a trojan horse anyway.
Community and social graph is a finite resource. I can't just go get another one if you colonise mine.
This is exactly the same argument libertarians have against food safety and labelling regulations. I can't go get baby formula without melamine in it if every brand has it because they price dumped to bankrupt the competition and I don't have a chemistry lab to test for it.
I can't go find another bank if they all switch to requiring attestation. I can't go buy another government. I can't go find a new social graph if everyone on it is on facebook.
Operating systems and CPUs are utilities with natural monopolies, as is communication software. Treating an ecosystem, a community, and a social graph as a fungible good is a blatant lie.
Earlier this spring, Easy Anti-Cheat crashed the kernel on Windows 11 Insider builds, and a good number of games were unplayable for weeks.
The premise of personal computing is that my computer works as my agent. For any remote party that I'm interacting with, their sphere of influence ends at the demarcation point of the protocol we interact over. Attempts to dictate what software my computer can run when interacting with them are unjust, and ultimately computationally disenfranchising. Despite the naive references littered throughout this thread to users being able to verify what software companies are running, it will never work out that way, because what remote attestation does is magnify existing power relationships. This is why so many people are trying to fall back to the usual crutch of "Exit", as if going somewhere else could possibly tame the power imbalances.
Practically, what will happen is that, for example, online banks (and then web stores, and so on) will demand that you can only use locked-down Apple/Windows to do your online banking. This will progress somewhat evenly across all businesses in a sector, because the number of people not already using proprietary operating systems for their desktop is vanishingly small. That will destroy your ability to use your regular desktop/laptop with your regular uniformly-administered OS, your nice window manager, your browser tweaks to deal with the annoying bits of their site, your automation scripts to make your life easier, etc. Instead you'll be stuck manually driving the proprietary Web TV experience, while they continue to use computers to create endless complexity to decommodify their offerings - computational disenfranchisement.
I'll admit that you might find this argument kind of hollow with respect to games, where you do have a desire to computationally disenfranchise all the other players so it's really a person-on-person game. But applying these niche standards of gaming as a justification for a technology that will warp the entire industry is a terrible idea.
That world already exists, it just doesn't get used much. You can do this with Intel SGX and AMD SEV.
The obvious place for this is blocking cloud providers from accessing personal data. For example, it could be used to resolve concerns about using US based services from Europe, because any data uploaded to such a service can be encrypted such that it's only processed in a certain way (this is what RA does).
RA gets demonized by people making the arguments found in the sibling comment, but they end up throwing the baby out with the bathwater. There are tons of privacy, control and decentralization problems that look intractable until you throw RA in the mix, then suddenly solving them becomes easy. Instead of needing teams of cryptographers to invent ad-hoc, app-specific protocols for every app (which in reality they never do), you write a client that RAs the server to check that it's running software that won't leak your private information as part of the connect sequence.
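As a concrete (and heavily simplified) sketch of that connect sequence: the quote format, the `EXPECTED_MEASUREMENT` constant, and the HMAC standing in for the hardware vendor's signature chain are all assumptions for illustration, not how SGX/SEV actually encode quotes.

```python
import hashlib
import hmac
import secrets

# --- Assumptions for illustration only ---
# Real RA (SGX/SEV) uses a hardware-rooted signature chain and vendor PKI;
# an HMAC with a shared "vendor" key stands in for that here.
VENDOR_KEY = b"hardware-vendor-root-of-trust"
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-server-build-v1.2.3").hexdigest()

def server_produce_quote(nonce: bytes) -> dict:
    """Server side: measure the running code and bind it to the client's nonce."""
    measurement = hashlib.sha256(b"audited-server-build-v1.2.3").hexdigest()
    payload = measurement.encode() + nonce
    signature = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"measurement": measurement, "nonce": nonce, "signature": signature}

def client_verify_quote(quote: dict, nonce: bytes) -> bool:
    """Client side: check freshness, the signature, and that the build is one we trust."""
    if quote["nonce"] != nonce:
        return False  # replayed quote
    payload = quote["measurement"].encode() + nonce
    expected_sig = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, quote["signature"]):
        return False  # not signed by the hardware root of trust
    return quote["measurement"] == EXPECTED_MEASUREMENT  # known-good server build

def connect_and_upload(private_data: bytes) -> None:
    nonce = secrets.token_bytes(16)
    quote = server_produce_quote(nonce)  # in reality: received over the wire
    if not client_verify_quote(quote, nonce):
        raise RuntimeError("attestation failed; refusing to send private data")
    print("attestation ok, uploading", len(private_data), "bytes")

connect_and_upload(b"my personal information")
```

The point of the sketch is only the ordering: the client checks what the server is running before any personal data leaves the machine, rather than trusting a privacy policy after the fact.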
A company, an organization or an individual can have security guards, security procedures, etc. Security can protect the organization from objectively malicious threats, but security can also mean protection from any real or perceived threat to someone's interests.
Security can also protect an organization from the leakage of embarrassing or potentially incriminating information. An authoritarian regime has security to prevent it from being challenged. Security guards at an industry might stop activists from getting to the grounds to gather evidence of harm to the environment or people. Indeed, security staff would stop unauthorized people regardless of those people's intentions.
All of those are examples of security even if other people's legitimate interests were in conflict with it.
Security is for someone, and from someone or something.
Any cheater will probably still do really well against another cheater while a human won’t have a chance. I think this is kind of like shadow banning?
Only if by "entire point of capitalism", you mean the philosophical paradigm that highly centralizing corporations market to gain more power and ultimately undermine the distributed sine qua non of capitalism.
> Saying otherwise is effectively suggesting that companies be forced to make product in a certain way to accommodate your requests.
You're missing market inefficiency and the development of Schelling points based on the incentive for uniformity. In this case specifically, the inability of a company to investigate what I am running on my computer creates the concept of protocols, and keeps each party on a more even footing. Remote attestation changes that dynamic, undermining the Schelling point of protocols and replacing them with take-it-or-leave-it authoritarianism extending further into our lives.
This will not work, because the concerns about US-based services are legal ones stemming from the US government's access requirements, which cannot be solved by technical restrictions while still complying with those requirements.
The US gov can walk into any company and demand everything and anything they want while making it illegal for anyone at that company to say a damn thing to anyone about it. This includes taking over parts of that company's facilities and taking a copy of every last bit of data that goes in and out (see room 641A - they've been doing it for ages).
"secure" enclaves can't save us here because the companies who develop them are subject to the same government who can insist on adding backdoors in their products. Even without explicit support of the companies involved we've already seen side-channel attacks that allow access to the data in enclaves.
As for end to end encrypted messengers, it's reasonable to suspect that once they gain enough popularity they will be compromised in some form or another. Signal, for example, had gotten a lot of attention followed by another huge jump in popularity after WhatsApp changed their privacy policy.
Signal also suddenly started collecting and storing sensitive user data in the cloud, they ignored protests from their users about it, were extremely shady in their communications surrounding that move, and have never updated their privacy policy to reflect their new data collection practices. Does that mean that Signal has been compromised? In my opinion, probably (refusing to update their privacy policy is a huge dead canary), but even if it hasn't it absolutely means the government can march in and take whatever they want including data they'd have to use a backdoor or an exploit to access.
Lawmakers have been trying to ban or control end-to-end encryption for years (see https://www.forbes.com/sites/zakdoffman/2020/06/24/new-warni... or https://www.eff.org/deeplinks/2020/07/new-earn-it-bill-still... or https://www.cnbc.com/2020/10/12/five-eyes-warn-tech-firms-th...), and while they've so far been kept at bay, eventually they'll succeed in sneaking it past us in one form or another.
For now, it's perhaps better in their view to let us think our communications are more secure than they are. (See https://www.zdnet.com/article/australias-encryption-laws-use... and https://gizmodo.com/the-fbis-fake-encrypted-honeypot-phones-...)