> How does this affect browser modifications and extensions?
> Web Environment Integrity attests the legitimacy of the underlying hardware and software stack, it does not restrict the indicated application’s functionality: E.g. if the browser allows extensions, the user may use extensions; if a browser is modified, the modified browser can still request Web Environment Integrity attestation.
Then what's the point? I can make a modified bot browser that commits ad fraud, as long as I don't use a rooted Android phone?
I don't believe they're being honest about how this will be used. We need to legally regulate remote attestation.
> As new browsers are introduced, they would need to demonstrate to attesters (a relatively small group) that they pass the bar, but they wouldn't need to convince all the websites in the world.
It speaks for itself. Horrid.
Another use case is multi-party computation. Three people wish to compare some values without the risk that anyone sees the combined data. TC can do this with tractable compute overhead, unlike purely cryptographic techniques.
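To make that concrete, here's a minimal sketch of what the enclave-side half of such a comparison could look like. The names are made up, and I'm assuming the three parties have already verified the enclave's attestation and submit their values over encrypted channels; only the one-line result ever leaves the enclave.

```python
# Hypothetical enclave-side logic for a three-party comparison.
# Assumes attestation and encrypted channels are already handled;
# the raw inputs are never exposed outside this function.
from dataclasses import dataclass


@dataclass
class Submission:
    party: str   # identifier of the submitting party
    value: int   # private value, e.g. a salary or a bid


def compare_inside_enclave(subs: list[Submission]) -> str:
    """Return only which party holds the largest value."""
    if len(subs) != 3:
        raise ValueError("expected exactly three parties")
    winner = max(subs, key=lambda s: s.value)
    # The individual values are dropped here; callers outside the
    # enclave only ever see the single-line result below.
    return f"{winner.party} has the largest value"


if __name__ == "__main__":
    result = compare_inside_enclave([
        Submission("alice", 72_000),
        Submission("bob", 81_000),
        Submission("carol", 65_000),
    ])
    print(result)  # -> "bob has the largest value"
```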
Observe what this means for P2P applications. A major difficulty in building them is that peers can't trust each other, so you have to rely on complex and unintuitive algorithms (e.g. blockchains), duplication of work (e.g. SETI@Home), or benevolent dictators (e.g. Tor) to try to stop cheating. With TC, peers can attest to each other and form a network with known behavior, meaning devs can add features rather than spend all their time designing around complicated attacks.
These uses require that you have a computer you do trust, one which can audit the remote server before uploading data to it. But you can compile and/or run that auditing program on your own laptop or smartphone; the verification process is easy.
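Roughly, that audit boils down to: hash the code you inspected yourself, then check that the other machine's attestation quote reports the same measurement under a valid attester signature. A toy sketch, with hypothetical names and an HMAC standing in for the real vendor certificate chain (a real verifier would also check a freshness nonce):

```python
# Client-side check before uploading: does the remote machine prove it
# runs exactly the code I audited? hmac stands in for the hardware
# vendor's signature chain, which in reality is X.509/ECDSA, not a
# shared key.
import hashlib
import hmac

VENDOR_KEY = b"stand-in for the attester's verification key"  # hypothetical


def measure(program_source: bytes) -> str:
    """Hash of the code you audited and expect the remote side to run."""
    return hashlib.sha256(program_source).hexdigest()


def quote_is_valid(quote: dict, expected_measurement: str) -> bool:
    mac = hmac.new(VENDOR_KEY, quote["measurement"].encode(), hashlib.sha256)
    signature_ok = hmac.compare_digest(mac.hexdigest(), quote["signature"])
    return signature_ok and quote["measurement"] == expected_measurement


# Usage: only upload once the remote machine attests to the audited code.
audited_source = b"print('server that only returns aggregate results')"
expected = measure(audited_source)
quote = {
    "measurement": expected,
    "signature": hmac.new(VENDOR_KEY, expected.encode(), hashlib.sha256).hexdigest(),
}
assert quote_is_valid(quote, expected)
```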
But exactly because TC is general, it doesn't distinguish based on who owns the machine. It doesn't see your PC as morally superior to a remote server; they're all just computers. So yes, in theory a remote server could demand you run something locally and then do a HW remote attestation for it. In practice, though, this never happens anymore outside of games consoles (as far as I'm aware), because most consumer devices don't have the right hardware for it, and even if they did, you can't do much hardware interaction inside attested code.
The term "trusted computing" is often used in a way that suggests the user can trust something (his own computer). But it is the other way around: a service provider can trust a machine that the user bought to not do what the user wants. The name is misleading at best.
1. Clients verifying cloud VMs. The "user" in this case is not the same person as "his" in "his computer".
2. Games consoles being verified by PS/Xbox online services. In this case the users are in effect verifying each other, because part of why they need such tough security is to stop online gaming being wrecked by cheaters.
At a stretch you could talk about credit card chips and the ATM network as (3) but that's far enough away from general computing that it doesn't really count.
Both of these are firmly pro-consumer use cases. Unless you want to pirate or cheat in games, of course, but there are plenty of users who don't want to subsidize your fun with their own suffering, so they're happy to rely on TC to stop that. It's a big part of why console gaming dominates PC gaming. It's wrong for the FSF to imply all those people are brainwashed dupes because they have a different value system from Richard Stallman's.