1. The attacker manufactures a device, such as a smartphone, generates a keypair for it, stores it in an HSM on the device (commonly called a "secure enclave"), and signs the keypair's public key with a master key
2. The device runs the attacker's software and is designed so that whenever non-attacker software is run with elevated privileges, the HSM is informed of that fact in a way that can't be reset without rebooting (and starting again with the attacker's software). For instance, the device might use a verified-boot scheme that sends the key the OS image is signed with to the HSM, where it can't be changed until reboot, and it might employ hardening such as having the CPU encrypt RAM or apply an HMAC to it
3. The HSM produces signatures over messages stating that the device is running the attacker's software, plus whatever else the attacker's software wants to communicate; it won't produce them if the device is running software of the user's choice, as established above. The attestation also includes the master keypair's signature over the HSM's public key, allowing accomplices to check that the device is indeed not under the user's control, but under the control of someone they trust to effectively limit the user's freedom
4. Optionally, that attestation is passed through the attacker's servers, which check it and return another attestation signed by themselves, allowing the attacker to anonymize the device and apply arbitrary other criteria
5. Conniving third parties can thus use this scheme to verify that they are interacting with a device running the attacker's software, and therefore that the device restricts the user's behavior as the attacker specifies. For instance, an accomplice can verify that the device is running its code unmodified, preventing the user from running software of their choice, and that the user is using the device as desired by the attacker and their accomplices. (A toy sketch of this chain follows below.)
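Here's a toy model of steps 1-5 in Python, with step 4's server relay omitted. Everything in it is made up for illustration; real schemes (TPM quotes, Android Key Attestation) use X.509 certificate chains and richer message formats, but the trust structure is the same:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Step 1: at the factory, generate a per-device keypair and endorse its
# public key with the master key.
master_key = Ed25519PrivateKey.generate()
device_key = Ed25519PrivateKey.generate()       # lives inside the HSM
device_pub = device_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
endorsement = master_key.sign(device_pub)       # "this key sits in one of our HSMs"

# Step 2: at boot, the HSM records which key the OS image was signed with,
# in a register that only changes by rebooting.
boot_measurement = "vendor-os-signing-key"      # a user-chosen OS would measure differently

# Step 3: the HSM only signs attestations over what it actually measured.
def hsm_attest(nonce: bytes, claims: dict) -> dict:
    payload = json.dumps({"measurement": boot_measurement,
                          "nonce": nonce.hex(),
                          "claims": claims}).encode()
    return {"payload": payload, "sig": device_key.sign(payload),
            "device_pub": device_pub, "endorsement": endorsement}

# Step 5: a relying party checks the whole chain before serving the device.
def relying_party_accepts(att: dict, nonce: bytes, master_pub) -> bool:
    try:
        master_pub.verify(att["endorsement"], att["device_pub"])  # genuine HSM?
        Ed25519PublicKey.from_public_bytes(att["device_pub"]).verify(
            att["sig"], att["payload"])                           # untampered?
    except InvalidSignature:
        return False
    body = json.loads(att["payload"])
    return (body["nonce"] == nonce.hex()                  # fresh, not replayed
            and body["measurement"] == "vendor-os-signing-key")   # vendor OS only

nonce = b"server-chosen-randomness"
assert relying_party_accepts(hsm_attest(nonce, {"app": "vendor-client"}),
                             nonce, master_key.public_key())
```

The sting is in the last check of relying_party_accepts: the signature chain lets a remote party refuse service to any device whose boot measurement isn't the vendor's own signing key, i.e. to any device the user actually controls.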
This attack is already being run against Android smartphone users (orchestrated by Google, in the form of SafetyNet and the Play Integrity API) and against iOS smartphone users (orchestrated by Apple), and this proposal extends the attack to the web.
This Web Integrity API is just a means for Google to cement itself as an obligatory man-in-the-middle, as opposed to an optional one.
I'm not a tinfoil-hat type, but security can't hang its hat on the kindness of strangers.
Highly abstract risks just don't seem to register for most people. It was hard enough to get the masses to act in self-interest over an existential risk to their health (COVID).
I reckon the way to avoid maximum damage from this proposal will be some sort of inoculation, e.g. safe, trusted, easy-to-use tools that help people work around it. The political angle of attack is worth trying, but I think it will fail.
I wish Mozilla worked that angle too, e.g. by supporting LineageOS and microG.
6. Any additional party with sufficient ability to modify the hardware can still attack the attacker and their accomplices, so such parties only benefit from this, at the cost of typical users.
And this "attacker" gets... what? Nothing. Because this isn't an attacker... it's a device manufacturer. You've described how attestation works except you've described the TPM as an attacker, which is silly.
Given that SSO is a massive security win and has been a game changer for removing passwords, I think it's been shown that delegation is extremely effective.
I remember in the long, long ago, when I actually visited a BUILDING to do some of my banking tasks. And when I bought physical media that took up actual 3D space in my house to watch movies. I suspect we aren't incapable of going back.
It's just that this description presents as an "attack" what is simply how attestation works. If you have a problem with attestation, talk about that problem; calling it "an attack" does nothing.
I'm actually against the proposal, too - although I see the merits. The ability to have servers authenticate clients based on the context of that client is amazing - it would seriously improve security if done right. But I personally believe that this should be done through the Device Policy extension exclusively, as it is already done there today, and that the extension should be opened and standardized.
In fact, I believe Google should be forced to do so.
It sure is not. But I do believe we should have a legal right to own our own hardware, in every sense.
1. Instead of needing 100 passwords, which increases the chance of users just choosing one and reusing it, you have 1 password.
2. Similarly, instead of needing 2FA on 100 sites, users can just have 2FA on their SSO provider. In fact, the other sites don't even need to support 2FA; you get that "for free" with SSO.
3. SSO providers implement auth really well. They make it smooth, as in "I don't have to reauth when it's obviously me", and safe, as in "that might not be a valid auth, let's get them to 2FA again".
Of course, if you have a password manager then (1) is not a problem. But SSO is a lot simpler for users. (A toy sketch of the delegation is below.)
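To make the delegation concrete, here's a toy sketch; the names and message format are made up (real SSO uses signed ID tokens via OIDC/SAML, with the same structure). A hundred sites store no passwords at all; each one just verifies an assertion signed by the one identity provider that did the password + 2FA check:

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

idp_key = Ed25519PrivateKey.generate()          # held only by the SSO provider

def idp_issue(user: str, audience: str):
    # The IdP signs this only after its own password + 2FA check succeeds.
    claims = json.dumps({"sub": user, "aud": audience,
                         "exp": time.time() + 300}).encode()
    return claims, idp_key.sign(claims)

def site_accepts(claims: bytes, sig: bytes, audience: str) -> bool:
    # No password database on the site: just one public key to trust.
    idp_key.public_key().verify(sig, claims)    # raises InvalidSignature if forged
    body = json.loads(claims)
    return body["aud"] == audience and body["exp"] > time.time()

claims, sig = idp_issue("alice", "payroll.example")
assert site_accepts(claims, sig, "payroll.example")
```

The audience and expiry checks are what keep a token issued for one site from being replayed against another; everything else about per-site auth is delegated to the provider.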
a) SSO has no financial cost. Hardware keys do.
b) SSO has been implemented and standardized for years and is trivial for sites to support; hardware keys are much newer and are still rarely supported for authentication.
c) You can use hardware keys with SSO, which I'd recommend, and now you've gotten the benefits of both.
* I can actually run Google Pay because the original SafetyNet API was software-backed, so I can spoof a signature from an old device that didn't support hardware attestation. In particular, my Pixel 4a claims to be a Nexus 5 so that Google's servers don't expect a hardware signature. But I'm sure the clock is ticking until these apps (or Google globally) stop considering software-backed validation acceptable. I'm quite sure this Web Integrity API will be hardware-backed from the start.
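For what it's worth, the downgrade being exploited looks something like this (hypothetical verifier logic, not Google's actual code): if the server keys its expectations off a client-reported device model, then claiming to be a pre-attestation model routes you to the forgeable software check.

```python
# Models that shipped before hardware-backed attestation existed get the
# weaker check, because the server can't demand what the hardware lacks.
SOFTWARE_ONLY_MODELS = {"Nexus 5"}

def server_accepts(reported_model: str, has_hw_signature: bool,
                   sw_signature_ok: bool) -> bool:
    if reported_model in SOFTWARE_ONLY_MODELS:
        return sw_signature_ok      # forgeable: the key isn't locked in hardware
    return has_hw_signature         # unforgeable without the device's HSM

# A Pixel 4a running user-chosen software just reports the old model:
assert server_accepts("Nexus 5", has_hw_signature=False, sw_signature_ok=True)
```

The fix from the attacker's side is obvious, which is why the clock is ticking: drop the software-only allowance entirely, stranding old hardware along with free software.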
We don't have to be OK with it, but it seems inevitable that everything is just going to shit. Starting with smartphones. That's why my current smartphone will be my last one. The cost/benefit of them is no longer favorable.
I think that the web itself will be the next casualty.
They sell the attack to business partners like Netflix and Spotify.
Effectively, they are selling the end users' liberty (the ability to run arbitrary software, including, for example, a cracked ad-free version of the Spotify app) to those business partners.
In sales-speak, this is framed as "effective Digital Rights Management", with "Rights" meaning "copyright enforcement". Critically, DRM is not a viable methodology until you provide it with this attack surface.
It's also worth noting that YouTube is one of those business partners, and both Android and YouTube are owned by the same corporation: Alphabet.
Relative to their current position of already owning the hardware?
> They sell the attack to business partners like Netflix and Spotify.
I don't see how they're "selling" anything. Web Integrity requires no money to change hands. If implemented, Netflix + Spotify would owe Google nothing.
Yes, in terms of buildings. But I see as many Redbox kiosks around as ever.
DRM is the tool that guarantees money will change hands. Without it, there is nothing but a social (legal) threat to prevent people from copying and distributing copyrighted content for free.
Forcing users to run the DRM-infected version of an app creates an incentive for Netflix and Spotify to participate on the Android platform, which in turn strengthens Android's position and the Google Play Store as a market.
This incentive goes both ways for YouTube, because it is owned by Alphabet.
> If implemented, Netflix + Spotify would owe Google nothing.
Yes, but that's not the point. Google wants Netflix and Spotify to have Android apps. Netflix and Spotify want DRM infecting their apps. Without this system in place, users can disinfect the Spotify app, and listen to music without paying Spotify money (or watching ads to pay them indirectly).
If Android doesn't provide the environment for functional DRM, Netflix and Spotify can simply refuse to make Android apps. That would be a pretty weak threat, except that YouTube wants the same thing, and that incentivizes Android to play ball.
Those apps already exist. Don't you think that kind of undermines your entire point?