1. The attacker manufactures a device, such as a smartphone; generates a keypair for it; stores the private key in an HSM on the device (generally marketed as a "secure enclave"); and signs the public key of the keypair with a master key
2. The device runs the attacker's software and is designed so that whenever non-attacker software is run with elevated privileges, the HSM is informed of that fact in a way that can't be reset without rebooting (and thus starting again with the attacker's software). For instance, the device might use a verified-boot scheme that reports the key the OS is signed with to the HSM, unchangeable until reboot, and it might employ hardening such as having the CPU encrypt RAM or apply an HMAC to it
3. The HSM produces signatures over messages stating that the device is running the attacker's software, plus whatever else the attacker's software wants to communicate; it refuses to produce them if the device is running software of the user's choice, as established above. It also includes the master key's signature over its own public key, allowing accomplices to check that the device is indeed not under the user's control, but rather under the control of someone they trust to effectively limit the user's freedom
4. Optionally, that attestation is passed through the attacker's servers, which check it and return another attestation signed by themselves, which anonymizes the device and lets the attacker apply arbitrary additional criteria
5. Conniving third parties can thus use this scheme to ensure that they are interacting with a device running the attacker's software, and thus that the device is restricting the user's behavior as the attacker specifies. For instance, they can ensure that the device is running the accomplice's code unmodified, preventing the user from running software of their choice, and that the user is using the device as the attacker and their accomplices intend.
This attack is already being run against Android smartphone users (orchestrated by Google, in the form of SafetyNet and the Play Integrity API) and against iOS smartphone users (orchestrated by Apple), and this proposal extends the attack to the web.
And this "attacker" gets... what? Nothing. Because this isn't an attacker... it's a device manufacturer. You've described how attestation works except you've described the TPM as an attacker, which is silly.
They sell the attack to business partners like Netflix and Spotify.
Effectively, they are selling the end users' liberty (ability to run arbitrary software, including for example, a cracked ad-free version of the Spotify app) to those business partners.
In sales-speak, this is framed as "effective Digital Rights Management", with "Rights" meaning "copyright enforcement". Critically, DRM is not a viable methodology until you provide it this attack surface.
It's also worth noting that YouTube is one of those business partners, and both Android and YouTube are owned by the same corporation: Alphabet.
Relative to their current position of already owning the hardware?
> They sell the attack to business partners like Netflix and Spotify.
I don't see how they're "selling" anything. Web Integrity requires no money to change hands. If implemented, Netflix + Spotify would owe Google nothing.
DRM is the tool that guarantees money will change hands. Without it, there is nothing but a social (legal) threat to prevent people copying and distributing copyrighted content for free.
Forcing users to run the DRM-infected version of an app creates an incentive for Netflix and Spotify to participate on the Android platform; which in turn strengthens Android's position, and the Google Play Store as a market.
This incentive goes both ways for YouTube, because it is owned by Alphabet.
> If implemented, Netflix + Spotify would owe Google nothing.
Yes, but that's not the point. Google wants Netflix and Spotify to have Android apps. Netflix and Spotify want DRM infecting their apps. Without this system in place, users can disinfect the Spotify app, and listen to music without paying Spotify money (or watching ads to pay them indirectly).
Without providing the environment for functional DRM, Netflix and Spotify can simply refuse to make Android apps. That would be a pretty weak threat, except that YouTube wants the same thing; and that incentivizes Android to play ball.