There is no use case about these technologies being used by a dystopian country. No use case about enabling anti-competitive practices by incumbent companies. Seemingly little to no care or attempt to weigh the longer-term strategic impacts of these technologies on society, such as loss of innovation or greater fragility due to increased centralisation/monopolisation of technology. No cost-benefit analysis or historical analysis of identified threat actors' likelihood of compromising TPMs and attested operating systems to circumvent these technologies (there's no shortage of Widevine L1 content out there on the Internet). No environmental impact consideration for blacklisting devices and having them all thrown into a rubbish tip too early in their lifespan. No political/sovereignty consideration of whether people around the world will accept a handful of American technology companies being in control of everything, and whether that would push the rest of the world to abandon American technology.
The majority of contributors to these projects appear to be employees of large technology companies, seemingly without experience outside of this bubble. Discussions within the group at times even acknowledge this naivety. The group appears very hasty to propose the most drastic, impractical technical security controls with significant negative impacts, such as whitelisting device hardware and software. But in the real world, e.g. banking fraud, attacks typically occur through social engineering, where the group's proposed technical controls wouldn't help. There appears to be little to no attempt to consider more effective real-world security controls with fewer negative impacts, such as delaying transactions and notifying users through multiple channels, giving them a chance to validate a transaction or "cool off".
[1] https://github.com/antifraudcg/use-cases/blob/main/USE-CASES...
[2] https://owasp.org/www-project-automated-threats-to-web-appli...
> Some examples of scenarios where users depend on client trust include:
> Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.
So it's essentially Google further entrenching its tentacles in web standards in the most invasive way, with no regard for privacy or user control. It's a shame what the W3C has degenerated into.
[1] https://github.com/RupertBenWiser/Web-Environment-Integrity/...
It's Web 2.0: the user is the product.