AKA corporations insist on control & want to make sure users are powerless when using the site. And Chrome is absolutely here to help the megacorps radically progress the War On General Purpose Computing and make sure users are safe & securely tied to environments where they are powerless.
There's notably absolutely no discussion or mention of what kind of checks an attestation authority might perform, other than "maybe Google Play might attest for the environment" as a throwaway abstract example with no details. Any browser could do whatever it wants with this spec, go as far as it wants and say, yes, this is a pristine development environment. If you open DevTools, Google will probably fail you.
It appalls me to imagine how much time & mind-warping it must have taken to concoct such a banal "user motivation" statement as this. This is by far the lowest & most sold-out, passed-over bullshit I have ever seen from Chrome, who generally I really do trust to be doing good & who I look forward to hearing more from.
You know, to ensure the 'integrity' of the 'web environment'.
https://www.bleepingcomputer.com/news/security/451-pypi-pack...
This is already what is happening with SafetyNet on Android. For now, most applications don't require hardware attestation, so you can pass by spoofing an old device that didn't support it, but I'm sure that will change within a decade.
This is the one I'd be worried about. Thought it was annoying to not be able to use banking apps on a rooted Android? Think about how annoying it will be when you can't do much of anything, even on the Web, unless it's from a sealed, signed Apple/Google/Microsoft image-based OS...
I realize that, the way Firefox's user share is going, it might not matter, or they might feel they don't have a choice, but I really, really hope Mozilla doesn't even remotely consider implementing this.
These are megacorporations, and you aren't the client. They aren't making Chrome "for you". They are optimizing for advertisers.
Being able to trust the security of a client can protect against many attacks, and it is up to web sites to evaluate what to do with the information that a client is proven to be secure.
It's morbidly amusing to see the browser referred to as a "user agent" here.
> A user agent is a computer program representing a person, for example, a browser in a Web context.
https://developer.mozilla.org/docs/Glossary/User_agent
> Examples include all common web browsers, such as Google Chrome, Mozilla Firefox, and Safari
Look, it isn't that bad, but enough to make me do it. It's obnoxious.
Wouldn't it be great if you never had to deal with another captcha?
SafetyNet means the app checks to make sure you're not rooted or running a custom ROM, because those are considered a security risk. If you are not running a locked-down OEM ROM, you can't run many apps, including banking apps.
Microsoft's Pluton on-CPU attestation technology means this is coming to PCs.
- What is the least expensive device that can be certified like that? The least expensive process?
- What is the highest level of openness such a device can offer to the user, and why?
To my mind, it would be best to have the option of a completely locked-down and certified hardware token, a device like a Yubikey, that could talk to my laptop, desktop, phone, or any other computing device using a standard protocol. As long as it's unforgeable, the rest of the system can be much, much less secure without compromising the overall security.
Keep it powered down when not needed for extra security.
Ideally, it could be smaller than a smartphone and use the smartphone's or laptop's hardware for UI and networking.
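The core mechanism would just be challenge-response against a key that never leaves the token. Here's a minimal sketch of that idea, assuming the Python `cryptography` package; the names are illustrative and this is not any real Yubikey or FIDO protocol:

```python
# Hypothetical sketch: the relying party sends a fresh challenge, the sealed
# token signs it, and the host OS only relays bytes. Not a real token API.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Inside the sealed token: the private key never leaves the device.
token_key = ed25519.Ed25519PrivateKey.generate()
enrolled_public_key = token_key.public_key()  # shared with the service at enrollment

def relying_party_challenge() -> bytes:
    """Service side: issue a fresh, unpredictable challenge."""
    return os.urandom(32)

def token_sign(challenge: bytes) -> bytes:
    """Token side: sign the challenge inside the sealed device."""
    return token_key.sign(challenge)

def relying_party_verify(challenge: bytes, signature: bytes) -> bool:
    """Service side: an untrusted host can relay this but can't forge it."""
    try:
        enrolled_public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = relying_party_challenge()
assert relying_party_verify(challenge, token_sign(challenge))
```

Because the host only shuttles the challenge and signature back and forth, its own (in)security doesn't let it forge the response, which is the whole point of keeping the token sealed.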
"Don't be evil" has really turned into "Google is evil"
Guess what, it wasn't free and now it's time to pay up.
You have to remember: from their point of view, they are writing the web software, and when a user agent is non-compliant, it gets in their way. UAs with weird quirks translate to impossible-to-reproduce bugs, so the default bias is in favor of standardization and regularity.
As time goes on, hand-waving the matter away as the "user's responsibility" is becoming a less and less acceptable answer. Hard assurances are being demanded, and applied technologies are progressively patching the existing loopholes.
>means the app checks to make sure you're not rooted or running a custom ROM
The purpose is to be able to tell whether the user is running a version of the app that came from the Play Store, and whether the device's integrity is compromised, meaning the app can't rely on the security guarantees the OS provides. Banking apps are not against people using custom ROMs; they just want to ensure they are running on a secure operating system.
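On the service's backend, the decision roughly reduces to checking the verdict the attester returns. A hedged Python sketch follows; the verdict shape loosely follows Google's Play Integrity responses, but treat the field names as illustrative, and a real integration would also have to verify the token's signature and nonce:

```python
# Illustrative backend check of a decoded integrity verdict (field names
# modeled loosely on Play Integrity; signature/nonce verification omitted).

def is_trusted_client(verdict: dict, expected_package: str) -> bool:
    device_ok = "MEETS_DEVICE_INTEGRITY" in verdict.get(
        "deviceIntegrity", {}
    ).get("deviceRecognitionVerdict", [])
    app = verdict.get("appIntegrity", {})
    app_ok = (
        app.get("appRecognitionVerdict") == "PLAY_RECOGNIZED"
        and app.get("packageName") == expected_package
    )
    return device_ok and app_ok

example_verdict = {
    "appIntegrity": {
        "appRecognitionVerdict": "PLAY_RECOGNIZED",
        "packageName": "com.example.bank",
    },
    "deviceIntegrity": {"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]},
}
print(is_trusted_client(example_verdict, "com.example.bank"))  # True
```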
Gluttony, greed, envy, and arrogance. This is truly sickening.
Online fraud and theft are exploding right now, and the average person is simply not capable of securing a laptop, so the companies have decided to only allow secure access through a phone, which can usually be trusted to be malware-free.
Apologies for the simple question, but wouldn't forks of popular browsers crop up without this attestation API implemented? Or is it a thing where websites themselves would potentially refuse traffic from browsers that didn't support it?
"This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure."
The smoking gun is "intellectual property". In a conventional browsing session the website has no idea what the human user is going to do with copyright-protected information published on the website. Hence, it assumes good intent and grants open access.
In the case of an AI scraper, assuming you detect it reliably, the opposite is true. Bad intent is assumed as the very point of most AI scrapers is to harvest your content with zero regard for permission, copyright or compensation.
To make this work, Google outsources the legal liability of distinguishing between a human and a bot to an "attester", which might be Cloudflare. Whatever Cloudflare's practice is to make this call will of course never be transparent, but surely must involve fingerprinting and historical record keeping of your behavior.
You won't have a choice and nobody is liable. Clever!
Not to mention the extra new avenue created for false positives where you randomly lose all your shit and access, and nobody will explain why. Or, a new authoritarian layer that can be used for political purposes to shut down a digital life entirely.
All of this coming from Google, the scraping company.
I have a much simpler solution: it should be illegal to train AI on copyrighted content without permission from the copyright holder. Training AI is not the same thing as consuming information, it's a radically new use case.
I don't know. I haven't personally gone through the process.
>What is the highest level of openness such a device can offer to the user, and why?
You have to follow the CDD. https://source.android.com/docs/compatibility/13/android-13-...
and you of course must pass the compatibility tests. So it can be as open as you would like, as long as you do not break the Android security model.
>it would be best to have an option of a completely locked down and certified hardware token, a device like a Yubikey
That approach is limiting, since secrets can't be passed to the host operating system, and computation with secrets has to happen on the secure device.
As long as Windows users are allowed to remain as out of date on patches as they are, and depending on what the browser uses as its attestation "source", I don't see how the browser and website can ever meaningfully establish the validity of the statement "the client is trusted to be malware free".
I run a custom build of Firefox, on a (somewhat, still-ish) niche Linux OS, with the kernel and bootloader signed by my own signing keys. What could I attest with, that will make some banking website perceive me as a trustworthy client?
The second this becomes widely available, it won't mean "bypass captchas" - it will mean "can't bank unless you use up-to-date Android or latest iOS".
It's too hard for even someone who is highly knowledgeable to know if they have malware, let alone the average person.
There is no use case about these technologies being used by a dystopian country. No use case about enabling anti-competitive practices by incumbent companies. Seemingly little to no care or attempt to balance the longer-term strategic impacts of these technologies on society, such as loss of innovation or greater fragility due to increased centralisation/monopolisation of technology. No cost-benefit or historical analysis of how likely the identified threat actors are to compromise TPMs and attested operating systems to get around these technologies (there's no shortage of Widevine L1 content out there on the Internet). No environmental-impact consideration for blacklisting devices and having them all thrown into a rubbish tip too early in their lifespan. No political/sovereignty consideration of whether people around the world will accept a handful of American technology companies being in control of everything, and whether that would push the rest of the world to abandon American technology.
The majority of the contributors to these projects appear to be tech employees of large technology companies, seemingly without experience outside of this bubble. Discussions within the group at times self-identify this naivety. The group appears very hasty to propose the most drastic, impractical technical security controls with significant negative impacts, such as whitelisting device hardware and software. But in the real world of, e.g., banking fraud, attacks typically occur through social engineering, where the group's proposed technical controls wouldn't help. There appears to be little to no attempt to consider more effective real-world security controls with fewer negative impacts, such as delaying transactions and notifying users through multiple channels to ensure users have had a chance to validate a transaction or "cool off".
[1] https://github.com/antifraudcg/use-cases/blob/main/USE-CASES...
[2] https://owasp.org/www-project-automated-threats-to-web-appli...
Generally I am pro Project Fugu & pro building a bigger, better web. Google spends an enormous amount of effort working on specs with the W3C, WICG, and other browser implementers, advancing incredibly good & useful causes. They spend huge effort enhancing DevTools so everyone can work on the web.
Building a good & capable web is necessary for Google to survive. An open & capable web is the only sustainable, viable alternative the world has seen to closed proprietary systems, which history shows carry far more risks & hazards and entail pernicious behaviors.
Generally, Google's efforts to make the web a good, viable & healthy platform align with my vision. That they want to do good things & make a great connected World Wide Web because the web's thriving helps them run their advertising business typically does not create a big conflict for me. I'm usually happy with the patronage the web receives & I dread it ever drying up, and it saddens me that people are so monofocused, so selective in focusing only on the bad; I think that perception hurts us all.
What my experience has taught me is that you have these 80% things that are good, but there is the one person or thing that ruins it for everyone. One person, one manager or CEO who pushes something through because he wants some gain, or one selfish move that is born out of short term profit or thinking.
From climate change, to wars, to ill-willed software, history sometimes gets bent by bad decisions stemming from a comparatively small but powerful group who wield too much power. Google is, for all intents and purposes, a monopoly, which makes all their decisions at least suspect, since they aren't competing on the same level as a Mozilla or any other search engine. This is bad for any ecosystem.
I wish I were still seeing the early Google that was optimistic, people-focused, and approachable, but that time is at least some years in the past. There are probably still good people working at Google with that ethos, but it gets overshadowed by those nagging decisions that are suspect.
Even in this very thread there are people saying this is not so bad because “it will help prevent fraud”
lmao.
do they realize that you can use a custom certificate / patch the check routines? I don't think they quite realize what they are even suggesting.
> Some examples of scenarios where users depend on client trust include:
> Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.
So it's essentially Google further entrenching its tentacles in web standards in the most invasive ways with no regards towards privacy and user control. It's a shame what the W3C has degenerated into.
[1] https://github.com/RupertBenWiser/Web-Environment-Integrity/...
AKA as long as you don't give control to the user.
I don't want to have to agree to Microsoft or Apple's ToS so that I can access my bank.
I do not look forward to trying to find a bank that doesn't require this of me because all of the major banks have jumped on board.
A system being secure doesn't mean that the user doesn't have control. The operating system should allow the user to control it, but only in a secure way that doesn't compromise the rest of the system's security. The Windows approach of giving the user an administrator account, or the Linux approach of giving the user a root account, has proven over time to be worse for security. Windows has been trying to roll back this mistake, but most Linux distributions don't do anything, because they don't care that much about security compared to an operating system like Android.
I wanted to extract some data files from an app I was using, and Google's Android told me that I was not allowed to do that. That was the app's data, not my data.
It doesn't really matter whether it's root or fine-grained permissions. The fact is that on stock Pixel phones the user can't access whatever data they want, so in practice they don't have control.
Usually banks don't let you disable antifraud protections. They prefer to make their business and the banking system more secure by reducing the rate of fraud. Fraud is expensive for them to deal with so it doesn't really make financial sense to let customers say that they are okay with having more fraud happen using their account.
It has to stop somewhere. 100% security may reduce the banks' fraud costs, but it isn't acceptable for personal freedom. "Choose a different bank then" only works until they all adopt it.
So the server is wildly insecure and wants to make it my problem.
Take, for example, a simple spam bot. The bot authenticates and then starts sending spam to people. Detecting spam and spammers server-side is an imperfect art; it is a constant game of doing things to reduce the rate of spam. It can help a lot if you can ensure that only your client is able to work with your service. This means that attackers can't just write some Python script and deploy it somewhere. They have to actually be running your app and actually liking the content in the app. This increases the costs for attackers and reduces the amount of spam.
Both client and server security is important.
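For illustration only, the server-side policy being described might look like the sketch below, where `verify_attestation` is a hypothetical stand-in for whatever attester check a service actually uses; unattested clients aren't necessarily blocked outright, they just get far tighter limits and more scrutiny:

```python
# Hypothetical anti-spam policy keyed on client attestation. The attestation
# check itself is a placeholder; real services would verify an attester's
# signed token instead of a string prefix.
from typing import Optional

ATTESTED_LIMIT_PER_HOUR = 500
UNATTESTED_LIMIT_PER_HOUR = 10

def verify_attestation(token: Optional[str]) -> bool:
    """Placeholder for verifying the attester's signature over the token."""
    return token is not None and token.startswith("valid:")

def allowed_message_rate(attestation_token: Optional[str]) -> int:
    if verify_attestation(attestation_token):
        return ATTESTED_LIMIT_PER_HOUR
    # A bare Python script without the real client pays a higher cost:
    # tighter limits, more challenges, more scrutiny.
    return UNATTESTED_LIMIT_PER_HOUR

print(allowed_message_rate("valid:abc"))  # 500
print(allowed_message_rate(None))         # 10
```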
I assume an old person cares about not being left poor and helpless in retirement more than they care about free software and computing freedom.
I think it's likely that we will end up in a situation where some devices, like phones and maybe laptops, are considered "secure environments" where banking transactions and such can be safely executed, while alternative devices will be available for complete freedom and tinkering. You'll likely always be able to run any program you want on your laptop, but those programs will be limited to their own sandbox rather than having free access to any other program's data.
And the alternative is taking a picture of the QR code.
> Additionally just because someone is using a device that doesn't mean that the current user is the owner of the device.
Yeah that's why you make the owner authenticate. It would be ridiculous to use that as a reason to make escalation impossible.
And that effect is against custom ROMs and other kinds of user control.
It's Web 2.0: the user is the product.
Furthermore nothing prevents you from just taking pictures of the individual enrollment keys and printing those out either.
If you want TOTP 2FA that actually follows a one-key-per-device policy, you need to buy hardware tokens with some kind of out-of-band keying mechanism and enroll those. Then your problem changes from "how do I stop people from copying my 2FA tokens" to "how do I avoid getting locked out of my account when my 2FA key device breaks."
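To see why copying is trivial, here's a stdlib-only sketch of standard RFC 6238 TOTP (the secret below is made up): anything that knows the shared base32 secret computes exactly the same codes, which is why a photographed QR code or a printed key works as a perfectly good "second device":

```python
# Standard TOTP (RFC 6238) with only the Python standard library, to show
# that the shared secret fully determines the codes.
import base64, hashlib, hmac, struct, time
from typing import Optional

def totp(secret_b32: str, timestep: int = 30, digits: int = 6,
         now: Optional[float] = None) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"   # made-up example secret "enrolled" on two devices
t = time.time()
print(totp(secret, now=t))    # the phone
print(totp(secret, now=t))    # the photographed/printed copy -- identical code
```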
They got tired of getting comments from mere web users that don't want this and locked down comments :P
You can demand change all you want but it doesn't change how the real world works. These people need to come off their high horse and come join the rest of us. So sick and tired of C-level people demanding shit they know nothing about.
Thankfully, the web is still a very multiparty system, with various W3C group reviews & various implementer signals all being registered well ahead of time. Comments on blink-dev were strong & fast. Unlike almost every other system on the planet, I think the mediation here is real & strong!
The attitude of the W3C was basically "we either kiss the ring or Hollywood forks us". So I can totally imagine Tim Berners-Lee spinning in his nonexistent grave back then, too. That doesn't mean he's at Stallman levels of freedom-or-death.
[0] AFAIK, Google bought Widevine, Apple uses FairPlay, and Mozilla originally used Adobe but now uses Chrome's Widevine library.
This alternative will basically not exist, for all intents and purposes, if the "secure" version is the norm.
Let's take an existing example - why is there no such an alternative for home gaming consoles like Xbox or PS5?
It's "think of the children!" way of arguing for intrusions and surveillance.