Unfortunately, people who have rooted phones or use nonstandard browsers are no more than 1% of users. It's important that they exist, but the web is a massive platform. We cannot let a tyranny of 1% of users steer the ship. The vast majority of users would benefit from this, if it really works.
However, I could see this tool being abused by certain websites to prevent users from logging in on a nonstandard browser, especially banks. Unfortunate, but overall beneficial to the masses.
Edit: Apparently 5% of the time it intentionally omits the result so it can't be used to block clients. Very reasonable solution.
I don't think it does that. Nothing about this reduces the problem that captchas are attempting to solve.
> i could see that this tool would be abused by certain websites and prevent users from logging in if on a non standard browser, especially banks.
That's not abusing this tool. That's the very thing that this is intended to allow.
Depends on what you count as "nonstandard", but various estimates put non-top-6 browser usage between 3% and 12% (https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Su...) and non-Windows/macOS/iOS/Android usage at ~4% (https://en.wikipedia.org/wiki/Usage_share_of_operating_syste....) These also don't account for traffic from older operating systems or hardware that would be incompatible with these attestations, or from clients that spoof their user agent for anonymity.
In an ideal world, we would see this number grow, not shrink. It's not good for consumers if our choices dwindle to just one or two options.
> We can not let a tyranny of 1% of users steer the ship.
Far less than 1% of my users use the accessibility features. In fact, it is closer to 1% of 1%. Does that justify the far, far easier development and bug testing that I would enjoy if I were to stop providing accessibility features?

* Allow web servers to evaluate the authenticity of the device and honest representation of the software stack and the traffic from the device.
* Offer an adversarially robust and long-term sustainable anti-abuse solution.
* Don't enable new cross-site user tracking capabilities through attestation.
* Continue to allow web browsers to browse the Web without attestation.
From: https://github.com/RupertBenWiser/Web-Environment-Integrity/...
If it actually won't do any of those things, then that should be debated first.
I honestly find it more concerning when I’m expecting one and I don’t get served a ridiculous puzzle to solve.
Normally I'd agree with you that the tyranny of the minority is a bad thing, but sometimes the minority actually has a point, and this is one of the cases where the minority is _objectively_ correct and letting the majority decide would end in a complete dystopia. Democracy only works if everyone is informed (and able to think logically and critically, not influenced by force or by salary, etc.), and in this case the 99% simply do not have any clue about the effects of this being implemented (nor do they care). This entire proposal is pure Orwellian shit.
I also don't understand how WEI does much to prevent a motivated user from faking requests. If you have Chrome running on your machine it's not gonna be too hard to extract a signed WEI token from its execution, one way or another, and pass that along with your Python script.
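To make the replay concern concrete: a minimal sketch in Python, where both the header name (`X-Environment-Integrity`) and the token value are hypothetical placeholders for illustration only, not anything the WEI proposal actually specifies:

```python
import urllib.request

# Hypothetical attestation token harvested from a real Chrome session
# (placeholder value, not a real token).
HARVESTED_TOKEN = "eyJhbGciOi..."

# Replay the token alongside an otherwise fully scripted request.
# The header name is made up for illustration; the real proposal
# does not necessarily transport tokens this way.
req = urllib.request.Request(
    "https://example.com/api/data",
    headers={"X-Environment-Integrity": HARVESTED_TOKEN},
)
```

As long as the server can't tell whether the token was minted for this exact request, a harvested token travels with a script just as easily as with a browser.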
It looks like it basically gives Google another tool to constrain users' choices.
I'm guessing the reason we want attestation is so that Chrome can drop ad blockers and websites can drop non-Chrome browsers. But there is no reason why you can't do the thing where you point a video camera at a monitor, have AI black out the ads, and then view the edited video feed instead of the real one.
The only use for attestation I see is for work-from-home corporate Intranets. Sure, make sure that OS is up to date before you're willing to send High-Value Intellectual Property to the laptop. That... already works and doesn't involve web standards. (At my current job, I'm in the hilarious position where all of our source code is open-source and anyone on Earth can edit it, but I have to use a trusted computer to do things like anti-discrimination training. It's like opsec backwards. But, the attestation works fine, no new tech needed.)
And I will bet anything that if the browser is being instrumented via webdriver it will attest as such. You would have to automate the browser externally.
But the power is too significant. If it were some small subset of positive assertions I'd be OK with this, but the ability to perform arbitrary attestation goes beyond what is required and is far too abusable.
Is this truly going to work, though? Captcha providers already monitor mouse and keyboard movement while on the page. Can you really "synthesize" human-like mouse movements around the page? I'm not so sure.
If you can still run extensions you still need captchas. So one possible road this takes is Google launches it, everybody still uses captchas because extensions in desktop browsers still make automating requests trivial -- and then we lock down extensions because "we already locked down the hardware and we really do need to do something about captchas..."
I can't literally emulate mouse movements but the only place that matters is... captchas. If you're not watching for those kinds of behaviors, then a browser even without webdriver can be automated just fine. And if you are watching for those behaviors, then you're running a captcha, so what is WEI helping with?
Google claims this is not going to impact browser extensions, debugging, etc... but if it's not going to impact that stuff, then it's not really helpful for guaranteeing that the user isn't automating requests. What it is helpful for is reducing user freedom around their OS/hardware and setting the stage for attacking extensions like adblockers more directly in the future.
Yes. It's not even very hard.
Also, turn on a VPN sometime (a signal to Google et al. that you're trying to bypass content region restrictions, or funnel mobile traffic through an ad blocker) and you are basically guaranteed to see nothing but CAPTCHAs from the predominantly Cloudflare-owned-and-operated Internet.
So yes, it's a big problem, but only if your web environment (tracking metadata) is not sufficiently "trusted" :D
* The device integrity verdict must be low entropy, but what granularity of verdicts should we allow? Including more information in the verdict will cover a wider range of use cases without locking out older devices.
* A granular approach proved useful previously in the Play Integrity API.
* The platform identity of the application that requested the attestation, like com.chrome.beta, org.mozilla.firefox, or com.apple.mobilesafari.
* Some indicator enabling rate limiting against a physical device

> BezMouse is a lightweight tool written in Python to simulate human-like mouse movements with Bézier curves. Some applications might include:
> BezMouse was originally written for a RuneScape color bot and has never triggered macro detection in over 400 hours of continuous use.
:)
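For the curious, the core of the Bézier technique fits in a few lines of Python. This is an independent sketch of cubic Bézier paths with randomized control points, not BezMouse's actual code:

```python
import random

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1]."""
    u = 1 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def human_like_path(start, end, steps=50, wobble=100):
    """Generate a curved point-to-point mouse path.

    The two control points are jittered randomly so no two paths between
    the same endpoints are identical -- straight, pixel-perfect lines are
    what behavioral detectors flag.
    """
    c1 = (start[0] + random.uniform(-wobble, wobble),
          start[1] + random.uniform(-wobble, wobble))
    c2 = (end[0] + random.uniform(-wobble, wobble),
          end[1] + random.uniform(-wobble, wobble))
    return [bezier_point(start, c1, c2, end, i / steps) for i in range(steps + 1)]

path = human_like_path((0, 0), (800, 600))
```

Feed those points to an input-injection library with small randomized delays between steps and you get motion that looks nothing like a teleporting scripted cursor.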
You're behind the times. It's not widespread but it's been happening for years.
Also, just the other day the Selenium author (IIRC) said they are working on such a thing for "automated testing".
So this proposal will do nothing to prevent bots; at most it will increase their cost a little.
On the other hand, it will surely discriminate against people, emerging technologies, and new companies. No other search engines can be built. No new browsers. No openness.
Anyone supporting this proposal is either pure evil or stupid or both.