I'm probably alone in this, but WEI is a good thing. Anyone who's run a site knows the headache around bots. Sites that don't care about bots can simply not use WEI. Of course, we know they will use it, because bots are a headache. Millions of engineer hours are wasted yearly on bot nonsense.
With the improvements in AI this was inevitable anyway. Anyone who thinks otherwise is delusional. Reap what you sow and whatnot.
edit: removing ssl comparison since it's not really my point to begin with
TLS does not facilitate preventing you as a web site visitor from inspecting or modifying the web content served over it, e.g. by blocking ads or auto-playing videos. WEI does.
A TLS client does not contain any trusted private key. You can write one yourself by reading the RFCs. The same is not true for WEI.
The other provides the website the ability to ensure that the user's device is one of an approved set of devices, with an approved set of operating system builds, with an approved set of browsers.
These are fundamentally different, surely you can see that.
> similarly both can be avoided if you're willing to not participate.
Actually, no. Unless your definition of "avoided" is simply not using a website which requires attestation, which, over time, could become most of them.
Or they're just walled off from most of the web entirely.
I use a variety of personally developed web scraper scripts. For instance, I have digital copies of every paystub. These will almost all become worthless. My retirement plan at a previous employer would not let me download monthly statements unless I did it manually... it was able to detect the Mechanize library, and responded with some creepy-assed warning against robots.
No one would go to the trouble to do that manually every month, and no one was allowed robots apparently. But at least they needed to install some specialty software somewhere to disallow it. This shit will just make it even easier for the assholes.
I also worry about tools I sometimes use, like Selenium.
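For what it's worth, the Mechanize detection described above usually keys on request headers rather than any deep inspection, since libraries like Mechanize announce themselves with a telltale default User-Agent. A minimal sketch of what such a scraper script looks like (the URL and User-Agent string are made up for illustration):

```python
import urllib.request

# Hypothetical statement URL, purely for illustration.
URL = "https://example.com/statements/latest.pdf"

# Scraping libraries typically send a default User-Agent such as
# "Python-urllib/3.x"; naive "no robots" checks key on exactly that
# header, which is why overriding it is usually step one in such scripts.
req = urllib.request.Request(URL, headers={
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) "
                  "Gecko/20100101 Firefox/115.0",
})

# Network call left commented out so the sketch runs standalone:
# with urllib.request.urlopen(req) as resp:
#     open("statement.pdf", "wb").write(resp.read())
```

Under WEI, this kind of header trick stops working entirely: the attestation token can't be forged by a homegrown script, which is exactly what makes these personal automations worthless.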
This isn't SSL.
Are you sure you actually understand these two technologies (WEI and TLS) sufficiently to make these claims?
In WEI, the users (the ones being attested) _cannot_ avoid WEI. If a website decides to not allow an unattested user, they can simply decide to refuse access.
The answer to this one is that the fundamental problem that current TPMs aim to "solve" is that of allowing corporate control and inspection of end users' computers. To continue having a free society where individuals have some autonomy over the devices they purportedly own, this needs to be soundly rejected.
EV certs still exist, but browsers don't really differentiate between DV and EV certs anymore.
My problem isn't that I as a developer don't have an option to not implement attestation checks on my own web properties. I already know that (and definitely won't be implementing them).
My problem is that a huge number of websites will, ostensibly as an easier way to prevent malicious automation, spam, etc., but in doing so will throw the baby out with the bathwater: users will no longer have OS and browser choice, because the web shackles them to approved, signed, and sealed hardware/software combinations primarily controlled by big tech.
In either case, WEI has the potential to be proper DRM, like in the “approved devices” fashion. It’s deeply invasive, and can be used to exclude any type of usage at the whim of mega corps, like screen readers, ad blocking, anti-tracking/fingerprinting, downloading copyrighted content, and anything new they can think of in the future. It’s quite literally the gateway to making the web an App Store (or at best, multiple app stores).
> What's the alternative solution?
To what problem? Bots specifically or humans who want to use the web in any way they want?
If bots, then elaborate. Many bots are good, and ironically the vast majority of bot traffic comes from the very corporations that are behind this stuff. As for the really bad bots, we have IP blocklisting. For the gray/manipulative bots, sure, that’s a problem. What makes you think that problem needs to be addressed with mandatory handcuffs for everyone else?
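For reference, the IP blocklisting mentioned here is typically just a network-membership check done before any real work is spent on the request. A minimal sketch, using documentation-range addresses as a hypothetical blocklist:

```python
import ipaddress

# Hypothetical blocklist of abusive networks (addresses are from the
# RFC 5737 documentation ranges, purely for illustration).
BLOCKED_NETS = [ipaddress.ip_network(n)
                for n in ("203.0.113.0/24", "198.51.100.7/32")]

def is_blocked(client_ip: str) -> bool:
    """Reject known-bad sources before doing any expensive work."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETS)

assert is_blocked("203.0.113.42")      # inside a blocked /24
assert not is_blocked("192.0.2.1")     # not listed, let it through
```

The point being: this tool already exists, costs a set lookup per request, and requires no attestation of anyone's hardware.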
But TLS certificates solve a much narrower problem than WEI ("are you communicating with the site you think you are") and are widely and cheaply available from multiple organizationally independent certificate authorities.
In particular, TLS certificates don't try to make an assertion about the website visited, i.e. "this site is operated by honest people, not scammers". WEI does, with the assertion being something like "this browser will not allow injecting scripts or blocking elements".
This notion of destroying the open web is so nonsensical. WEI is not obligatory. If it's being implemented it's because it solves a real problem. Think about it. There will still be sites that don't use it.
People's real issue is that the big sites will use WEI because the problem it solves is legitimate but they don't want to identify themselves, which makes sense, but they were never obligated to let you visit their site to begin with.
This all seems to me that in a decade we'll be having the same discussion, with the same excuse, but eventually the proposal from big corporations will be to require plugging-in a government-issued ID card into a smartcard reader in order to access pre-approved websites with pre-approved client portals running in pre-approved machines.
Google can reduce the page rank of websites that don't enable it (or just not give them any page rank at all), and now everyone who wants to be found has to enable it.
Provenance, to the extent it is a problem, is already handleable and largely handled. Note that "handled" here does not mean it is 100% gone, only that it is contained. Monopolistic control over the web is not containable.
There are a number of sites I frequent but don't log in to or register for an account.
Every single one of them has an absurd number of captchas, or I see the Cloudflare protection thing come up first for 3 seconds.
So while hypothetically it may be true that they don't have to do it, they will. It's not even clear to me that Firefox could implement it too... so do I have to switch back to Chrome (or [barf] Safari?)? Dunno. I can't predict the future, but you'd have to be in some sort of denial to not see where this is going.
> At the end of the day bots are a real issue
Bots are fucking awesome. We should all have bots, out there doing the boring stuff, bringing back the goodies to us. If someone tells you that bots are bad, they're lying to you because they're afraid that you might find out how much you'd want one.
But I oppose others (Google/Microsoft/Facebook/...) attesting whether my system is according to their specifications.
If anything you are just proving the point of the most paranoid.
I don't even have a strong opinion on this, but it's so weird to see this argument over and over. It's just calling for an even more extreme reaction to any effort that goes in this direction, just in case it's used to justify a push for even worse stuff down the line.
They're not. Depending on your competency, you have a _ton_ of tools at your disposal for filtering traffic ranging from basic throttle to sophisticated behavior/request profiling.
I've spent more than a little bit of my career dealing with bots and I'm not really sure that a cryptographically signed blob proving that the request came from $thisSpecificVersion of firefox running on $thisExactVersion of osx is really going to help me.
I don't care _what_ made the request because that can always be spoofed; this cat and mouse game always ends at the analog loop hole. I care about what the request(s) are trying to do and that is something that I figure out with just the data I have server side.
*exactly*. The analog loophole is where this cat/mouse game must end. Since we already know how it'll play out, can't we invest our time into more useful endeavors?
If it were impossible for a company to have such a high market share in all of these areas at once, this proposal would be much less concerning.
It'll end DDOS by botnet. Compromised computers would (presumably) have to run a full browser. That's much more computationally expensive and (presumably) the user would see it running.
And the flaw here is that the proposal doesn't do enough. If that signed blob allowed you to uniquely ID the device it would help solve a lot more problems. That would end DDOS for the most part and make managing abuse a lot easier.
But many here are (in my view rightly) arguing that this would be too high a price to pay for bot/spam protection, since it would almost inevitably cement the browser, OS, and device monoculture even further.
[1] https://www.cultofmac.com/311171/crazy-iphone-rig-shows-chin...
Also that you're talking about anti virus shows that you're not really in touch with the gamut of computing. From my perspective, anti virus was something that was relevant two decades ago.
I'm fine with attestation when it comes to high-risk tasks such as confirming financial transactions or signing legal documents, or anonymous "proof-of-humanity" solutions such as Apple's Private Access Tokens (as long as there's a CAPTCHA-based or similar alternative!) for free trials or account creations (beats using SMS/phone number authentication, at least), but applying Trusted Computing to the entire browser just goes much too far.
So is it a headache for all/most sites or is it not?
This is more or less what the proposal does? It's akin to the same shady stuff seen here [1] except this time some third party gets to sign it.
> That would end DDOS for the most part and make managing abuse a lot easier.
Not every bot that I'm defending against is a DDoS but I can probably figure out a way to overwhelm the "pre-content" filter that's trying to figure out if a token is legit or not.
Not even remotely. This proposal is adding this attestation to one of the last network layers, most DDOS methods won't be touched by this.
A bot is just some computer doing what its owner wants. OP is happy because WEI will eliminate bots. OP is inconvenienced by other people using computers in ways they don't like, and wants to take control of the computer away.
As strong AI is knocking on the door, we see people wanting to take general purpose computing away. All the worst outcomes involve people losing the ability to control their own computers.
I wouldn't mind being able to use the TPM to tell me whether the hardware and software are what I expected them to be, but that's different.
That's true of every DDOS filter. It doesn't mean that having a cryptographically secure way to make requests more expensive to produce isn't a tremendous help.
>This is more or less what the proposal does? It's akin to the same shady stuff seen here [1] except this time some third party gets to sign it.
The fingerprint isn't unique to the extent that you can rely on it always correctly identifying a single user. So you can't ban based on the fingerprint or automatically log someone in.
WEI does prevent any customization.
You have lost an acquaintance.
Which a lot of them already do: https://www.youtube.com/watch?v=hsCJU9djdIc
Or just use a botnet to steal use of someone else's hardware, which is also very common for malicious bots.
A malicious actor wouldn't bother. They'll tap `/dev/random` when it comes time to send the blessed token to the origin. The onus is going to be on the origin to figure out that it's _not_ a valid/signed token. If it's as easy for the origin to do this as it is for the adversary to tap a RNG then what was the point? If it's harder for the origin to figure out my token isn't legit than it was for me to generate one, how is the origin better off?
In any case, you're filtering the DDOS out *after* you've managed to set up the TCP/TLS/HTTP connection. That seems to be a rather late/expensive point to do so!
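The asymmetry argument above can be made concrete. Assuming, purely for illustration, that the attestation check reduces to verifying an HMAC over the request (real WEI would use public-key signatures from an attester, but the cost structure is similar), the origin does the same verification work whether the token is genuine or random bytes:

```python
import hashlib
import hmac
import os

# Stand-in for the attester's key; in reality the origin would hold only
# a public verification key, not a shared secret. Illustration only.
ATTESTER_KEY = os.urandom(32)

def sign_token(payload: bytes) -> bytes:
    """What a 'blessed' client would attach to its request."""
    return hmac.new(ATTESTER_KEY, payload, hashlib.sha256).digest()

def verify_token(payload: bytes, token: bytes) -> bool:
    """The origin's check: a full MAC computation either way."""
    expected = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

payload = b"GET /account/statements"
genuine = sign_token(payload)
garbage = os.urandom(32)  # the adversary just taps a RNG

assert verify_token(payload, genuine)
assert not verify_token(payload, garbage)
```

The origin pays the verification cost on every request, valid or not, while producing `garbage` costs the adversary essentially nothing. The token shifts work around; it doesn't remove it from the origin's side of the connection.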
The people who want to use DRM to solve their problems should just suck it up and find alternatives.
If you put a capability in, people will use (and abuse) it.
Worse than that -- unless you disallow any sort of scripting and accessibility hooks, WEI doesn't prevent malicious requests. It just forces you to script your system via autohotkey or its equivalent.
You didn't finish your metaphor, let me.
I don't let anyone in my house, therefore what? Therefore I am joining a worldwide program whereby I am able to find out from a source I choose whether I want to let this person into my house. If they don't make their information available to my trusted source, they ain't getting in.
Also my house happens to contain things that billions of people want to see and use, but they have to sit through my time share pitch first. And they HAVE to listen.
> If it's being implemented it's because it solves a real problem.
If something solves a real problem, must it then be implemented?
Also, it solves a problem for web sites, and in such a way that non-malicious users will be less free to use the web the way they want.
People used browser APIs and some other people thought to take that away. When some people use autohotkey, what will the other people think about doing?
Or by isolating the browser from third party software. Android does not let applications mess with each other. Windows already prevents non-elevated applications from touching elevated applications (i.e. running as administrator).
What makes you think that Windows won't add an "untouchable" mode to executables belonging to "approved" browsers? The kernel is already locked down so you won't be able to bypass it that easily.
The goal of WEI is not to get rid of bots; that's just a bonus. It is to remove users' control and customization over their own experience.
I mean.. I think you’re answering your own question here.
You can argue that the web shouldn’t be open. In fact, there are many arguments for that, which I don’t mind arguing against.
There are many things that do not belong on the web, precisely because it's open. For instance, a registry of people's political views. Or naked pictures you don't want the world to retain forever. And so on. The fact that the (open) web is not suitable for everything has been true since its inception. Openness comes with trade-offs.
The honest way of putting it, is that WEI wants to make the web less open so that it can have more content, or protect content better.
On an opt-in basis, this is fine in theory. But WEI would never ever be opt-in with meaningful consent. It’s entirely dead in the water there, because non techies will not understand what or why this is “needed”. Heck, people don’t even grok cookies. In practice, this will be enabled by default, which is exactly the fear. Alt browsers would be bullied to support it, and users would be forced to use it.
Because it's less computationally intensive than serving responses and/or trying to fingerprint malicious actors. It also tells you with near certainty that the request is malicious, and future requests from that IP can be blocked.
Android lets applications mess with each other. That's how the accessibility features work.
So I can still use this to DDOS. My malware running somewhere on your network just needs to submit a bogus request from your IP address. The origin sees the bogus requests from your IP, and now that IP is on the bad list. Later, your legit requests from the same IP are... denied.
I don't know that an "inverse" DDOS is novel, but it's certainly not been common. Perhaps that may change in the future...