This notion of destroying the open web is so nonsensical. WEI is not obligatory. If it's being implemented, it's because it solves a real problem. Think about it. There will still be sites that don't use it.
People's real issue is that the big sites will use WEI because the problem it solves is legitimate. Users don't want to identify themselves, which makes sense, but those sites were never obligated to let you visit them to begin with.
They're not. Depending on your competence, you have a _ton_ of tools at your disposal for filtering traffic, ranging from basic throttling to sophisticated behavior/request profiling.
I've spent more than a little of my career dealing with bots, and I'm not sure that a cryptographically signed blob proving the request came from $thisSpecificVersion of firefox running on $thisExactVersion of osx is really going to help me.
I don't care _what_ made the request, because that can always be spoofed; this cat-and-mouse game always ends at the analog loophole. I care about what the request(s) are trying to do, and that is something I figure out with just the data I have server side.
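To make the "basic throttle" end of that toolbox concrete, here's a minimal sketch of a per-client token bucket done entirely server side, with no attestation involved. The rate, burst size, and names are made up for illustration, not taken from any particular system.

```python
# Minimal per-client token-bucket throttle, purely server side.
# RATE, BURST, and the names here are illustrative only.
import time

RATE = 10.0   # tokens refilled per second
BURST = 20.0  # bucket capacity, i.e. allowed burst size

# client_id -> (tokens remaining, timestamp of last update)
buckets: dict[str, tuple[float, float]] = {}

def allow(client_id: str) -> bool:
    """Return True if this client may make another request right now."""
    now = time.monotonic()
    tokens, last = buckets.get(client_id, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last request
    if tokens < 1.0:
        buckets[client_id] = (tokens, now)
        return False  # throttled
    buckets[client_id] = (tokens - 1.0, now)
    return True
```

A real deployment would key the bucket on something richer than a bare client ID and evict stale entries, but the shape is the same, and none of it requires the client to prove what it is.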
If it were impossible for a company to have such a high market share in all of these areas at once, this proposal would be much less concerning.
It'll end DDoS by botnet. Compromised computers would (presumably) have to run a full browser. That's much more computationally expensive, and (presumably) the user would see it running.
And the flaw here is that the proposal doesn't do enough. If that signed blob allowed you to uniquely ID the device, it would help solve a lot more problems. That would end DDoS for the most part and make managing abuse a lot easier.
This is more or less what the proposal does? It's akin to the same shady stuff seen here [1] except this time some third party gets to sign it.
> That would end DDoS for the most part and make managing abuse a lot easier.
Not every bot that I'm defending against is a DDoS, but I can probably figure out a way to overwhelm the "pre-content" filter that's trying to figure out whether a token is legit.
Not even remotely. This proposal adds the attestation at one of the last network layers; most DDoS methods won't be touched by it.
That's true of every DDoS filter. It doesn't mean that having a cryptographically secure way to make requests more expensive to produce isn't a tremendous help.
> This is more or less what the proposal does? It's akin to the same shady stuff seen here [1] except this time some third party gets to sign it.
The fingerprint isn't unique enough that you can rely on it to always correctly identify a single user. So you can't ban based on the fingerprint or automatically log someone in.
A malicious actor wouldn't bother. They'll tap `/dev/random` when it comes time to send the blessed token to the origin. The onus is going to be on the origin to figure out that it's _not_ a valid/signed token. If it's as easy for the origin to do this as it is for the adversary to tap an RNG, then what was the point? If it's harder for the origin to figure out my token isn't legit than it was for me to generate one, how is the origin better off?
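To put some rough code behind that asymmetry: the attacker's "token" below is one read from an RNG, while the origin still has to run a real signature verification on every request before it can reject it. Ed25519 is just a stand-in for whatever scheme an attester might actually use, and `origin_accepts` is a made-up name, not anything from the WEI spec.

```python
# Sketch only: Ed25519 stands in for the (unspecified) attestation signature scheme.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Pretend key pair for "the attester"; in reality the origin would only hold the public key.
attester_key = Ed25519PrivateKey.generate()
attester_pub = attester_key.public_key()

def origin_accepts(token: bytes, payload: bytes) -> bool:
    """Origin-side check: verify the attester's signature over the request payload."""
    try:
        attester_pub.verify(token, payload)  # crypto work spent on every request, legit or not
        return True
    except InvalidSignature:
        return False

payload = b"GET /whatever"
bogus_token = os.urandom(64)                 # attacker's cost: one RNG read
print(origin_accepts(bogus_token, payload))  # False, but only after doing the verification
```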
In any case, you're filtering the DDoS out *after* you've managed to set up the TCP/TLS/HTTP connection. That seems to be a rather late/expensive point to do so!
You didn't finish your metaphor, so let me.
I don't let anyone in my house, therefore what? Therefore I am joining a worldwide program whereby I am able to find out from a source I choose whether I want to let this person into my house. If they don't make their information available to my trusted source, they ain't getting in.
Also, my house happens to contain things that billions of people want to see and use, but they have to sit through my timeshare pitch first. And they HAVE to listen.
> If it's being implemented, it's because it solves a real problem.
If something solves a real problem, must it then be implemented?
Also, it solves a problem for web sites, and in such a way that non-malicious users will be less free to use the web the way they want.
I mean... I think you’re answering your own question here.
You can argue that the web shouldn’t be open. In fact, there are many arguments for that, which I don’t mind arguing against.
There are many things that do not belong on the web, precisely because it’s open. For instance, a registry of people’s political views. Or naked pictures you don’t want the world to retain forever. And so on. The fact that the (open) web is not suitable for everything has been true since its inception. Openness comes with trade-offs.
The honest way of putting it is that WEI wants to make the web less open so that it can have more content, or protect content better.
On an opt-in basis, this is fine in theory. But WEI would never, ever be opt-in with meaningful consent. It’s entirely dead in the water there, because non-techies will not understand what this is or why it’s “needed”. Heck, people don’t even grok cookies. In practice, this will be enabled by default, which is exactly the fear. Alt browsers would be bullied into supporting it, and users would be forced to use it.
Because it's less computationally intensive than serving responses and/or trying to fingerprint malicious actors. It also tells you with near certainty that the request is malicious, and future requests from that IP can be blocked.
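A bare-bones sketch of that "verify, then block the IP" idea, assuming a simple in-memory denylist with a TTL; the names and the one-hour window are invented for illustration.

```python
# In-memory IP denylist driven by failed attestation checks. Names are invented.
import time

DENY_TTL = 3600.0  # seconds an offending IP stays blocked

denylist: dict[str, float] = {}  # ip -> expiry timestamp

def record_failed_attestation(ip: str) -> None:
    """Called when a request's attestation token fails verification."""
    denylist[ip] = time.monotonic() + DENY_TTL

def is_blocked(ip: str) -> bool:
    """Cheap check done before any real work is spent on the request."""
    expiry = denylist.get(ip)
    if expiry is None:
        return False
    if time.monotonic() > expiry:
        del denylist[ip]  # block expired; let the IP try again
        return False
    return True
```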
So I can still use this to DDoS. My malware running somewhere on your network just needs to submit a bogus request from your IP address. The origin sees the bogus requests from your IP, and now that IP is on the bad list. Later, your legit requests from the same IP are... denied.
I don't know that an "inverse" DDoS is novel, but it certainly hasn't been common. Perhaps that will change in the future...