What if instead of browsers and ad blockers we had an extensive collection of web scrapers for every web site out there?
I once received a bug report about a site that consistently went down after a computer woke from sleep... but only if the computer was a Macintosh, and only if the browser was Chrome. It turned out the root cause was that when the machine slept and reawoke, XMLHttpRequests attached to timers in an open webpage would all fire at once.
On Windows and Linux, apparently, the network stack would dutifully pause those requests while the radio took a moment to reestablish its connection. Mac OS X, adhering to the spec, did not pause but instead immediately reported on wake that the network was unavailable.
So the other browsers on Mac OS wisely broke spec and ignored the first couple of network-down events that came in after sleep, quietly retrying the RPCs. Chrome adhered to the spec and dutifully reported the dropped network as an error, failing all of those RPCs.
As a result, the client's page was broken, but only on Mac OS, only on Wi-Fi, and only on Chrome. Would you guess that their first solution was to painstakingly rewrite all of their setTimeout logic to move the retries up to the JavaScript layer, or would you guess that it was to report a bug to Google and tell their regular users Chrome was broken?
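For what it's worth, the kind of rewrite that moves the retries up to the JavaScript layer looks roughly like this, as a minimal TypeScript sketch. The fetchWithRetry name, the retry count, the delay, and the /api/poll endpoint are my own illustration, not their actual code:

    // Hypothetical sketch, not the client's actual code: retry transient
    // network failures in JavaScript instead of trusting the browser's
    // network stack to smooth over a post-wake "network down" blip.
    async function fetchWithRetry(
      url: string,
      retries = 3,
      delayMs = 500
    ): Promise<Response> {
      for (let attempt = 0; ; attempt++) {
        try {
          return await fetch(url);
        } catch (err) {
          // fetch rejects on network errors (e.g. the radio isn't back up yet);
          // swallow the first few and retry after a short pause.
          if (attempt >= retries) throw err;
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
    }

    // Usage: the polling timer calls the wrapper instead of fetch directly.
    setInterval(() => {
      fetchWithRetry("/api/poll")
        .then((res) => res.json())
        .then((data) => console.log(data))
        .catch((err) => console.error("gave up after retries", err));
    }, 30_000);

Conceptually simple, but multiplied across every timer-driven request on the site, it's a lot of tedious plumbing just to paper over one browser's behavior.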
In any case, it's a moot point now because at some point Chrome changed their network stack implementation to match everybody else's. ;)
The “Google” network and its sites could be kept around as a necessary-evil proprietary service, the way Facebook is for many people, and LinkedIn too.
Web developers, ultimately, have very little vested interest in which browser is winning or who's using what, as long as (a) people can access their site and (b) they don't have to write the site twice. That's their incentive model. Telling them that the spec is X, and that if Google does Y then Google is wrong, when Google has something like 90% market share, is just kind of a funny idea for them to laugh at before going right back to solving the problem in a way that reaches 90% of the possible users (and then maybe, time permitting, writing pieces of the site twice to pick up a fraction of the remaining 10%).
Would be cool if the Tor network filled that role.
Yeah, of course. It's only the platform they depend on. Why not cede control of it to Google, right? What's the worst that could happen?
Sometimes I ask myself why people even try. What is the point when people have such an apathetic attitude? What is the point of these web standards? Some huge company comes in, dominates the market and suddenly they're the standard. Nobody cares as long as they're making money, even though the huge company is usurping control of the platform. Not even a year ago I saw a post here about people at Google talking about moving the web away from the previous "owned" model to a "managed" model or something like that. As long as people don't have to work too hard to get paid, who cares, right? This notion of an open platform is just a funny idea to laugh at.
I've been wondering about that ever since the Microsoft antitrust suit led to the dominance of Chrome.
(If one wants to go down that road, one should probably start by reasoning from the "killer app" of a novel network model. The killer app of the web was HTML, and specifically the hyperlink combined with the URL, which allowed for associating information in a way that hadn't been possible before. It'll be hard to one-up that, but if someone could find a way to do it that would be hard for HTML to simply grow to consume, there may be room for a novel information service.)