bool Quirks::requiresUserGestureToPauseInPictureInPicture() const
{
#if ENABLE(VIDEO_PRESENTATION_MODE)
// Facebook, Twitter, and Reddit will naively pause a <video> element that has scrolled out of the viewport,
// regardless of whether that element is currently in PiP mode.
// We should remove the quirk once <rdar://problem/67273166>, <rdar://problem/73369869>, and <rdar://problem/80645747> have been fixed.
if (!needsQuirks())
return false;
if (!m_requiresUserGestureToPauseInPictureInPicture) {
auto domain = RegistrableDomain(m_document->topDocument().url()).string();
m_requiresUserGestureToPauseInPictureInPicture = domain == "facebook.com"_s || domain == "twitter.com"_s || domain == "reddit.com"_s;
}
return *m_requiresUserGestureToPauseInPictureInPicture;
#else
return false;
#endif
}
Which even just remembering now gave me some small terror:
shouldSuppressAutocorrectionAndAutocaptializationInHiddenEditableAreasForHost(...)
I'm a developer on a web media player and remember that we once had an issue with Picture-in-Picture mode: we had a feature that lowered the video bitrate to the minimum when the page containing the video element had not been visible for some amount of time.
That made total sense... until Picture-in-Picture mode was added to browsers, at which point you would see very low quality after watching your content in that mode on another page for long enough (~1 minute).
The sad thing is that, because I was (and still am) developing an open-source player, and because the API documentation clearly described the aforementioned implementation, I had to deprecate that option and create a new one instead (with better API documentation, talking more about the intent than the implementation!), one with the right exceptions for Picture-in-Picture mode.
Seeing that part reminded me of this anecdote; we should just have asked for quirks :p
Apple-style function/method naming may be verbose but at least the names usually have good explanatory value and aren't just meaningless verbosity.
"Site-specific hacks break native video controls in YouTube embeds" https://bugs.webkit.org/show_bug.cgi?id=245612
So many hours spent debugging this...
Java:
isShouldSuppressAutocorrectionAndAutocaptializationInHiddenEditableAreasForHost
Common Lisp:
suppress-autocorrection-and-autocaptialization-in-hidden-editable-areas-for-host-p
C:
saaainheafh
When you’re the default browser on 1.5+ billion devices, it’s less about losing a few users here or there.
Also, autocomplete doesn't work when you're reading and not writing, or just using a text editor or reading/annotating a printout (yes, I still do that.) IMHO writing code that almost completely relies on special tools to handle it is a bad trend.
They also glaze past the end of the mega-strings, but I usually solve that problem either by actually ctrl-F searching for the string (which will highlight it) or by finding an exemplar and selecting it (which will highlight all instances of the same symbol in the languages and IDEs I use).
I'm a firm believer that "code isn't just text" (in fact, most of my frustration with code is tools that treat it so... The set of strings that aren't valid programs is vastly larger than the set that are, so why should I be treating programs as if they're mere strings? It'll lead me to create non-compilable artifacts). So I try to avoid being in situations where the only tools I have to work with to understand code are a text editor or an annotated printout (I don't doubt that's done in places, but I've gone my whole career managing to avoid it so far).
Users can still install Chrome on a Macintosh. Apple's larger concern is probably whether they'll lose hardware share if, say, Reddit doesn't load right on MacBooks (you'd be surprised how many people buy an expensive machine as basically an internet appliance) or, more importantly: iPhones.
https://github.com/WebKit/WebKit/blob/f134a54c03b71e8e3c4da0...
https://github.com/WebKit/WebKit/blob/f134a54c03b71e8e3c4da0...
Also not as disgusting as Quirks.cpp, but I was debugging some video decoding stuff in Chrome this week and found some fun things: special code to work around various GPU driver bugs.
https://source.chromium.org/chromium/chromium/src/+/main:gpu...
https://source.chromium.org/chromium/chromium/src/+/main:gpu...
And a separate implementation of MSAA for Intel GPUs
https://source.chromium.org/chromium/chromium/src/+/main:gpu...
…but now, thanks to this file, it may be difficult if not impossible to fix their damn site in a way that doesn't conflict with WebKit's own fixes. :/
Just kidding. Mostly. Lots of little fiddly text editing patches for Google Docs, which doesn’t surprise me. It’s, I would assume, the most-used application suite on the web, and there are going to be weird edge cases that pop up around rich text editing that get magnified by the sheer number of users. Including probably inside Apple.
> Quirks::shouldSuppressAutocorrectionAndAutocaptializationInHiddenEditableAreas
Even with modern IDEs with autocomplete, I wonder whether such long names for variables, methods, classes, etc. should be encouraged.
It's an acquired taste.
// On Google Maps, we want to limit simulated mouse events to dragging the little man that allows entering into Street View.
loool
I don’t think so.
Apple has had record Mac sales for the past two years, since the release of Apple Silicon.
There’s no way whether or not any one particular website renders as expected is going to meaningfully impact Mac or iPhone sales.
Sure, Apple would prefer that users stick with Safari but they’re not going to lose any sleep if customers use Chrome/Firefox/Edge or whatever else on a new $2499 MacBook Pro.
It’s certainly much more in Reddit’s (or any other big web property’s) best interest to perform well for Safari users, especially on iPhones and iPads, where all browsers use WebKit as their rendering engine.
Same here. The difference isn't significant enough for my eyes to latch onto.
If I had to come up with short readable function names, I would still use whole words, but would 1) cut the number of words down to the bare minimum 2) make each name as unique as practically possible.
What if instead of browsers and ad blockers we had an extensive collection of web scrapers for every web site out there?
I once received a bug report on a site that consistently went down after a computer woke from sleep... but only if the computer was a Macintosh, and only if the browser was Chrome. It turned out that the root cause was that when the machine slept and reawoke, XML HTTP requests that were attached to timers in an open webpage would fire all at once.
On Windows and Linux, apparently, the network stack would dutifully pause those requests while the radio took a moment to reestablish the connection. Mac OS X, adhering to the spec, did not pause but instead immediately reported on wake that the network was unavailable.
So the other browsers on Mac OS wisely broke spec and ignored the first couple of network-down events that came in after sleep, quietly retrying RPCs. Chrome adhered to spec and dutifully reported the dropped network as an error that failed all those RPCs.
As a result, the client's page was broken, but only on Mac OS, only on Wi-Fi, and only on Chrome. Would you guess that their first solution was to painstakingly rewrite all of their setTimeout logic to move the retries up to the JavaScript layer, or would you guess that their first solution was to report a bug to Google and tell their regular users Chrome was broken?
In any case, it's a moot point now because at some point Chrome changed their network stack implementation to match everybody else's. ;)
The “Google” network and sites could be kept around as a necessary-evil proprietary service, the way Facebook is for many people, and LinkedIn too.
Web developers, ultimately, have very little vested interest in what browser is winning or who's using what, as long as (a) people can access their site and (b) they don't have to write the site twice. That's their incentive model. Telling them that the spec is X, and that if Google does Y then Google is wrong, when Google has like 90% market share, is just kind of a funny idea for them to laugh at before going right back to solving the problem in a way that reaches 90% of the possible users (and then maybe, time permitting, writing pieces of the site twice to pick up a fraction of the remaining 10%).
As an exercise: what would you name it that’s shorter?
I’m having trouble thinking of anything that doesn’t seriously compromise the clarity, unless you had a lot of autocorrect and autocapitalization problems and could shorten that part to AandA.
The only other option, which I don’t really like, is to strip all the info and call it something like isBug315255() and put a comment in the function explaining it. But that’s a big loss in my eyes.
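To make the trade-off concrete, here's a rough sketch of the two options (the signatures and the bug number are made up for illustration; this is not WebKit's actual code):

#include <string>

// Option A: the long, descriptive name carries the explanation to every call site.
bool shouldSuppressAutocorrectionAndAutocaptializationInHiddenEditableAreasForHost(const std::string& host);

// Option B: a bug-number name; the explanation only lives here.
// Suppresses autocorrection and autocapitalization in hidden editable areas
// for certain hosts. See the (hypothetical) bug 315255 for details.
bool isBug315255(const std::string& host);

With option B, every call site reads as a mystery until you jump back to this comment, which is exactly the loss described above.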
Would be cool if the Tor network filled that role.
Yeah, of course. It's only the platform they depend on. Why not cede control of it to Google, right? What's the worst that could happen?
Sometimes I ask myself why people even try. What is the point when people have such an apathetic attitude? What is the point of these web standards? Some huge company comes in, dominates the market and suddenly they're the standard. Nobody cares as long as they're making money, even though the huge company is usurping control of the platform. Not even a year ago I saw a post here about people at Google talking about moving the web away from the previous "owned" model to a "managed" model or something like that. As long as people don't have to work too hard to get paid, who cares, right? This notion of an open platform is just a funny idea to laugh at.
I've been wondering that ever since the Microsoft antitrust suit led to the dominance of Chrome.
And if it is not bugfixes, but keeping around features that were removed: if youtube.com uses something, why can't everybody else use it?
It is, however, absolutely the behavior of web developers; that's why the web used to be filled with IE-only sites, and why we are now getting Chrome-only sites. It is much easier to blame other engines than to ask whether your site is depending on implementation details.
The TL;DR for this is: the modern web is very well specified, and all browser engines work very hard to conform to those specs. Now, when divergent behavior is found, the erroneous engine is corrected, or, if the specification itself was incomplete, considerable effort is expended making sure it is complete so that conforming implementations are possible. The driving force for this was engine developers, not site developers.
Anyway.
The entire point of the HTML5 and subsequent "living" spec, and the death of ES4 and the subsequent ES3.1 and ES5 specs, and then ES's "living" spec, was dealing with the carnage of the early web and the Netscape vs. IE insanity it produced. This was a huge amount of effort driven almost entirely by the engine developers, specifically so that the specs could actually be used to implement browser engines. The existing W3C and ECMA specifications were useless as they frequently did not match reality; where they did match reality they had gaps where things were not specified, and frequently they simply did not acknowledge that features existed.
It took a huge amount of effort to determine the exact specification for parsing html, such that it could be adopted without breaking things. It took a huge amount of effort to go through the DOM APIs, the node traversal, event propagation, and on and on to specify them.
The same thing happened with ECMAScript. A lot of effort over many years was spent replacing the existing spec (ignoring a bunch of time wasted by some parts of the committee creating ES4), making it so that the ECMAScript specification actually matched reality.
There were places where we found mutually incompatible behaviors between Gecko and Trident, but in most cases we were able to replace old, badly written specs with real specifications that were compatible with reality and sufficiently detailed that they could be used to implement a new engine, with confidence that the engine would actually be usable.
The immense work required for this also means that the spec authors and committees are acutely aware of the need for exact and precise specification of new features. So it is expected that new specifications completely specify all behavior.
As an example, I recall that after originally implementing support for IMEs in WebKit on Windows, I spent weeks stepping through how key down, up, and press events were fired in the DOM when a user was typing with an IME. The spec at that point failed to say what events should be fired in that case - text entry is not keydown/press/up once IMEs are involved, and you cannot assume one keyup will result in a single character change - and it was a months-long effort to get to something that only managed to specify keydown/up/press, none of the actual complexity of IMEs. The specification has since expanded to be more capable of handling IMEs, and it now has an example of the "keys typed by a user" vs. "key events you receive" distinction [1], and alas my work is now largely "legacy" :D [2]
The problem, as ever, is that it is very easy for web developers to rely on some implementation detail that a specification failed to dictate, and then say any browser engine that does not behave identically is wrong. This is what webdevs did with IE, and now it's what webdevs do with Chrome. It is always easier for a webdev to paste "this site requires IE/Chrome" than to work out whether what they're doing is actually specified behavior. Sure, it's possible it's a bug in the other engine, but if you are saying "install Chrome" you're saying it doesn't work in Gecko or WebKit, so it's much more likely to be a site bug.
... But we both arrive at the same place, where if a site works on incumbent browsers (by spec, or by shared quirk because in an ambiguity of the spec everyone lucked into the same implemented behavior) and not an outlier browser, and the site is popular, users perceive the outlier to be broken. Because users don't understand this problem space by parsing specs; they understand it as "well, it works on my sister's computer when she double-clicks the rainbow circle; I guess the compass just doesn't work with the whole web. Maybe I should just get myself a rainbow circle." Hence, the existence of a Quirks.cpp file.
The conditionals in the full check seem to use symbolised strings where possible, so they're quite fast. Probably faster than producing a suitable hash.
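As a rough illustration of that pattern (a stand-alone sketch, not WebKit's code: std::string, std::string_view, and std::optional stand in for WebKit's WTF string types, the ASCIILiteral "_s" literals, and the cached member), the function quoted at the top boils down to a few direct string comparisons on the registrable domain, memoized after the first call:

#include <optional>
#include <string>
#include <string_view>

class Quirks {
public:
    explicit Quirks(std::string registrableDomain)
        : m_domain(std::move(registrableDomain)) { }

    // Mirrors the shape of the quoted quirk check: compare the domain
    // against a handful of known literals, then cache the answer.
    bool requiresUserGestureToPauseInPictureInPicture() const
    {
        if (!m_cached) {
            std::string_view domain = m_domain;
            m_cached = domain == "facebook.com" || domain == "twitter.com" || domain == "reddit.com";
        }
        return *m_cached;
    }

private:
    std::string m_domain;
    mutable std::optional<bool> m_cached;
};

A few short literal comparisons, done once and then cached, are plausibly cheaper than computing a hash of the domain first, which is the point being made above.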
The difference between the past and now is that it is very well understood by all the major vendors (Gecko, Blink, WebKit) that if the spec has a section where different behaviour is permitted - other than things that are necessarily non-deterministic (networking, timers within some bound, ...), or where platform behavior is different, as with IMEs - the specification itself is broken. Similarly, if the spec disagrees with what browser engines are actually doing, the spec is wrong. Once an issue is identified, the spec is then fixed, regardless of effort, to ensure that the gaps are filled and the errors corrected.
The point is that if a new browser comes along and correctly implements the spec, that browser should work with the same content as any other engine, and if it can't, the spec is broken. This is the model the engine developers want. Yes, it may cause them to face new competition, but having a complete spec is a massive enabler: it lets you make massive internal changes to your engine without having to worry about "are there sites that depend on X"[1].
Now, even when there are gaps or errors in the spec such that observable implementation details leak out, if a developer's site only works in one browser (browser bugs aside - please do file bugs; the engineers at all these companies do care about and value them), that site is depending on unspecified behavior, so the site is wrong.
From an end-users point of view, yes it appears that other browsers are broken, but that isn't the problem.
The problem is that the web developer turns around and says "it's not my site that is broken, it's the other browsers". This is the development model that means fundamentally Chrome is the new IE: it is the only browser that is resulting in sites saying "you need Chrome to continue" or developers saying "it works in Chrome but not X, X must be wrong" and not considering any alternative.
[1] Obviously there are quirks, as listed in this file, but you can see that this file and a couple of similar ones are very small, and the quirks are exceptionally specific; essentially they are as close as possible to "apply this quirk, to this site, only if the site is still using this specific design/layout". In the past, engines essentially had to go "we've got one site depending on this behavior, which has no real specification, so we need to guess whether this is the actual behavior we should have, or whether it's uncommon, or even just a one-off".
(If one wants to go down that road, one should probably start reasoning from the "killer app" of a novel network model. The killer app of the web was HTML, and specifically the hyperlink combined with the URL, which allowed for association of information in a way that hadn't been possible before. It'll be hard to one-up that, but if someone could find a way to do it that would be hard for HTML to just grow to consume, there may be room for a novel information service.)
I guess another issue is that most of these have actual logic in C++, so you'd have to move some of that code to JS (I believe Firefox does that - the JS part, not the sending-code part). Sending code is a bit of a security concern though.