Banks and media corporations are doing it today by requiring a vendor-sanctioned Android build/firmware image, attested and allowlisted by Google's SafetyNet (https://developers.google.com/android/reference/com/google/a...), and it will only get worse from here.
Remote attestation really is killing practical software freedom.
https://blog.trezor.io/why-you-should-never-use-google-authe...
Edit - sorry, this really is an ad for the writer's products. On the other hand, there's a hell of a bounty for proving them insecure / untrustworthy, whatever your feelings on "the other crypto".
I use LineageOS on my phone, and do not have Google Play Services installed. The phone only meaningfully interacts with a few of the most basic Google services, like an HTTP endpoint for captive portal detection on Wi-Fi networks, an NTP server for setting the clock, etc. All other "high-level" services that I am aware of, like Mail, Calendaring, Contacts, Phone, Instant Messaging, etc., are either provided by other parties that I feel more comfortable with, or hosted by me personally.
Now let's assume that I want or need to do online/mobile banking on my phone - that will generally only work with the proprietary app my bank provides. Even if I install their unmodified APK, SafetyNet will not attest my LineageOS-powered phone as "kosher" (or "safe and secure", or "healthy", or whatever Google prefers to call it these days), and the app may refuse to work. As a consequence, I'm effectively unable to use the remote service my bank provides, because they believe they have to protect me from the OS/firmware build that I personally chose to use.
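For a picture of what that gatekeeping looks like in practice: the app asks the SafetyNet attestation API for a signed verdict and ships it to the bank's backend, which gates on two boolean fields. Here's a minimal server-side sketch in Python - the field names (ctsProfileMatch, basicIntegrity) are from Google's documentation, but the helper names are mine, and real verification of the JWS signature chain and nonce is elided:

    import base64
    import json

    def safetynet_verdict(jws: str) -> dict:
        # Decode the JWS payload only. A real verifier must also
        # validate the signature and certificate chain against
        # Google's CA, and match the nonce it issued earlier.
        payload = jws.split(".")[1]
        payload += "=" * (-len(payload) % 4)  # restore base64 padding
        return json.loads(base64.urlsafe_b64decode(payload))

    def device_allowed(jws: str) -> bool:
        v = safetynet_verdict(jws)
        # ctsProfileMatch: device + firmware match a Google-certified
        # build. basicIntegrity: weaker check. An unlocked bootloader
        # or custom ROM typically fails ctsProfileMatch at minimum.
        return v.get("ctsProfileMatch", False) and v.get("basicIntegrity", False)

A LineageOS install fails that ctsProfileMatch check by construction, no matter how careful its owner is.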
Sure, "just access their website via the browser, and do your banking on their website instead!", you might say, and you'd be right for now. But with remote attestation broadly available, what prevents anyone from also using that for the browser app on my phone, esp. since browser security is deemed so critical these days? I happen to use Firefox from F-Droid, and I doubt any hypothetical future SafetyNet attestation routine will have it pass with the same flying colors that Google's own Chrome from the Play Store would. I'm also certain that "Honest c0l0's Own Build of Firefox for Android" wouldn't get the SafetyNet seal of approval either, and with that I'd be effectively shut off from interacting with my bank account from my mobile phone altogether. The only option I'd have is to revert back to a "trusted", "healthy" phone with a manufacturer-provided bootloader, firmware image, and the mandatory selection of factory-installed, non-removable crapware that I am never going to use and/or (personally) trust that's probably exfiltrating my personal data to some unknown third parties, sanctified by some few hundreds of pages of EULA and "Privacy" Policy.
With app stores on all mainstream, commercially successful desktop OSes, with Windows 11's recent "security and safety"-related "advances" - requiring TPM support, as of today apparently still only mildly enforced - and with Microsoft supplying manufacturers with "secure enclave"-style add-on chips of its own design ("Pluton", see https://www.techradar.com/news/microsofts-new-security-chip-...), I can see this happening to desktop computing as well. I'll probably still be able to compile all the software I want on my admittedly fringe GNU/Linux system (or let the Debian project compile it for me), but it won't matter much - any online interaction with the "real" part of the world, the part not made by and for software freedom enthusiasts/zealots, will refuse to talk to the non-allowlisted software builds on my machine.
It's going to be the future that NoTCPA et al. fought against in the early 00s, and I really do dread it.
> As I understand it, this sentence says that the application should be safe even if it was exposed to the public internet, not that it needs to be exposed.
It has been a slow but steady march in this direction for a while now [1]. Eventually we will also bind all network traffic to the individual human(s) responsible for it. 'Unlicensed' computers will be relics of the past.
> Verifiers SHOULD NOT impose other composition rules (mixtures of different character types, for example) on memorized secrets
The earliest draft in the Wayback Machine is dated June 2016. Lots of other good stuff in 800-63 dates back this far, too.
https://web.archive.org/web/20160624033024/https://pages.nis...
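The guidance boils down to: check length, check against a corpus of known-compromised passwords, and stop there. A minimal sketch of such a verifier (the thresholds are 800-63B's; the function and variable names are my own):

    MIN_LEN = 8    # 800-63B minimum for user-chosen memorized secrets
    MAX_LEN = 64   # the spec's floor for the maximum; real verifiers may allow more

    def acceptable(password: str, breached: set[str]) -> tuple[bool, str]:
        if not (MIN_LEN <= len(password) <= MAX_LEN):
            return False, "length out of range"
        if password in breached:  # e.g. a haveibeenpwned-style corpus
            return False, "found in compromised-password corpus"
        # Deliberately no character-class ("composition") rules, and no
        # scheduled expiry - both discouraged by the same document.
        return True, "ok"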
Nobody cares. It just gets postponed forever.
https://reproducible-builds.org/
Agreed that people should have the freedom to modify their software though.
https://www.theverge.com/2022/1/26/22903437/id-me-facial-rec...
https://developer.android.com/training/safetynet/attestation
https://stats.dnssec-tools.org/images/domains.svg
It's deployed on many more domains than MTA-STS.
Encrypted messaging has been a complete failure; there is no need to single out email. I suspect the reason is more or less the same in all cases: users have not been given a conceptual framework that would allow them to use the tools in a reasonable way. If the US federal government can come up with, and promote, such a framework, the world will become a different place.
BTW, the linked article is mostly based on misconceptions:
> If someone just puts up a fake domain that proxies everything between you and the server (a fake domain with HTTPS... one they social-engineered you onto)
Looks like FIDO2 2FA signs the challenge response together with the origin the browser actually connected to (= the phishing domain), so just relaying it to the original server will fail. Also, the attacker can't re-sign the challenge response after you, because signing requires the user's private key, which never leaves the authenticator; the server verifies the response against the public key stored during registration. So only the registered user can respond to the challenge, and only for the domain they are really on.
This leaves only two options for a phishing attack: 1) get a valid certificate for the original domain [1], or 2) force-downgrade the user to old-style TOTP [2].
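To make that concrete, here is a sketch of the server-side checks that make relaying fail, in Python and in WebAuthn terms - the origin and RP ID values are placeholders, and verifying the actual signature against the credential's registered public key is elided:

    import base64
    import hashlib
    import json

    EXPECTED_ORIGIN = "https://bank.example"  # placeholder RP origin
    EXPECTED_RP_ID = "bank.example"

    def b64url(s: str) -> bytes:
        return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

    def assertion_ok(client_data_json: bytes, authenticator_data: bytes,
                     issued_challenge: bytes) -> bool:
        cd = json.loads(client_data_json)
        if cd.get("type") != "webauthn.get":
            return False
        # The browser records the origin it actually loaded. A phishing
        # proxy's origin ends up here, so a relayed response won't match.
        if cd.get("origin") != EXPECTED_ORIGIN:
            return False
        if b64url(cd.get("challenge", "")) != issued_challenge:
            return False
        # First 32 bytes of authenticator data: SHA-256 of the RP ID the
        # credential is scoped to - enforced by the authenticator itself.
        return authenticator_data[:32] == hashlib.sha256(
            EXPECTED_RP_ID.encode()).digest()

The signature itself covers the authenticator data plus the hash of the client data, so the attacker can't swap in the real origin after the fact.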
You only get effective end-to-end encryption if you can verify that you are talking to who you think you are talking to. Otherwise the people running the system can cause your messages to take an unencrypted detour, and thus be able to read them - this is commonly called a man-in-the-middle attack. Verifying identities normally means checking some sort of long identity number, and very few people know how to do that effectively.
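That "long identity number" is typically a fingerprint derived from the public keys. A toy sketch of the idea (Signal's actual safety-number derivation differs in detail, but has the same shape):

    import hashlib

    def fingerprint(pubkey: bytes) -> str:
        # Hash the key and render a short, human-comparable digit string.
        digest = hashlib.sha256(pubkey).digest()
        digits = str(int.from_bytes(digest[:8], "big")).zfill(20)
        return " ".join(digits[i:i + 5] for i in range(0, 20, 5))

    # Both parties display fingerprint(my_key) and fingerprint(peer_key);
    # if the numbers disagree across the two phones, someone is in the
    # middle. The hard part is getting humans to actually compare them.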
For example: in a usability study involving Signal [1], 21 out of 28 computer science students failed to establish and maintain a secure end-to-end encrypted connection. The usability of end-to-end encrypted messaging is a serious issue. We should not kid ourselves into thinking it is a solved problem.
PGP is, in a sense, actually better here: it forces the user to confront the existence of a key, in a way that makes it intuitively obvious that it matters where that key came from.
[1] https://www.ndss-symposium.org/wp-content/uploads/2018/03/09...
[1] https://www.usenix.org/legacy/events/sec99/full_papers/whitt...
[1] https://people.eecs.berkeley.edu/~tygar/papers/Why_Johnny_Ca...