1. No more SMS and TOTP. FIDO2 tokens only.
2. No more unencrypted network traffic - including DNS, which is such a recent development and they're mandating it. Incredible.
3. Context aware authorization. So not just "can this user access this?" but attestation about device state! That's extremely cutting edge - almost no one does that today.
My hope is that this makes things more accessible. We do all of this today at my company, except where we can't - for example, a lot of our vendors don't offer FIDO2 2FA or WebAuthn, so we're stuck with TOTP.
They even call out the fact that it's a proven bad practice that leads to weaker passwords - and such policies must be gone from government systems within one year of the memo's publication. It's delightful.
Banks and media corporations are doing it today by requiring a vendor-sanctioned Android build/firmware image, attested and allowlisted by Google's SafetyNet (https://developers.google.com/android/reference/com/google/a...), and it will only get worse from here.
Remote attestation really is killing practical software freedom.
https://blog.trezor.io/why-you-should-never-use-google-authe...
Edit - sorry that this is really an ad for the writer's products. On the other hand, there's a hell of a bounty for proving them insecure / untrustworthy, whatever your feelings on "the other crypto".
Detecting changes — and enforcing escalation in that case — can be enough, e.g. "You always use Safari on macOS to connect to this restricted service, but now you are using Edge on Windows? Weird. Let's send an email to a relevant person / ask for an MFA confirmation or whatever."
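A minimal sketch of that escalation logic (the names and the shape of the context record are made up for illustration):

```typescript
// Hypothetical step-up check: compare the current login context against
// contexts we've seen before, and escalate rather than hard-block.
interface LoginContext {
  browser: string; // e.g. "Safari"
  os: string;      // e.g. "macOS"
}

function assessLogin(knownContexts: LoginContext[], current: LoginContext): "allow" | "step-up" {
  const familiar = knownContexts.some(
    (k) => k.browser === current.browser && k.os === current.os
  );
  // Unfamiliar combination: don't reject outright, but require an extra
  // MFA confirmation and/or notify a relevant person before proceeding.
  return familiar ? "allow" : "step-up";
}
```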
Secure attestation about device state requires something akin to Secure Boot (with a TPM), and in the context of a BYOD environment precludes the device owner from having full control of their own hardware. Obviously this is not an issue if the organization only permits access to its services from devices it owns, but no organization should have that level of control over devices owned by employees, vendors, customers, or anyone else who requires access to the organization's services.
Yes, but for government software this is a bog-standard approach. Not even "the source code is publicly viewable to everyone" is sufficient scrutiny to pass government security muster; specific code is what gets cleared, and modifications to that code must also be cleared.
It's even worse with texted codes, because they're inherently credible in the moment: the message knows something you feel it shouldn't, namely that you just got a 2FA code. You have to deeply understand how authentication systems work to catch why the message is suspicious.
You can't fix the problem with user education, because interacting with your application is almost always less than 1% of the mental energy your users spend doing their job, and they're simply not going to pay attention.
(Graphical keyboards are an old technique to try to defeat key loggers. A frequent side effect of a site using a graphical keyboard is that the developer has to make the password input field un-editable directly, which prevents password managers from working, unless you use a user script to make the field editable again.)
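For illustration, the userscript to undo that usually amounts to a couple of lines (the selector is an assumption; real sites vary):

```typescript
// Re-enable a password field the site made read-only to force its
// graphical keyboard, so a password manager can fill it again.
const field = document.querySelector<HTMLInputElement>('input[type="password"]');
if (field) {
  field.removeAttribute("readonly");
  field.disabled = false; // some sites use disabled instead of readonly
}
```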
I use LineageOS on my phone, and do not have Google Play Services installed. The phone only meaningfully interacts with a few of the most basic Google services, like an HTTP server for captive portal detection on Wifi networks, an NTP server for setting the clock, etc. All other "high-level" services that I am aware of, like Mail, Calendaring, Contacts, Phone, Instant Messaging, etc., are either provided by other parties that I feel more comfortable with, or are actually hosted by myself.
Now let's assume that I want or have to do online/mobile banking on my phone - that will generally only work with the proprietary app my bank provides. Even if I choose to install their unmodified APK, SafetyNet will not attest my LineageOS-powered phone as "kosher" (or "safe and secure", or "healthy", or whatever Google prefers calling it these days), and the app might refuse to work. As a consequence, I'm effectively unable to use the remote service provided by my bank, because they believe they've got to protect me from the OS/firmware build that I personally chose to use.
Sure, "just access their website via the browser, and do your banking on their website instead!", you might say, and you'd be right for now. But with remote attestation broadly available, what prevents anyone from also using that for the browser app on my phone, esp. since browser security is deemed so critical these days? I happen to use Firefox from F-Droid, and I doubt any hypothetical future SafetyNet attestation routine will have it pass with the same flying colors that Google's own Chrome from the Play Store would. I'm also certain that "Honest c0l0's Own Build of Firefox for Android" wouldn't get the SafetyNet seal of approval either, and with that I'd be effectively shut off from interacting with my bank account from my mobile phone altogether. The only option I'd have is to revert back to a "trusted", "healthy" phone with a manufacturer-provided bootloader, firmware image, and the mandatory selection of factory-installed, non-removable crapware that I am never going to use and/or (personally) trust that's probably exfiltrating my personal data to some unknown third parties, sanctified by some few hundreds of pages of EULA and "Privacy" Policy.
With app stores on all mainstream and commercially successful desktop OSes, the recent Windows 11 "security and safety"-related "advances" Microsoft introduced by (as of today, apparently still mildly) requiring TPM support, and supplying manufacturers with "secure enclave"-style add-on chips of their own design ("Pluton", see https://www.techradar.com/news/microsofts-new-security-chip-...), I can see this happening to desktop computing as well. Then I can probably still compile all the software I want on my admittedly fringe GNU/Linux system (or let the Debian project compile it for me), but it won't matter much - because any interaction with the "real" part of the world online that isn't made by and for software freedom enthusiasts/zealots will refuse to interact with the non-allowlisted software builds on my machine.
It's going to be the future NoTCPA et al. used to combat in the early 00s, and I really do dread it.
But maybe they didn't bother putting much more effort into better passwords because they really don't want those to stick around at all - and good for them. Password managers themselves are a band-aid on the fundamentally bad practice of using a symmetric factor for authentication.
It seems like the sensible rule of thumb is: If your organization needs that level of control, it's on your organization to provide the device.
> I think 3. is very harmful for actual, real-world use of Free Software. If only specific builds of software that are on a vendor-sanctioned allowlist, governed by the signature of a "trusted" party to grant them entry to said list, can meaningfully access networked services, all those who compile their own artifacts (even from completely identical source code) will be excluded from accessing that remote side/service.
Is that really a problem? In practice wouldn't it just mean you can only use employer-provided and certified devices? If they want to provide their employees some Free Software-based client system, that configuration would be on the whitelist.
SMS is bad due to MITM and SIM cloning. In the EU, many banks still use smsTAN, and it leads to lots of security breaches. It's frustrating that some don't offer any alternatives.
However, is FIDO2 better than chipTAN or similar? I like simple airgapped 2FAs, but I'm not an expert.
In the case of client attestation, this is how we get "Let Google/Apple/Microsoft handle that, and use what they produce."
And as an end state, it leads to a world where large, for-profit companies provide the only whitelisted solutions, because they have the largest user bases and offer a turn-key feature, and the market doesn't want to do additional custom work to support alternatives.
But I think the point of your parent comment's reply was that the inevitable adoption of this same technology in the consumer-level environment is a bad thing. Among other things, it will allow big tech companies to have a stronger grip on what software/platforms are OK to use/not use.
If your employer forces you to, say, only use a certain version of Windows as your OS in order to do your job, that's generally acceptable to most people.
But if your TV streaming provider tells you that you have to use a certain version of Windows to consume their product, that's not considered acceptable by a good deal of people.
It has been a long, slow but steady march in this direction for a while [1]. Eventually we will also bind all network traffic to the individual human(s) responsible. 'Unlicensed' computers will be relics of the past.
Banks aren't going to want to implement any changes that cost more (in system changes and customer support) than the fraud they prevent.
> Verifiers SHOULD NOT impose other composition rules (mixtures of different character types, for example) on memorized secrets
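A sketch of a verifier in that spirit - a length minimum and a compromised-password check, but no composition rules (the blocklist here is a stand-in for a real breached-credentials service):

```typescript
// Minimal 800-63B-style check: enforce a minimum length and reject
// known-compromised values, but impose no character-mixture rules.
const breachedPasswords = new Set(["password", "123456", "qwerty"]); // placeholder list

function isAcceptableSecret(secret: string): boolean {
  if (secret.length < 8) return false;             // minimum length, per the guideline
  if (breachedPasswords.has(secret)) return false; // reject known-compromised values
  return true; // notably absent: "must contain a digit and a symbol"
}
```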
Earliest draft in Wayback Machine, dated June 2016. Lots of other good stuff from 800-63 dates back this early too.
https://web.archive.org/web/20160624033024/https://pages.nis...
They are also already (weakly) limiting the max number of devices that can play back content, which requires some level of device identification, just not at the confidence required for authentication.
Some services already work like that - Discord, I think.
The former usually means something between nothing at all and “you can do it but you have to write paperwork that no one will actually read in detail, but someone will maybe check the existence of, if you do”.
The latter means “do it and you are noncompliant”.
Worse, this is supposedly for security, but attackers who have pulled off a privilege escalation tend to have enough ways to make sure that none of this detection finds them.
In the end it just makes sure you can't mess with your own credit card 2FA process by not allowing you to control the device you own.
From where I sit right now, I have within arm's reach my MacBook, a Win11 Thinkpad, half a dozen Raspberry Pis (including a 400), 2 iPhones (only one of which is jailbroken), an iPad (not jailbroken), a Pinebook, a PinePhone, and 4 Samsung phones - one on its stock, EOLed final Android 7 update and three rooted with various Lineage versions. I have way, way more devices running open-source OSen than unmolested Apple/Microsoft/Google(+Samsung)-provided software.
My unrooted iPhone is the only one of them I trust to have my banking app/creds on.
I’d be a bit pissed if Netflix took my money but didn’t run where I wanted it, but they might be already, I only ever really use it on my AppleTV and my iPad. I expect I’d be able to use it on my MacBook and thinkpad, but could be disappointed, I’d be a bit surprised if it ran on any of my other devices listed…
If you want to get a package that is in the Arch core/ repo, doesn't that require a form of attestation?
I just don’t see a slippery slope towards dropping support for unofficial clients, we’re already at the bottom where they are generally and actively rejected for various reasons.
Still, the Android case is admittedly disturbing, it feels a lot more personal to be forced to use certain OS builds; that goes beyond the scope of how I would define a client.
If you want to use free software, only connect to Affero GPL services, don't use nonfree services, and don't consume nonfree content.
No, it isn't. It's a way for corporations and governments to restrict what people can do with their devices. That makes sense if you're an employee of the corporation or the government, since organizations can reasonably expect to restrict what their employees can do with devices they use for work, and I would be fine with using a separate device for my work than for my personal computing (in fact that's what I do now). But many scenarios are not like that: for example, me connecting to my bank's website. It's not reasonable or realistic to expect that to be restricted to a limited set of pre-approved software.
The correct way to deal with untrusted software on the client is to just...not trust the software on the client. Which means you need to verify the user by some means that does not require trusting the software on the client. That is perfectly in line with the "zero trust" model advocated by this memo.
I think we'd do well to provide the option to use open protocols when possible, to avoid further entrenching the Apple/Google duopoly.
That's fine for employees doing work for their employers. It's not fine for personal computing on personal devices that have to be able to communicate with a wide variety of other computers belonging to a wide variety of others, ranging from organizations like banks to other individuals.
Suppose the course I've been studying for the past three years now uses $VideoService, but $VideoService uses remote attestation and gates the videos behind a retinal scan, ten distinct fingerprints, the last year's GPS history and the entire contents of my hard drive?¹ If I could spoof the traffic to $VideoService, I could get the video anyway, but every request is signed by the secure enclave. (I can't get the video off somebody else, because it uses the webcam to identify when a camera-like object is pointed at the screen. They can't bypass that, because of the remote attestation.)
If I don't have ten fingers, and I'm required to scan ten fingerprints to continue, and I can't send fake data because my computer has betrayed me, what recourse is there?
¹: exaggeration; no real-world company has quite these requirements, to my knowledge
Which will continue marching forward without pro-user legislation. Which is extraordinarily unlikely to happen, since the government has a vested interest in this development.
I'd feel 100% differently about this stuff if the NSA or some other cybersecurity gov arm making these rules used their massive cybersecurity budgets to provide free MFA, TLS, encrypted DNS, etc., whether US gov hosted or via non-profit (?) partners like LetsEncrypt.
OSS & free software otherwise has a huge vendor tax to actually get used. As is, this feels like economic insecurity & anti-competition via continued centralization to a small number of megavendors. Rules like this should come with money, and not to primes & incumbents, but utility providers.
Sure, our team is internally investing in building out a lot of this stuff, but we have security devs & experience, while the long tail of software folks use doesn't. The gov sets aside so much $$$$ for the perpetual cyber war going on, but not for simple universal basics here :(
"Log in by tapping Yes on my phone"
and
actually using a FIDO2 USB key.
By the time we implement any of these things, if ever, they certainly won't be. I work on military networks and applications, and it's hard for me to believe that I'll see any of this within my career at the pace we move. This is the land of web applications that only work with Internet Explorer, ActiveX, Silverlight, Flash, and Java applets, plus servers running Linux 2.6 or Windows Server 2012.
The idea of "Just-in-Time" access control where "a user is granted access to a resource only while she needs it, and that access is revoked when she is done" is terrifying when it takes weeks or months to get action on support tickets that I submit (where the action is simple, and I tee it up with a detailed description of whatever I need done).
There are specific contexts where you want to distribute information as widely as possible, and in those contexts it makes sense to allow any software version to access the information. But for contexts where security is important, that means verifying that the client software isn't compromised.
https://reproducible-builds.org/
Agreed that people should have the freedom to modify their software though.
The essential purpose of my comment was only to correct my parent on the date.
A remote system asked to promise it's what it says it is: the illusion of security.
Jailbreaking, DRM, etc are all evidence of this illusion.
To be clear, I don't have a better solution. But all the second-factor stuff is fundamentally broken when you are likely to need access to the service most.
The issue with TOTP is that it’s usually not rate limited. Just sit there guessing codes and you’ll eventually get in.
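The numbers back that up: a 6-digit code is one of only 10^6 values, so without throttling an attacker just needs enough requests. A sketch of the missing mitigation (verifyTotp stands in for a real TOTP library call):

```typescript
// Throttle TOTP attempts per account: a handful of guesses, then lock out.
const attempts = new Map<string, { count: number; windowStart: number }>();
const MAX_ATTEMPTS = 5;
const WINDOW_MS = 15 * 60 * 1000; // 15-minute window

function checkCode(
  userId: string,
  code: string,
  verifyTotp: (c: string) => boolean // stand-in for a real TOTP check
): boolean {
  const now = Date.now();
  const entry = attempts.get(userId);
  if (entry && now - entry.windowStart < WINDOW_MS) {
    if (entry.count >= MAX_ATTEMPTS) return false; // locked out: too many guesses
    entry.count += 1;
  } else {
    attempts.set(userId, { count: 1, windowStart: now });
  }
  return verifyTotp(code);
}
```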
When you use WebAuthn to sign into a site, the browser takes responsibility for determining which site you're on, cutting out the whole phishing problem of "humans don't know which site it is". The browser isn't reading that GIF that says "Real Bank Secure Login" at the top of the page, or the title "Real Bank - Authenticate", or the part of the URL bar that says "/cgi-bin/login/secure/realbank/" - it is looking only at the hostname it just verified for TLS, which says fakebank.example
So the browser tells your FIDO authenticator: OK, we're signing in to fakebank.example - and that's never going to successfully steal your Real Bank credentials, because the correct name is cryptographically necessary for the credentials to work. This is so effective that crooks aren't likely to even bother attacking it.
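Roughly what the browser-side call looks like (the challenge here is a placeholder for random bytes the real server would send):

```typescript
// The page asks for an assertion scoped to an rpId; the browser, not the
// page, enforces that the rpId matches the origin it verified via TLS.
async function signIn() {
  const challengeFromServer = new Uint8Array(32); // placeholder for server-sent bytes
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer,
      rpId: "realbank.example", // must match the domain the browser actually verified
      userVerification: "preferred",
    },
  });
  // On fakebank.example the browser refuses rpId "realbank.example" outright,
  // and any signature it could obtain is scoped to fakebank.example, which
  // the real bank will reject.
  return assertion;
}
```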
Ordinary users think the fact your phishing site accepted their TOTP code is actually reassuring. After all, if you were a fake site, how would you have known that was the correct TOTP code? So this must be the real site.
The only benefit TOTP has over passwords is that an attacker needs to use the code immediately. But they can fully automate that process, so this only very slightly raises the barrier to entry - a smart but bored teenager can definitely do it, or just anybody who can Google for the tools.
Worse, TOTP involves a shared secret, so bad guys can steal it without you knowing. They probably won't steal it from your bank because the bank has at least some attempt at security, but a lot of other businesses you deal with aren't making much effort, and so your TOTP secret (not just the temporal codes) can be stolen, whereupon all users of that site relying on TOTP are 100% screwed.
Notice that WebAuthn still isn't damaged if you steal the auth data, Google could literally publish the WebAuthn authentication details (public key and identifier) for their employees on a site or paint them on a huge mural or something and not even make a material difference to their security - which is why this Memo says to do WebAuthn.
The point of these restrictions is to ensure that your device isn't unusually vulnerable to privilege escalation in the first place. If you let them, some users will root their phone, disable all protections, install a malware-filled Fortnite APK from a random website, then stick their credit card company with the bill for fraud when their user-mangled system fails to secure their secrets.
You want to mod the shit out of your Android phone? Go ahead. Just don't expect other companies to deal with your shit, they're not obligated to deal with whatever insecure garbage you turn your phone into.
Everything you said couldn't be further from the truth.
Luckily, it's a dwindling power and Europe fights and penalizes large organizations breaching market "morals".
I'd also hope that businesses care about more than 80% of attacks, preferably they should care about 100% of attacks. Hence, pre-approved software restrictions.
And this has happened before, with Intel ME that was and still is useful if you have a fleet of servers to manage but a hell of a security hole outside of corporate world.
And now that Windows 11 all but requires a working TPM to install (although there are ways to bypass it for now), I would not be surprised if Netflix and the rest of the content MAFIAA were to follow their Android approach and demand that the user have Secure Boot enabled, only Microsoft-certified kernel drivers loaded, and the decryption running in an OS-secured sandbox that even a Local Administrator-level account cannot access.
I don't want the consular officials to be unable to authenticate me in a foreign country because I lost my phone, or for my bank to be unable to release funds because I don't have their card or my Security Key, but I feel 100% OK with losing access to Gmail or Hacker News, or whatever for say a few days until I can secure replacement credentials.
But attestation can mean a lot of things and isn't inherently in conflict with free software. For example, at my company we validate that laptops follow our corporate policy, which includes a default-deny app installation policy. Free software would only, in theory, need a digital signature so that we could add that to our allowlist.
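In the simplest form that's just a digest check - here's a sketch (the digest is a placeholder; a real deployment would use signed metadata from an MDM/EDR agent):

```typescript
// Hash-based allowlisting: compare a binary's digest against an approved list.
// This works equally well for free software you built yourself, as long as
// the reviewed build is the one on the list.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const approvedSha256 = new Set([
  "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", // placeholder digest
]);

function isApproved(path: string): boolean {
  const digest = createHash("sha256").update(readFileSync(path)).digest("hex");
  return approvedSha256.has(digest);
}
```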
It's just subtle enough (e.g. lower definition but will still play) and most people use "secure" enough setups that only techies, media gurus, or that one guy who's still using a VGA monitor connection end up noticing
The computers in any sizable business already have pre-approved restrictions set at the OS level. Employees can't just install any software.
Valve has taken a less heavy-handed approach and let users have more freedom over their client and UI, but they also have a massive bot problem in titles like TF2.
I can’t connect to my work network from a random client, and it will throw flags and eventually block me if I connect with an out-of-date OS version.
I can’t present any piece of paper with my banking data and a signature on it and expect other parties to accept it. I have to present it on an authorized document.
I guess money may be the common denominator here.
I worked at a place that only allowed "verified" software before and it's an ongoing battle to keep that list updated. Things like digital signatures can be pretty reliable but if you're version pinning you can make it extremely difficult to quickly adopt patched versions when a vulnerability comes out.
My vanilla LineageOS install fails, but I can root with Magisk, enable Zygisk to inject code into Android, edit build properties, add a SafetyNet fix, and now my device is good to go?
It's crazy to think the workaround is "enable arbitrary code injection" (Zygisk)
You just described the usage pattern of a pilot with a family, a truck driver, a seaman, etc.
It’s only unusual if your definition of usual is “relatively rich, computer power user”.
I travelled a lot for work, and never had issues with account access. Nor did my wife ever have issues related to accounts. We don't share Google accounts though. It sounds like that user has personal accounts being used by three people for business use... Which isn't "A seaman and his family".
Google's login protection mechanisms seem to be satisfied by TOTP usage, and you won't be locked out anymore (or at least much less likely to be).
I too have no faith in seeing this stuff implemented anytime soon...
[1] (Authority to Operate, basically approval from the highest IT authorities to utilize something on a DoD network)
This depends on how far down the rabbit hole you want to go. If it were Secure Boot, where only signed processes can run, would that make you feel better? If it doesn't... what would?
Or a more recent example: my father forgot to bring his Android phone back abroad, which subsequently locked him out of his account/services; I had to wipe it for him to get his access back.
https://developer.android.com/training/safetynet/attestation
edit: The source of my claim that governments tend to extend surveillance is pretty well documented I believe. So much so that I believe it is worthy to insert the problem into debates about anything relating to security. Because security often serves as the raison d'être for such ambitions.
Yes. Everyone having their own distinct accounts is a property of high computer literacy in the family.
Many of my older extended family members have a single email account shared by a husband and wife. Or in one case the way to email my aunt is to send an email to an account operated by a daughter in a different town. Aunt and daughter are both signed in so the daughter can help with attachments or “emails that go missing”, etc.
> Which isn't "A seaman and his family".
The seaman in this scenario has a smartphone with the email signed in. It's also signed in on the family computer at home. Both the wife and him send email from it. Maybe a kid does too, from a tablet. This isn't that difficult.
As for lost luggage, I carry mine on my keychain, another one in the laptop itself (USB-C Yubikey) and one in my safe at home - if all three are ever destroyed or lost I also have backup codes available as password protected notes on several devices.
You need a bank account to do basically anything, and yet consumer banking is largely unregulated (in the consumer-relations sense; they are regulated on the economic side, of course). Payments take upwards of 24h and only process during work hours (?!?), there are no "easy switch" requirements, mobile apps use shit like SafetyNet, and I've had banks legit tell me "just buy a phone from this list of manufacturers"... PSD2 is trash that only covers B2B interoperability and mandates a security method that has been known to be broken since its invention (SMS 2FA).
> they're not obligated to deal with whatever insecure garbage you turn your phone into
Banks probably should be obligated to let you connect over standard protocols.
If you don't own your own device and rely on third-party devices to access the service, good luck to you...
As usual with "persona" scenarios, people create their own unrealistic scenario (just like when talking about UX or design). The personas you are describing will probably fall back to low-tech methods in most cases; they won't fail to take a plane because Gmail locked them out due to unusual activity while they're trying to show the ticket QR at the airport. They will just print it (or have someone print it for them) beforehand.
> The seaman in this scenario has a smartphone with the email signed in. It's also signed in on the family computer at home. Both the wife and him send email from it. Maybe a kid does too, from a tablet. This isn't that difficult.
You just forgot to add that they use their shared email to communicate with each other via the "Sent" folder. To be more realistic, the seaman, right after buying his Android phone, will create a new Google account without realizing it, because he probably doesn't know that he could use the email account he is already using at home. But enough with made-up examples to prove our own points.
Depends what you think big corporations' centrally managed IT equipment is like.
Theoretically, it could mean you get precisely the right tools to do your job, with the ideal maintenance and configuration provided effortlessly.
But for some organisations, it means mandatory Internet Explorer and Flash for compatibility with some decrepit intranet, crapware like McAfee that slows the system to a crawl, baffling policies like not letting you use an adblocker, and regular slow-to-install but unavoidable updates that always happen just as you're giving that big presentation.
> They will just print it (or have someone print it for them) beforehand.
Yes, they will do that precisely because they do not trust technology to work for them, because it frequently does not! I have family members like this. I log in to their accounts on my devices for various reasons. Even worse, I run Linux. We run into these problems frequently. Spend time helping technically illiterate people with things. While doing so, make a concerted effort to understand why they say or think some of the things that they do.
Edit to add, I find it amusing that you make fun of his seaman example. Almost that exact scenario (in terms of number of devices, shared devices, and locations) is currently the case for two of my relatives. Two! And yet you ridicule it.
Currently, you'd have to find an unlocked phone, hope there is a downloadable factory image, re-flash, re-lock, and re-install to run whatever needs attestation. Potentially, using something like Android's DSU feature, this could all be a click or two, and you could be back running Lineage after a restart.
What if someone just puts up a fake domain that proxies everything between you and the server (a fake domain with HTTPS... which they social-engineered you into visiting)?
It looks like FIDO2 2FA signs the challenge response together with the domain the browser actually sees locally (= the phishing domain), so just passing the response on to the original server will fail. Also, the attacker can't just re-sign the challenge response after you, because only the registered user's authenticator holds the private key matching the public key stored at registration.
This leaves only 2 options for a phishing attack: 1) get a valid certificate for the original domain [1], or 2) force-downgrade the user to old TOTP [2].
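The server-side origin check is the part that defeats the proxy. A simplified sketch (field names follow the WebAuthn spec; real verification also checks the signature and authenticator data):

```typescript
// The authenticator signs over clientDataJSON, which records the origin the
// browser actually saw - a proxy on a phishing domain can't forge this.
function verifyClientData(clientDataJSON: ArrayBuffer, expectedChallenge: string): boolean {
  const clientData = JSON.parse(new TextDecoder().decode(clientDataJSON));
  if (clientData.type !== "webauthn.get") return false;
  if (clientData.origin !== "https://realbank.example") return false; // proxy origin mismatch
  if (clientData.challenge !== expectedChallenge) return false;       // replay protection
  return true; // only now would the signature over this data be checked
}
```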
E.g. with a credit card.
Due to the way it integrates into websites (or, more specifically, doesn't), classical approaches like SMS 2FA (insecure anyway), but also TOTP or FIDO2, do not work.
Instead, a notification is sent to a preconfigured app where you then confirm it.
Furthermore, as the app and the payment might be on the same device, the app uses the fingerprint reader (and probably some Google TPM/secrets API, I don't know).
Theoretically other approaches should work, but in practice they tend to not work reliably or at all in most situations.
Technically, web-based solutions could be possible by combining a FIDO stick with browser-based push notifications; in practice banks don't bother, or there are legal annoyances.
> I think we'd do well to provide the option to use open protocols when possible.
Of course, the PR copy just writes itself, doesn't it? AD administrators, Apple and Google, banks, and everyone else can benefit from context-aware authorization. If your phone is stolen or "compromised", you want immediate Peace of Mind.
Even if it's just misplaced, having that kind of flexibility is just great.
This is already the case with Netflix -- 4k video content cannot be played on Linux.
I much appreciate the suggestion!
1) The requirements themselves. These are different for consumer vs. employee scenarios. So in general, I'd prefer we err on the side of DRM-free for things like media, but there are legitimate concerns around things like data privacy when you are an employee of an organization handling sensitive data.
2) Presuming there are legitimate reasons to have strong validation of the user and untampered software, we have the choice of A) using only organization-supplied hardware in those cases or B) using your own with some kind of restriction. I'd much prefer to use my own as much as possible ... if I can be assured that it won't spy on me, or limit what I can do, for the non-organization-specific purposes I've explicitly opted in to enable.
> I'm uncomfortable letting organisations have control over the software that runs on my hardware.
I'm not, if we can sandbox. I'm fine with organizations running javascript in my browser for instance. Or running mobile apps that can access certain data with explicit permissions (like granting access to my photos so that I can share them in-app). I think we can do better with both more granular permissions, better UX, and cryptographic guarantees to both the user and the organization that both the computation and data is operating at the agreed level.
It fails to do so in many ways, including not blocking old, no-longer-maintained, known-to-be-vulnerable Android releases.
It also has little to do with modding, and more to do with having a properly working free market which allows alternatives besides Google and Apple.
Also, this is about the second factor in 2FA, not online banking.
Which you can do on a completely messed-up computer.
I'm also not asking to be able to pay contactless with a degoogled Android phone.
Similarly, I'm not asking to go without 2FA; you can use stuff like a FIDO stick with your phone.
Most of these "security" features are often about banks pretending to have proper 2FA without a second device... (and then applying them to other apps they produce, too).
Any system can have malware. That's not the point. To repeat my point again: client restrictions are about making sure user devices are not unusually vulnerable to malware. For example, any Windows device may be infected with malware, but if you're still running Windows XP you're vulnerable to a much larger variety of known malware and more severe exploits. Hence businesses will want to support only modern versions of e.g. Chrome, which itself will require modern versions of operating systems.
Android will block non-Play-Store app installations by default, and root is required for lower level access/capabilities that can bypass the normal sandbox.
I'm honestly not sure what you're saying about 2FA in the rest of your comment, it's kind of vague and there are some possible typos/grammar issues that confuse me. What exactly are you referring to when you say "pretending to have proper 2FA"?
Presumably (hopefully) these are corporate-owned devices, with a policy like that. Remote attestation is fine if it's controlled by the device's owner, and you can certainly run free software on such a device, if that particular build of the software has been "blessed" by the corporation. However, the user doesn't get the freedoms which are supposed to come with free software; in particular, they can't build and run a modified version without first obtaining someone else's approval. At the very least it suggests a certain lack of respect for your employees to lock down the tools they are required to use for their job to this extent.
I'm not asking to use a 10 year old version of android that no modern browsers support any more and is missing many security features.
No, you basically have to click OK once (or change a setting, depending on the phone); either way it doesn't require root, and it doesn't really change the attack scenario, as it's based on someone intentionally installing an app from an arbitrary, untrusted source.
> root is required
Yeah, like privilege escalation attacks. As you will likely find in many compromised apps. And which work on many Android phones because vendors stop providing updates after some time. And for many other reasons.
> What exactly are you referring to when you say "pretending to have proper 2FA"?
EU law says they need to provide 2FA for online banking.
Banks often don't do that for banking apps, as it's inconvenient. Instead they "split the banking app in two parts", maybe throw in some fingerprint-based auth mechanism, and claim they have proper 2FA. (Because it's two app processes running and it requires the fingerprint.) Though security researchers have repeatedly shown that it's not a good idea.
Additionally, they then require you to use only your fingerprint, not an additional password...
Either way, the point is that secure online banking doesn't require locked-down devices in general.
Checking for a too-old and vulnerable OS is where you start.
And then you can consider maybe also blocking other stuff.
There is nothing inherently less secure about a rooted device.
Sure you can make it less secure if you install bad software, but you can also make it more secure.
Or you just need to lower the minimal screen brightness for accessibility reasons.
You're claiming it's OK to take agency away from people to decide over a major part of their lives (which, sadly, phones are today) because maybe they could act irresponsibly and do something stupid.
But if we say that is OK, then we first need to ban cars, because you could drive one into a wall; and knives; and there's no way you can have a bathtub, since you could drown yourself in it.
And yes, that is sarcastic, but there is a big difference between something being "inherently insecure" (driving without a seatbelt) and something that by default is in no way less secure as long as you don't actively go out of your way to make it less secure (e.g. by disabling security protections).
And for many of the SaaS that we use, TOTP doesn't help you avoid the security lock outs.
I hold the reverse view. The only security token I'd trust is one where the only thing that isn't open is the private keys the device generates when you press the reset button. The rest, meaning everything from the CPU up (say, RISC-V) and the firmware, must be open to inspection by anybody. In fact, it should also be easy to peel away the silicon protection so you can see everything bar the cells storing the private keys. The other non-negotiable is that the thing that computes and transmits the "measures" of the system being attested to (including its own firmware) cannot be changed - meaning no stinking "security" patches are allowed at that level. If it's found broken, throw it away, as the attestation is useless.
The attestation then becomes: the device you hold is a faithful rendering/compilation of open source design document X by open source compiler Y. And I can prove that myself, by building X using Y and verifying that the end result looks like the device I hold. This process is also known as reproducible builds.
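The verification step is mechanical - build, hash, compare (the paths here are hypothetical):

```typescript
// Reproducible-build check: hash my own build of design X (compiled with Y)
// against the published image the device claims to run.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

function sha256(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

const mine = sha256("./my-build/firmware.bin");       // built locally from X with Y
const published = sha256("./published/firmware.bin"); // what the device attests to
console.log(mine === published ? "build reproduces" : "MISMATCH - investigate");
```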
What we have now (e.g., YubiKeys) is not that. Therefore I have to trust Yubi Corp. To see why that's a problem, see the title of this story. It has the words "Zero-Trust" in it.
In reality of course there is no such thing as "Zero-Trust". I will never be able to verify everything myself, ergo I have to trust something. The point is there is a world of difference between trusting an opaque black box like Yubi Corp, and trusting an open source reproducible build, where a cast of random thousands can crawl over it and say, "it seems OK to me". In reality it's not the ones that say "it seems OK" you are trusting. You are trusting the mass media (places like this in other words), to pick up and amplify the one voice among millions that says "I've found a bug - and because it's open I can prove it" so everyone hears it.
So to me it looks to be the reverse of what you say. Remote attestation won't kill software freedom. Remote attestation, done in a way that we can trust, must be built using open source. Anything less simply won’t work.
No one wants to reproduce an attestation. If you could, it could be copied, and if you can copy an attestation, any hardware could send it to prove it was something else - something the other end trusts - rendering it useless for its intended purpose.
However, the attestation is attesting the hardware you are running on is indeed "reproduced", as in it is a reliable copy of something the other end trusts. It could be a device from Yubi Key and in effect you are trusting Yubi Corp's word on the matter. Or, it could be an open source design everybody can inspect, reproducibly rendered in hardware and firmware. Personally, I think trusting the former is madness, as is trusting the latter without a reproducible build.
I guess you also think Android/iOS should just get rid of app permissions because users could just use similar software on their desktops without any permissions gating?
Edit: Android/iOS are increasingly popular platforms, the security they pioneer far exceeds their desktop predecessors and has improved the average security posture of millions of mobile-focused users.
Good security is layered. Just because privilege escalation attacks are sometimes possible without root doesn't mean you throw open the floodgates and ignore the threat of root. The point of banning rooted devices is that privilege escalation attacks are much easier in rooted devices.
Of course online banking doesn't require locked-down devices, but online banking is more secure on locked-down devices. I don't see why banks should weaken their security posture on root just because they aren't perfect in other areas.
The motivation is not "just" that, or for fun, the motivation is that users should be allowed to control their own devices. And have them keep working.
> I guess you also think Android/iOS should just get rid of app permissions because users could just use similar software on their desktops without any permissions gating?
I want it to work... exactly like app permissions. Where if I root it, I can override things.
> Android/iOS are increasingly popular platforms, the security they pioneer far exceeds their desktop predecessors and has improved the average security posture of millions of mobile-focused users
Having that kind of sysadmin lockdown is useful, but if I want to be my own sysadmin I shouldn't be blacklisted by banks.
This is clearly wrong; rooted devices are much more insecure because they enable low-level access to maliciously alter the system. Malware often requires root and will first attempt to attain root, which of course isn't necessary if a user has manually unlocked root themselves.
> You're claiming it's OK to take agency away from people to decide over a major part of their lives (which, sadly, phones are today) because maybe they could act irresponsibly and do something stupid.
No one is taking away any user's agency. Users are free to root their phones if they wish (many Android phones at least will allow it), but companies are also free to deny these users service. Users are free to avail themselves of any company's service on a non-rooted phone. "Not using rooted phones to access anything you like" is hardly a major loss of agency.
Phone insecurity is very dangerous IMO, much more dangerous really than bathtubs or perhaps knives. You could argue that vehicles are similarly very dangerous and I'd agree. I don't think we're very far off from locked down self-driving cars. Unfortunately we're not there yet with self-driving tech and the current utility of vehicles still outweighs their immense safety risks. You can't really say that about rooted phones. The legitimate benefits of a rooted phone are largely relevant to developers, not the average user, and most users never attempt to tinker with their phone.
If you can't proceed with a normal life after you root your phone, you are NOT free to do so; instead you get punished for doing so.
> If you can't proceed with a normal life after you root your phone, you are NOT free to do so; instead you get punished for doing so.
Freedom to root doesn't mean freedom from the consequences of rooting. Banking apps are hardly necessary for a normal life, and neither is rooting.
The hole in this reasoning is that you don't need the app; you can just sign into the bank's website from the mobile browser, and get all the same functionality you'd get from the app. (Maybe you don't get a few things, like mobile check deposits, since they just don't build features like that into websites for the most part.) The experience will sometimes be worse than that of the app, but you can still do all the potentially-dangerous things without it. So why bother locking down the app when the web browser can do all the same things?
> I’d be a bit pissed if Netflix took my money but didn’t run where I wanted it
I actually canceled my HBO Max account when, during the HBO Now -> HBO Max transition, they somehow broke playback on Linux desktop browsers. When I wrote in to support, they claimed it was never supported, so they weren't obligated to care. I canceled on the spot.