1. No more SMS and TOTP. FIDO2 tokens only.
2. No more unencrypted network traffic - including DNS, which is such a recent development and they're mandating it. Incredible.
3. Context aware authorization. So not just "can this user access this?" but attestation about device state! That's extremely cutting edge - almost no one does that today.
My hope is that this makes things more accessible. We do all of this today at my company, except where we can't - for example, a lot of our vendors don't offer FIDO2 2FA or webauthn, so we're stuck with TOTP.
Isn’t exposing your internal domains and systems outside VPN-gated access a risk? My understanding is this means internaltool.faang.com should now be publicly accessible.
Think about it this way: In the context of ransomware attacks, a lot of times it's game over once an internal agent is compromised. The premise of zero trust is that once an attacker is "inside the wall", they gain basically nothing. Compromising one service or host would mean having no avenue for escalation from there.
I wouldn't say it's objectively better (maybe by the time I retire I can make a call on that), but it's a valid strategy. Certainly better than relying on perimeter security like a VPN alone, rather than treating the perimeter as just one layer of DiD, though.
[1] Perimeter-oriented security thinking is probably the #1 enabler for ransomware and lateral movement of attackers in general.
It's simply amazing.
They even call out the fact that it's a proven bad practice that leads to weaker passwords - and such policies must be gone from government systems within 1 year of the memo's publication. It's delightful.
Btw - I'd love to see the people who put this memo together re-evaluate the ID.me system they're implementing for citizens given how poor the identity verification is.
Where does it say they should go away?
It's a very frustrating situation. Worst of both worlds.
0: trade only works if the sum of your trust in the legal system, intermediaries, and counterparties reaches some threshold. The same is true of any interaction where the payoff is not immediate and assured, from taxes to marriage and friendship, and, no, it is not possible to eliminate it, nor would that be a society you'd want to live in. The only systems that do not rely on some trust that the other person isn't going to kill them are maximum-security prisons and the US president's security bubble. Both are asymmetric and still require trust in some people, just not all.
This screams "we'll use more post-it notes for our passwords compared to before", or maybe the real world to which this memo is addressed is different from the real (work-related) world I know.
First, in the days before mobile bank-id, they sent windows-only hardware as I recall. Then came the days of letters/cards/hardware getting lost in the mail.
I gave up on it in the end. I have multiple things (banking-wise) I no longer have online access to because of it.
If you're going to make one system to rule them all you need to make sure the logistics actually work.
Banks and media corporations are doing it today by requiring a vendor-sanctioned Android build/firmware image, attested and allowlisted by Google's SafetyNet (https://developers.google.com/android/reference/com/google/a...), and it will only get worse from here.
Remote attestation really is killing practical software freedom.
https://blog.trezor.io/why-you-should-never-use-google-authe...
Edit - sorry that this is really an ad for the writer's products. On the other hand, there's a hell of a bounty for proving them insecure / untrustworthy, whatever your feelings on "the other crypto".
Detecting changes — and enforcing escalation in that case — can be enough, e.g. "You always use Safari on macOS to connect to this restricted service, but now you are using Edge on Windows? Weird. Let's send an email to a relevant person / ask for an MFA confirmation or whatever."
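A rough sketch of that escalation check (the context store and the step-up action are illustrative, not any particular product's API):

    # hypothetical store of (os, browser) pairs previously seen per user
    KNOWN_CONTEXTS: dict[str, set[tuple[str, str]]] = {}

    def authorize(user_id: str, os_name: str, browser: str) -> str:
        # allow known contexts; escalate (rather than block) on anything new
        seen = KNOWN_CONTEXTS.setdefault(user_id, set())
        if (os_name, browser) in seen:
            return "allow"
        # unfamiliar combination: notify someone / demand fresh MFA, and only
        # record the new context once that challenge has been passed
        return "require_mfa"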
The reason this was so "easy" for Google (and some other companies, like GitLab[2]) to realize most of these goals is that they are web-based technology companies - fundamentally, the tooling and scalable systems needed to get started were already web-based, so the transition was nearly "free". Meaning, most of the internal apps were HTTP apps, built on internal systems, and the initial investment was just to take an existing proxied internal service and put it externally behind a context-aware proxy[1].
The hard part for most other companies (and the DoD) is figuring out what to do with protocols and workflows that aren't http or otherwise proxyable.
[1] https://cloud.google.com/iap/docs/cloud-iap-context-aware-ac...
[2] https://about.gitlab.com/blog/2019/10/02/zero-trust-at-gitla...
Oh, and if you choose to not participate in this system, enjoy trying to find out the results of your covid test :-) (I ended up getting a Buypass card, but they officially support only Windows and macOS.)
"Actions … 4. Agencies must identify at least one internal-facing FISMA Moderate application and make it fully operational and accessible over the public internet."
Words matter. If nothing else, laypersons hear these terms and shape their understanding based on what they sound like.
Secure attestation about device state requires something akin to Secure Boot (with a TPM), and in the context of a BYOD environment precludes the device owner having full control of their own hardware. Obviously this is not an issue if the organization only permits access to its services from devices it owns, but no organization should have that level of control over devices owned by employees, vendors, customers, or anyone else who requires access to the organization's services.
Yes, but for government software this is a bog-standard approach. Not even "the source code is publicly viewable to everyone" is sufficient scrutiny to pass government security muster; specific code is what gets cleared, and modifications to that code must also be cleared.
It's even worse with texted codes because it's inherently credible in the moment: the message knows something you feel it shouldn't - that you just got a 2FA code. You have to deeply understand how authentication systems work to catch why the message is suspicious.
You can't fix the problem with user education, because interacting with your application is almost always less than 1% of the mental energy your users spend doing their job, and they're simply not going to pay attention.
(Graphical keyboards are an old technique to try to defeat key loggers. A frequent side effect of a site using a graphical keyboard is that the developer has to make the password input field un-editable directly, which prevents password managers from working, unless you use a user script to make the field editable again.)
The right way to set this stuff up is to have a strong modern VPN (preferably using WireGuard, because the implementations of every other VPN protocol are pretty unsafe) with SSO integration, and to have the applications exposed by that VPN also integrate with your SSO. Your users are generally on the VPN all day, and they're logging in to individual applications or SSH servers via Okta or Google.
"RIP VPNs" is not a great take.
I'm only half joking.
It's really just a matter of changing gears - you carry a physical key to your house, car, and your online life. You lose the key, you have to go through a bit of pain to get a new one.
But establishing that norm is beyond the purview of anyone it seems.
Perhaps one of those advanced Nordic countries will have the wherewithal; Estonia seems to be ahead of all of us, but we don't pay attention.
But this doc looks good.
It was expedient, but banks are not the orgs that should be running that.
Every nation needs to turn its driver's license and passport authorities into a 'Ministry of Identity' and issue fobs and passwords that can be used on the basis of some standard. Or something like that, maybe quasi-distributed.
I use LineageOS on my phone, and do not have Google Play Services installed. The phone only meaningfully interacts with a very few and most basic Google services, like an HTTP server for captive portal detection on Wifi networks, an NTP server for setting the clock, etc. All other "high-level" services that I am aware of, like Mail, Calendaring, Contacts, Phone, Instant Messaging, etc., are either provided by other parties that I feel more comfortable with, or that I actually host myself.
Now let's assume that I would want or have to do online/mobile banking on my phone - that will generally only work with the proprietary app my bank provides me with. Even if I choose to install their unmodified APK, SafetyNet will not attest my LineageOS-powered phone as "kosher" (or "safe and secure", or "healthy", or whatever Google prefers calling it these days), and the app might refuse to work. As a consequence, I'm effectively unable to interact with the remote service provided by my bank, because they believe they've got to protect me from the OS/firmware build that I personally chose to use.
Sure, "just access their website via the browser, and do your banking on their website instead!", you might say, and you'd be right for now. But with remote attestation broadly available, what prevents anyone from also using that for the browser app on my phone, esp. since browser security is deemed so critical these days? I happen to use Firefox from F-Droid, and I doubt any hypothetical future SafetyNet attestation routine will have it pass with the same flying colors that Google's own Chrome from the Play Store would. I'm also certain that "Honest c0l0's Own Build of Firefox for Android" wouldn't get the SafetyNet seal of approval either, and with that I'd be effectively shut off from interacting with my bank account from my mobile phone altogether. The only option I'd have is to revert back to a "trusted", "healthy" phone with a manufacturer-provided bootloader, firmware image, and the mandatory selection of factory-installed, non-removable crapware that I am never going to use and/or (personally) trust that's probably exfiltrating my personal data to some unknown third parties, sanctified by some few hundreds of pages of EULA and "Privacy" Policy.
With app stores on all mainstream and commercially successful desktop OSes, the recent Windows 11 "security and safety"-related "advances" Microsoft introduced by (as of today, apparently still mildly) requiring TPM support, and supplying manufacturers with "secure enclave"-style add-on chips of their own design ("Pluton", see https://www.techradar.com/news/microsofts-new-security-chip-...), I can see this happening to desktop computing as well. Then I can probably still compile all the software I want on my admittedly fringe GNU/Linux system (or let the Debian project compile it for me), but it won't matter much - because any interaction with the "real" part of the world online that isn't made by and for software freedom enthusiasts/zealots will refuse to interact with the non-allowlisted software builds on my machine.
It's going to be the future NoTCPA et al. used to combat in the early 00s, and I really do dread it.
1) No verification that the user trusts that particular bank to perform this service. Most banks just deployed BankID for all their customers.
2) No verification between bank and government ensuring that particular person can be represented by particular bank. In principle a bank could impersonate a person even if that person has no legal relationship with that bank.
3) Bank authentication is generally bad. Either login+SMS, or proprietary smartphone applications. No FIDO U2F or any token based systems.
Fortunately, there are also alternatives for identification to government services:
1) Government ID card with smartcard chip. But not everyone has a new version of ID card (old version does not have chip). It also requires separate hardware (smartcard reader) and some software middleware.
2) MojeID service (mojeid.cz) that uses FIDO U2F token.
Disclaimer: working for CZ.NIC org that also offers MojeID service.
This memo in particular emphasizes the existing guidance the US government has issued around not expiring passwords. If you are a federal agency, you can have (and are in fact encouraged to have!) users with passwords that are unchanged for years.
Edit: it's worth pointing out that the memo does a great job of laying this out. I work in security, so possibly there's some curse of knowledge at play, but I found the blog post explainer to be less clear than the memo it is explaining...
But maybe they didn't bother giving much more effort to better passwords because they really don't want those to stick around at all and good for them. Password managers themselves are a bandaid on the fundamentally bad practice of using a symmetric factor for authentication.
I know tech operates on different definitions/circumstances here. That’s why the word ”zero” is so wrong here, because it seems to go out of its way to make the claim that less trust is always better.
Call it “zero misplaced trust” or “my database doesn’t want your lolly”, whatever.
It seems like the sensible rule of thumb is: If your organization needs that level of control, it's on your organization to provide the device.
For context, Impossible Travel is typically defined as an absolute minimum travel time between two points based on the geographical distance between them, with the points themselves being derived from event-associated IPs via geolocation.
The idea is that if a pair of events breaches that minimum travel time by some threshold, it's a sign of credential compromise; it's effective for mitigating active session theft, for example, as any out-of-region access would violate the aforementioned minimum travel time between locations and produce a detectable anomaly.
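A minimal sketch of the check (geolocation lookup omitted; the speed threshold is a made-up tunable):

    from math import asin, cos, radians, sin, sqrt

    MAX_PLAUSIBLE_KMH = 900  # roughly airliner cruise speed; tune to taste

    def haversine_km(lat1, lon1, lat2, lon2):
        # great-circle distance in km between two (lat, lon) points
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def is_impossible_travel(evt_a, evt_b):
        # each event is (lat, lon, unix_ts), coordinates from IP geolocation
        hours = abs(evt_b[2] - evt_a[2]) / 3600
        km = haversine_km(evt_a[0], evt_a[1], evt_b[0], evt_b[1])
        return km > MAX_PLAUSIBLE_KMH * hours  # faster than a plane: flag it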
> I think 3. is very harmful for actual, real-world use of Free Software. If only specific builds of software that are on a vendor-sanctioned allowlist, governed by the signature of a "trusted" party to grant them entry to said list, can meaningfully access networked services, all those who compile their own artifacts (even from completely identical source code) will be excluded from accessing that remote side/service.
Is that really a problem? In practice wouldn't it just mean you can only use employer-provided and certified devices? If they want to provide their employees some Free Software-based client system, that configuration would be on the whitelist.
> “discontinue support for protocols that register phone numbers for SMS or voice calls, supply one-time codes, or receive push notifications."
... necessarily means TOTP.
Could be argued "supply" means code-over-the-wire, so all 3 being things with a threat of MITM or interception: SMS, calls, "supply" of codes, or push. Taken that way, all three fail the "something I have" check. So arguably one could take "supply one-time codes" to rule out both what HSBC does and what Apple does (pushing a one-time code displayed together with a map to a different device - but sometimes the same device).
I'd argue TOTP is more akin to an open, software-based hardware token: after initial delivery it works entirely offline, and it passes the "something I have" check.
SMS is bad due to MITM and SIM cloning. In the EU, many banks still use smsTAN, and it leads to lots of security breaches. It's frustrating that some don't offer any alternatives.
However, is FIDO2 better than chipTAN or similar? I like simple airgapped 2FAs, but I'm not an expert.
> As I understand it, this sentence says that the application should be safe even if it was exposed to the public internet, not that it needs to be exposed.
TOTP apps are certainly better than getting codes via SMS, but they're still susceptible to phishing. The normal attack there is that the attacker (who has already figured out your password) signs into your bank account, gets the MFA prompt, and then sends an SMS to the victim, saying something like "Hello, this is a security check from Your Super Secure Bank. Please respond with the current code from your Authenticator app." Then they get the code and enter it on their side, and are logged into your bank account. Sure, many people will not fall for this, but some people will, and that minority still makes this attack worthwhile.
A hardware security token isn't vulnerable to this sort of attack.
In the case of client attestation, this is how we get "Let Google/Apple/Microsoft handle that, and use what they produce."
And as an end state, it leads to a world where large, for-profit companies provide the only whitelisted solutions, because they have the largest user bases and offer a turn-key feature, and the market doesn't want to do additional custom work to support alternatives.
Or push, or other supply of a code from somewhere. It's just oddly worded, sounding like the code in all 3 cases is coming over the wire.
Granted, phishing is a different story, but in practice I see Yubikeys permanently inserted into their laptop hosts, requiring even less intervention.
And regardless, if you do want a national US ID, you just get a passport, and it'll be accepted as a form of ID everywhere a state-issued driver's license or state ID is accepted. Of course, in this case it's technically voluntary, and many Americans don't travel internationally and don't bother to get a passport.
I'm curious about the DNS encryption recommendation. My impression was that DNSSEC was kind of frowned upon as doing nothing that provides real security, at least according to the folks I try to pay attention to. Are these due to differing perspectives in conflict, or am I missing something?
But I think the point of your parent comment's reply was that the inevitable adoption of this same technology in the consumer-level environment is a bad thing. Among other things, it will allow big tech companies to have a stronger grip on which software/platforms are OK to use or not use.
If your employer forces you to, say, only use a certain version of Windows as your OS in order to do your job, that's generally acceptable to most people.
But if your TV streaming provider tells you that you have to use a certain version of Windows to consume their product, that's not considered acceptable by a good deal of people.
A VPN is another failure layer that when it goes down all of your remote workers are hosed. The productivity losses are immense. I've seen it first-hand. The same for bastion hosts. Some tiny misconfiguration that sneaks in and everybody is fubared.
Bastion hosts and VPNs: we have better ways of protecting our valuables that are also a huge win for worker mobility and security.
It has been a long, slow but steady march in this direction for a while [1]. Eventually we will also bind all network traffic to the individual human(s) responsible. 'Unlicensed' computers will be relics of the past.
100% agreed. My first thought upon seeing the title of the article was "and we trust that you did read it?"
The term "zero trust" certainly has a very dystopian connotation to me. It reminds me of things like 1984.
We're just going to disagree about this.
Banks aren't going to want to implement any changes that cost more (in system changes and customer support) than the fraud they prevent.
TOTP is a great security enhancement, and while phishable, considerably raises the bar for an attacker.
The fact that TOTP is mentioned as a bad practice in this document is an indicator that this should not be considered a general best practices guide. It is a valid best practice guide for a particular use case and particular user base.
(These days I simply use 1Password.)
buganizer.corp.google.com is an alias for uberproxy.l.google.com.
uberproxy.l.google.com has address 142.250.141.129
uberproxy.l.google.com has IPv6 address 2607:f8b0:4023:c0b::81
Google's corp services are publicly accessible in that sense - but you're not getting through the proxy without valid credentials and (in most cases) device identity verification.
> Verifiers SHOULD NOT impose other composition rules (mixtures of different character types, for example) on memorized secrets
Earliest draft in Wayback Machine, dated June 2016. Lots of other good stuff from 800-63 dates back this early too.
https://web.archive.org/web/20160624033024/https://pages.nis...
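Translated into code, that guidance boils down to something like this minimal sketch (the blocklist is a stand-in for a real breached-password corpus):

    # stand-in for a real compromised-password corpus (e.g. a breach dump)
    BREACHED = {"password", "12345678", "qwertyuiop"}

    def acceptable_secret(secret: str) -> bool:
        # SP 800-63B style: enforce length, screen against known-breached
        # values, and impose no composition rules and no scheduled expiry
        if len(secret) < 8:  # the 800-63B minimum for memorized secrets
            return False
        return secret.lower() not in BREACHED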
* You don't have a central location to perform more granular access control. Per-service context aware access restrictions (device state, host location, that sort of thing) need to be punted down to the services rather than being centrally managed.
* Device state validation is either a one-shot event or, again, needs to be incorporated into the services rather than just living in one place.
I love Wireguard and there's a whole bunch of problems it solves, but I really don't see a need for a VPN for access to most corporate resources.
The people over-relying on perimeter security are the folks buying a big sixties car and assuming that seatbelts and traction control are no substitute for chrome bumpers.
They are also already limiting (weakly) the max number of devices that can play back, which requires some level of device identification, just not at the confidence required for authentication.
I'd do:
* SSO integration on all internal apps.
* An authenticating proxy if the org that owned it was sharp and had total institutional buy-in both from developers and from ops.
* A WireGuard VPN otherwise.
Nobody cares. It just gets postponed forever.
I'd be flogging tailscale so hard!
Stupid policies.
Some services already think like that - Discord, I believe.
The former usually means something between nothing at all and “you can do it but you have to write paperwork that no one will actually read in detail, but someone will maybe check the existence of, if you do”.
The latter means “do it and you are noncompliant”.
Worse, supposedly this is for security, but attackers who have pulled off a privilege escalation tend to have enough ways to make sure that none of this detection finds them.
In the end it just makes sure you can't mess with your own credit card 2FA process by not allowing you to control the device you own.
Because there is no official national ID system, you can do virtually everything Federally with a stack of affidavits and pretty thin "evidence" that you are who you claim to be. They strongly prefer that you have something resembling ID but it isn't strictly required. This also creates a national ID bootstrapping problem insofar as millions of Americans don't have proof that they are Americans because there was never a requirement of having documentary evidence. As a consequence, government processes are forgiving of people that have no "real" identification documents because so many people have fallen through the cracks historically.
Of course, this has been widely abused historically, so the US government has relatively sophisticated methods for "duck typing" identities by inference these days.
This can be solved with DANE, which is based on DNSSEC. When properly configured, the sending mailserver will force the use of STARTTLS with a trusted certificate. The STARTTLS+DANE combination has been a mandatory standard for governmental organizations in the Netherlands since 2016.
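Whether a receiving mail host publishes those records is easy to check - a sketch using dnspython, with an illustrative hostname:

    import dns.resolver  # pip install dnspython

    mx_host = "mail.example.nl"  # illustrative; use a real MX host
    try:
        # DANE for SMTP: TLSA records live at _25._tcp.<mx-host>
        for rdata in dns.resolver.resolve(f"_25._tcp.{mx_host}", "TLSA"):
            print(rdata)  # usage/selector/matching-type plus the pinned digest
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{mx_host} publishes no TLSA records - DANE can't be enforced")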
From where I sit right now, I have within arm's reach my MacBook, a Win11 Thinkpad, a half a dozen Raspberry Pis (including a 400), 2 iPhones only one of which is rooted, an iPad (unrooted), a Pinebook, a Pine Phone, and 4 Samsung phones, one with its stock Android 7 EOLed final update and three rooted/jailbroken with various Lineage versions. I have way way more devices running open source OSen than unmolested Apple/Microsoft/Google(+Samsung) provided software.
My unrooted iPhone is the only one of them I trust to have my banking app/creds on.
I’d be a bit pissed if Netflix took my money but didn’t run where I wanted it, but they might be already, I only ever really use it on my AppleTV and my iPad. I expect I’d be able to use it on my MacBook and thinkpad, but could be disappointed, I’d be a bit surprised if it ran on any of my other devices listed…
If you want to get a package that is in the Arch core/ repo, doesn't that require a form of attestation?
I just don’t see a slippery slope towards dropping support for unofficial clients, we’re already at the bottom where they are generally and actively rejected for various reasons.
Still, the Android case is admittedly disturbing, it feels a lot more personal to be forced to use certain OS builds; that goes beyond the scope of how I would define a client.
Ad #3: FIDO is basically unusable for banking. It's designed for user authentication, not transaction signatures which banks need (and must do because of the PSD2 regulation).
If you want to use free software, only connect to Affero GPL services, don't use nonfree services, and don't consume nonfree content.
No, it isn't. It's a way for corporations and governments to restrict what people can do with their devices. That makes sense if you're an employee of the corporation or the government, since organizations can reasonably expect to restrict what their employees can do with devices they use for work, and I would be fine with using a separate device for my work than for my personal computing (in fact that's what I do now). But many scenarios are not like that: for example, me connecting with my bank's website. It's not reasonable or realistic to expect that to be limited to a limited set of pre-approved software.
The correct way to deal with untrusted software on the client is to just...not trust the software on the client. Which means you need to verify the user by some means that does not require trusting the software on the client. That is perfectly in line with the "zero trust" model advocated by this memo.
I think we'd do well to provide the option to use open protocols when possible, to avoid further entrenching the Apple/Google duopoly.
That's fine for employees doing work for their employers. It's not fine for personal computing on personal devices that have to be able to communicate with a wide variety of other computers belonging to a wide variety of others, ranging from organizations like banks to other individuals.
"This project is proof-of-concept and a research platform. It is NOT meant for a daily usage. The cryptography implementations are not resistent against side-channel attacks."
Suppose the course I've been studying for the past three years now uses $VideoService, but $VideoService uses remote attestation and gates the videos behind a retinal scan, ten distinct fingerprints, the last year's GPS history and the entire contents of my hard drive?¹ If I could spoof the traffic to $VideoService, I could get the video anyway, but every request is signed by the secure enclave. (I can't get the video off somebody else, because it uses the webcam to identify when a camera-like object is pointed at the screen. They can't bypass that, because of the remote attestation.)
If I don't have ten fingers, and I'm required to scan ten fingerprints to continue, and I can't send fake data because my computer has betrayed me, what recourse is there?
¹: exaggeration; no real-world company has quite these requirements, to my knowledge
Finally! Maybe the places I've worked will finally listen. But I stopped reading TFA to praise this, so back to TFA.
Which will continue marching forward without pro-user legislation. Which is extraordinarily unlikely to happen, since the government has a vested interest in this development.
When we talk about trust we often mean different things:
* In cryptography and security, by "trust" we mean a party or subsystem whose failure or compromise may cause the system to fail. I need to trust that my local city is not putting lead in the drinking water. If someone could design plumbing that removed lead from water and cost the same to install as regular pipes, then cities should install those pipes to reduce the costs of a trust failure.
* In other settings when we talk about trust we are often talking about trust-worthiness. My local city is trustworthy so I can drink the tap water without fear of lead poisoning.
As a society we should both increase trustworthiness and reduce trust assumptions. Doing both of these will increase societal trust. I trust my city isn't putting lead in the drinking water because they are trustworthy but also because some independent agency tests the drinking water for lead. To build societal trust, verify.
I wonder if that applies to all infrastructure, or just enterprise applications.
Yes, and this unreliable patchwork is already being heavily abused by surveillance companies (e.g. Equifax, Google, LexisNexis, Facebook, Retail Equation, etc.) storing our personal information without our consent - creating permanent records on us that we can only guess the contents and scope of, sorting us into prescriptive classes so that we can be better managed, and completely unaccountable to even their most egregious victims.
Social security numbers were promised to only be used for purposes of administering social security, and yet now they're required by many businesses for keying into that surveillance matrix. The main thing holding back more businesses from asking for identifiers is that people are hesitant to give them out.
Before there is any talk of strengthening identification, we need a US GDPR codifying a basic right to privacy. Until I'm able to fully control the surveillance industry's dossiers on me (inspection, selective deletion, prohibit future collection), I'll oppose anything that would further empower them.
I'd feel 100% differently about this stuff if the NSA or some other cybersecurity gov arm making these rules used their massive cybersecurity budgets to provide free MFA, TLS, encrypted DNS, etc., whether US gov hosted or via non-profit (?) partners like LetsEncrypt.
OSS & free software otherwise has a huge vendor tax to actually get used. As is, this feels like economic insecurity & anti-competition via continued centralization to a small number of megavendors. Rules like this should come with money, and not to primes & incumbents, but utility providers.
Sure, our team is internally investing in building out a lot of this stuff, but we have security devs & experience, while the long tail of software folks use doesn't. The gov sets aside so much $$$$ for the perpetual cyber war going on, but not for simple universal basics here :(
For instance, the zero-trust system we are building at bastion-zero uses ephemeral ECDSA key pairs attested by tokens that expire (see the sketch after this list):
+ When the user logs out, these key pairs and tokens are deleted. Ideally the tokens should be revoked as well. If an attacker installs an implant on the user's endhost while the user is not logged in, the attacker doesn't get any access because there are no keys/tokens to steal. If the implant/attack is discovered prior to the user logging in, the device can be reset and the attacker doesn't get any access.
+ If the attacker installs an implant on the user's endhost while the user is logged in, the attacker gets the key pair and tokens. If the attacker attempts to exfil the key pair/tokens and use them from another host, this may set off alarms. An attacker who wishes to be stealthy and maintain access must conduct the attack through that endhost (at least until they compromise additional systems). Once the tokens expire, the attacker is locked out again.
+ If the attacker manages to watch the user log in and generate the attestation to the key pair, and good MFA is employed, e.g. U2F/FIDO, the attacker cannot keep getting new key pairs, since they cannot read the secret from the MFA device.
+ As wmf suggests, monitoring helps a lot. Monitoring is extra powerful when you can easily revoke the user's key pair without revoking the user. Say a user triggers an alarm: automatically revoke the key pair and see if they can reauth. If it is a stolen key pair, the attacker might not be able to get a new key pair issued if the actual user is offline. If you decide the device might be compromised, you can disable access from that device and have the user pick up a new laptop.
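As promised above, a minimal sketch of the ephemeral-key idea, using the pyca/cryptography library (the TTL and the shape of the code are illustrative, not bastion-zero's actual protocol):

    from datetime import datetime, timedelta, timezone

    from cryptography.hazmat.primitives.asymmetric import ec  # pip install cryptography

    KEY_TTL = timedelta(hours=8)  # made-up lifetime; a short expiry is the point

    def issue_session_key():
        # Mint an in-memory ECDSA key pair plus an expiry timestamp. The
        # private key never touches disk; on logout the process simply drops
        # its reference, so an implant arriving later finds nothing to steal.
        private_key = ec.generate_private_key(ec.SECP256R1())
        return private_key, datetime.now(timezone.utc) + KEY_TTL

    key, expires_at = issue_session_key()
    assert datetime.now(timezone.utc) < expires_at  # servers refuse the pair after expiry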
"Log in by tapping Yes on my phone"
and
actually using a FIDO2 USB key.
By the time we implement any of these things, if ever, they certainly won't be. I work on military networks and applications, and it's hard for me to believe that I'll see any of this within my career at the pace we move. This is the land of web applications that only work with Internet Explorer, ActiveX, Silverlight, Flash, and Java applets, plus servers running Linux 2.6 or Windows Server 2012.
The idea of "Just-in-Time" access control where "a user is granted access to a resource only while she needs it, and that access is revoked when she is done" is terrifying when it takes weeks or months to get action on support tickets that I submit (where the action is simple, and I tee it up with a detailed description of whatever I need done).
So DNSSEC answers the question: can I trust that this IP is valid for the name news.ycombinator.com?
DNS over TLS/HTTPS just says: nobody but the DNS server I use can see that I want news.ycombinator.com's IP. It's mostly useless at the moment, since other gaps leak essentially the same information (SNI, etc.), but it should get more useful over time, as people are working on fixing those gaps.
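The encrypted lookup itself is trivial these days - e.g. against Cloudflare's public DoH JSON endpoint (other resolvers offer similar interfaces):

    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "news.ycombinator.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])  # the query itself travelled inside TLS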
there are specific contexts where you want to distribute information as widely as possible, and in those contexts it makes sense to allow any software versions to access the information. but for contexts where security is important, that means verifying the client software isn't compromised.
https://reproducible-builds.org/
Agreed that people should have the freedom to modify their software though.
The essential purpose of my comment was only to correct my parent on the date.
A remote system asked to promise it's what it says it is: the illusion of security.
Jailbreaking, DRM, etc are all evidence of this illusion.
To be clear, I don’t have a better solution. But all the second factor stuff is fundamentally broken exactly when you are likely to need access to the service most.
The issue with TOTP is that it’s usually not rate limited. Just sit there guessing codes and you’ll eventually get in.
When you use WebAuthn to sign into a site, the browser takes responsibility for determining which site you're on, cutting out the whole phishing problem of "humans don't know which site it is". The browser isn't reading the GIF that says "Real Bank Secure Login" at the top of the page, or the title "Real Bank - Authenticate", or the part of the URL bar that says "/cgi-bin/login/secure/realbank/" - it is looking only at the hostname it just verified for TLS, which says fakebank.example.
So the browser tells your FIDO authenticator OK, we're signing in to fakebank.example - and that's never going to successfully steal your Real Bank credentials because the correct name is cryptographically necessary for the credentials to work. This is so effective crooks aren't likely to even bother attacking it.
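Server-side, the origin check is mechanical - a sketch of just that slice (real deployments use a library like python-fido2, and also verify the authenticator's signature over these same bytes):

    import base64
    import json

    EXPECTED_ORIGIN = "https://realbank.example"

    def origin_ok(client_data_json_b64: str) -> bool:
        # The browser - not the page - writes clientDataJSON, and it is part
        # of the byte string the authenticator signs, so a proxy sitting at
        # fakebank.example can't swap the origin without breaking the signature.
        client_data = json.loads(base64.urlsafe_b64decode(client_data_json_b64))
        return client_data.get("origin") == EXPECTED_ORIGIN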
Ordinary users think the fact your phishing site accepted their TOTP code is actually reassuring. After all, if you were a fake site, how would you have known that was the correct TOTP code? So this must be the real site.
The only benefit TOTP has over passwords is that an attacker needs to use it immediately, but they can fully automate that process, so this only very slightly raises the barrier to entry - a smart but bored teenager can definitely do it, or just anybody who can Google for the tools.
Worse, TOTP involves a shared secret, so bad guys can steal it without you knowing. They probably won't steal it from your bank because the bank has at least some attempt at security, but a lot of other businesses you deal with aren't making much effort, and so your TOTP secret (not just the temporal codes) can be stolen, whereupon all users of that site relying on TOTP are 100% screwed.
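To see why the shared secret matters so much: a TOTP code is just an HMAC over the current half-minute (RFC 6238), so whoever holds the secret - you, the server, or a thief - computes identical codes:

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
        counter = int(time.time()) // step  # which 30-second window we're in
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)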
Notice that WebAuthn still isn't damaged if you steal the auth data, Google could literally publish the WebAuthn authentication details (public key and identifier) for their employees on a site or paint them on a huge mural or something and not even make a material difference to their security - which is why this Memo says to do WebAuthn.
The point of these restrictions is to ensure that your device isn't unusually vulnerable to privilege escalation in the first place. If you let them, some users will root their phone, disable all protections, install a malware-filled Fortnite APK from a random website, then stick their credit card company with the bill for fraud when their user-mangled system fails to secure their secrets.
You want to mod the shit out of your Android phone? Go ahead. Just don't expect other companies to deal with your shit, they're not obligated to deal with whatever insecure garbage you turn your phone into.
Everything you said couldn't be further from the truth.
Luckily, it's a dwindling power and Europe fights and penalizes large organizations breaching market "morals".
On the plus side, it's good that they finally figured out that forcing frequent password changes and forcing the usage of special characters are anti-patterns. I've been repeating this for over a decade.
Deprecating passwords is the wrong conclusion. A better solution would be to educate people about good password creation and handling practices. A 1-page document and/or short video would do.
I'd also hope that businesses care about more than 80% of attacks, preferably they should care about 100% of attacks. Hence, pre-approved software restrictions.
And this has happened before, with Intel ME that was and still is useful if you have a fleet of servers to manage but a hell of a security hole outside of corporate world.
And now that Windows 11 all but requires a working TPM to install (although there are ways to bypass it for now), I would not be surprised if Netflix and the rest of the content MAFIAA followed their Android approach and demanded that the user have Secure Boot enabled, only Microsoft-certified kernel drivers loaded, and the decryption running in an OS-secured sandbox that even a Local Administrator-level account cannot access.
I don't want the consular officials to be unable to authenticate me in a foreign country because I lost my phone, or for my bank to be unable to release funds because I don't have their card or my Security Key, but I feel 100% OK with losing access to Gmail or Hacker News, or whatever for say a few days until I can secure replacement credentials.
ID.me supports WebAuthn (or maybe U2F? In this context it doesn't matter) but importantly it does identity verification so it can determine whether I am a US citizen, whether I'm a tax payer, and if so which one.
Now, perhaps the US Federal Government should own the capability to do that instead of a private company. But, so far as I can tell, they do not and login.gov is not such a thing.
But attestation can mean a lot of things and isn't inherently in conflict with free software. For example, at my company we validate that laptops follow our corporate policy, which includes a default-deny app installation policy. Free software would only, in theory, need a digital signature so that we could add that to our allowlist.
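Mechanically, that kind of allowlisting can be as simple as hashing the binary and comparing - a sketch, with a made-up approved digest:

    import hashlib

    # sha256 digests of builds the (hypothetical) policy has approved
    ALLOWLIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

    def approved(path: str) -> bool:
        # default-deny: a binary runs only if its digest is on the allowlist
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in ALLOWLIST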
Just one example where I work is a prohibition against emailing certain types of documents or data to others in the company (mostly Word & Excel docs). That seems reasonable, but the accepted solution is to use the built-in encryption of MS Office to secure the file with a password, email the file, and then send the password in another email. Honestly, that's supposed to be the protocol. The policy also hasn't been amended in any way to account for implementing Google Docs & Sheets, which can be accessed with the same credentials used for email, or opened on any unattended employee's machine if they left a Gmail tab open (along with anything else in their Google Drive). And regardless of any of these rules, almost no one follows them. I do - I have to, I'm a data custodian so I can't violate the rules - but it annoys people.
But I do not see any such engagement from banks.
Transaction signatures are good if well implemented, but I'm not seeing a lot of good implementations. To be effective the user needs to understand what's going on so that they're appropriately suspicious when approached by crooks.
e.g. if I just know I had to enter 58430012 to send my niece $12, I don't end up learning why, and when crooks persuade me to enter 58436500, I won't spot that this is actually authorising a $6500 transfer and that I should be alarmed.
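For contrast, a dynamically linked code binds the transaction details into the digits themselves - a rough sketch, not any specific bank's scheme:

    import hashlib
    import hmac

    def signing_code(secret: bytes, payee_iban: str, amount_cents: int) -> str:
        # derive the 8 digits from payee and amount, so a code authorising
        # $12 to my niece can never authorise $6500 to a crook
        msg = f"{payee_iban}:{amount_cents}".encode()
        digest = hmac.new(secret, msg, hashlib.sha256).digest()
        return str(int.from_bytes(digest[:8], "big") % 10 ** 8).zfill(8)

Even then, the comprehension problem above remains: unless the device also shows me the payee and amount next to the code, I still don't know what I'm authorising.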
It's just subtle enough (e.g. lower definition but will still play) and most people use "secure" enough setups that only techies, media gurus, or that one guy who's still using a VGA monitor connection end up noticing
Pre-Zero-Trust days seemed safer. Copying production data to a laptop wasn't allowed. Instead, each SRE had their own Linux VM in the data center, accessible from home and able to run the scripts (with connectivity to the enterprise application). This prevented a whole class of realistic attacks in which a laptop (while unlocked/decrypted) is taken by an adversary. Admittedly, in return, we're protected from a possible, but less likely, attack in which a Linux VM is compromised and used for lateral movement within one segment of the enterprise network. (An enrolled device has to be in the user's possession; it can't be any machine, Linux or Windows, in the data center or office.)
The only people who love this are our enterprise application vendors. Our bosses are paying them a TON more money to implement new requirements where, in theory, all possible types of data analysis can be done directly within the enterprise application. No more scripts, no more copying of data. No more use of Open Source. And, of course, people from these same enterprise application vendors advise the government that Zero Trust must be a top priority mandate.
(Perhaps I misunderstood and you were being sarcastic?)
The computers in any sizable business already have the pre-approved restrictions set on the OS level. Employers can’t just install any software.
Valve has taken a less heavy-handed approach and let users have more freedom over their client and UI, but they also have a massive bot problem in titles like TF2.
I can’t connect to my work network from a random client, and it will throw flags and eventually block me if I connect with an out-of-date OS version.
I can’t present any piece of paper with my banking data and a signature on it and expect other parties to accept it. I have to present it on an authorized document.
I guess money may be the common denominator here.
https://www.theverge.com/2022/1/26/22903437/id-me-facial-rec...
People extend your exact trust assertions to their networks, and bad actors exploit it to effect a compromise. A corporate network cannot be like your home. Zero Trust says that you should assume anything, and anyone, can be exploited - so secure appropriately.
Per your analogy, what would you do if your invited houseguests, unbeknownst even to themselves, wore a camera for reconnaissance by a 3rd party? What would you do if these cameras were so easy to hide that anyone, at any time, might be wearing one and you couldn't know?
You would have to assume that anyone that entered your home had a camera on them. You would give them no more access than the bare minimum needed to do whatever they were there to do (whether eat dinner or fix your sink). You'd identify them, track their movement, and keep records.
Your term, "Zero misplaced trust," assumes that you can identify where to place trust. Did you trust that system you had validated and scanned for 5 years...until Log4shell was discovered? Did you trust the 20-year veteran researcher before they plugged in a USB without knowing their kid borrowed it and infected it?
Zero Trust is a response to the failure of "trust but verify."
I worked at a place that only allowed "verified" software before and it's an ongoing battle to keep that list updated. Things like digital signatures can be pretty reliable but if you're version pinning you can make it extremely difficult to quickly adopt patched versions when a vulnerability comes out.
If the client device were compromised with a zero day exploit, the blast radius would be substantially smaller, the difficulty of an attacker mapping a network for later exploit would be exponentially larger, and time to response would dramatically shrink.
[1] (This is particularly relevant for fixed-function IoT and Operational Technology devices. General computing devices need broader controls, but again - the minimum necessary for that user, in that business context, to do their job.)
My vanilla LineageOS install fails, but I can root with Magisk, enable Zygisk to inject code into Android, edit build properties, add a SafetyNet fix, and now my device is good to go?
It's crazy to think the workaround is "enable arbitrary code injection" (Zygisk)
You just described the usage pattern of a pilot with a family, a truck driver, a seaman, etc.
It’s only unusual if your definition of usual is “relatively rich, computer power user”.
> * An authenticating proxy
I'm having trouble understanding what the fundamental difference is between these. Is it just a matter of a single, centralized proxy at the perimeter of your service network versus in-service SSO? Is there a functional difference between being in the same process space versus a sidecar on the same host versus a service on another host?
Ultimately it boils down to trusting the authority, whether that's a function (code review), a shared library (BOM), an RPC (TLS), a sidecar (kernel), or a foreign service (mTLS). There are different strengths and weaknesses for each of these, but it's not clear to me that the options you would prefer are distinctly more or less secure -- maybe there is an argument for defense in depth, but I'm not certain that's what you're pitching.
Login.gov also supports WebAuthn.
I travelled a lot for work, and never had issues with account access. Nor did my wife ever have issues related to accounts. We don't share Google accounts though. It sounds like that user has personal accounts being used by three people for business use... Which isn't "A seaman and his family".
Google's login protection mechanisms seem to be satisfied by TOTP usage, and you won't be locked out anymore (or at least much less likely to be).
I too have no faith of seeing this stuff implemented anytime soon...
[1] (Authority to Operate, basically approval from the highest IT authorities to utilize something on a DoD network)
This depends on how far down the rabbit hole you want to go. If it were Secure Boot, where only signed processes can run, would that make you feel better? If it doesn't... what would?
Or a more recent example: my father forgot to bring his Android phone when he went back abroad, which subsequently locked him out of his account/services; I had to wipe it for him to get his access back.
https://developer.android.com/training/safetynet/attestation
If FedRAMP qualification is tied to IPv6 support, you'll see every major contractor and cloud provider support it promptly.
If you look at the recent updates for cloud providers - AWS and GCP support for IPv6, Kubernetes going dual stack by default - you can see that this memo had a substantial impact.
Sometimes these things take time, but in this case, the recent memo you link to lit a fire under everyone.
edit: The source of my claim that governments tend to extend surveillance is pretty well documented I believe. So much so that I believe it is worthy to insert the problem into debates about anything relating to security. Because security often serves as the raison d'être for such ambitions.
But I’m afraid the basic prerequisite of secure transaction signing (“what you see is what you sign”) cannot be fulfilled on a generic “FIDO2 authenticator” – you need the authenticator to have a display. Sure, Windows Hello / Android FIDO / … might support this, but your common hardware Yubikey cannot.
I don’t know to which authentication method used by which bank in which country you refer in your “58430012” example, but this is definitely nothing which could be used as a method of transaction signatures in banks here, and it does not fulfill the requirements of the PSD2 regulation.
Yes. Everyone having their own distinct accounts is a property of high computer literacy in the family.
Many of my older extended family members have a single email account shared by a husband and wife. Or in one case the way to email my aunt is to send an email to an account operated by a daughter in a different town. Aunt and daughter are both signed in so the daughter can help with attachments or “emails that go missing”, etc.
> Which isn't "A seaman and his family".
The seaman in this scenario has a smartphone with the email signed in. It’s also signed in on the family computer at home. Both the wife and him send email from it. Maybe a kid does too, from a tablet. This isn’t that difficult.
In practice, geoip "regions" like the ones used for this _are_ on the larger scale, yes; however, that still lets you ask valuable questions like "why is this user who logs in from Vermont, USA suddenly in Hungary?" and potentially do something proactive like limiting that session's resource access until a new MFA challenge has been passed, or (more aggressively) destroying the session or otherwise forcing a full reauth.
The downside is that this almost always relies on some actively maintained geoip database (a la maxmind), which... well, it isn't exactly cheap, and it isn't exactly perfect, either (see: maxmind historically putting IPs in a central location when lacking specific data).
Ultimately it's one part of (what should be) a suite of checks for anomalous behavior, not something to blindly implement. The latter can cause a great deal of grief if your tolerances are too tight or your proactive actions aren't in line with the activity/abuse they're intended to mitigate.
A pretty direct example of proactive action would be restricting access to using saved payment methods on a platform until the user has completed a new 2FA challenge.
You could require this challenge every time the user wants to buy something, yes, but as that will probably impact checkout rates, you could get many (if not most) of the same fraud prevention benefits by only challenging when the session has moved some threshold distance - it won't stop them from buying something at home or a coffee shop in town, but it would stop a session hijacker on an opposing coast or in another country from doing so.
https://stats.dnssec-tools.org/images/domains.svg
It's deployed on many more domains than MTA-STS.
The practices which they did come up with have been terrible, even harmful (e.g. changing passwords often and using special symbols).
Of course if you teach the wrong thing, you will get the wrong results.
As for lost luggage, I carry mine on my keychain, another one in the laptop itself (USB-C Yubikey) and one in my safe at home - if all three are ever destroyed or lost I also have backup codes available as password protected notes on several devices.
You need a bank account to do basically anything, and yet consumer banking is largely unregulated (in the consumer relations sense; they are regulated on the economic side, of course). Payments take upwards of 24h and only happen during work hours (?!?), there are no "easy switch" requirements, mobile apps use shit like SafetyNet, and I've had banks legit tell me "just buy a phone from this list of manufacturers"... PSD2 is trash that only covers B2B interoperability and mandates a security method that has been known to be broken since its invention (SMS 2FA).
> they're not obligated to deal with whatever insecure garbage you turn your phone into
Banks probably should be obligated to let you connect over standard protocols.
If you don't own your own device and rely on third-party devices to access the service, good luck to you...
As usual with these "persona" scenarios, people create their own unrealistic scenario (just like when talking about UX or design). The personas you are describing will probably fall back to low-tech methods in most cases; they won't fail to take a plane because Gmail locked them out due to unusual activity while they were trying to show the ticket QR at the airport. They will just print it (or have someone print it for them) beforehand.
> The seaman in this scenario has a smartphone with the email signed in. It’s also signed in on the family computer at home. Both the wife and him send email from it. Maybe a kid does too, from a tablet. This isn’t that difficult.
You just forgot to add that they use their shared email to communicate with each other via the "Sent" folder. To be more realistic, the seaman, right after buying his Android phone, will create a new Google account without realizing it, because he probably doesn't know that he could use the email account he is already using at home. But enough with made-up examples to prove our own points.
Depends what you think big corporations' centrally managed IT equipment is like.
Theoretically, it could mean you get precisely the right tools to do your job, with the ideal maintenance and configuration provided effortlessly.
But for some organisations, it means mandatory Internet Explorer and Flash for compatibility with some decrepit intranet, crapware like McAfee that slow the system to a crawl, baffling policies like not letting you use an adblocker, and regular slow-to-install but unavoidable updates that always happen just as you're giving that big presentation.
> They will just print it (or have someone print it for them) beforehand.
Yes, they will do that precisely because they do not trust technology to work for them because it frequently does not! I have family members like this. I log in to their accounts on my devices for various reasons. Even worse, I run Linux. We run in to these problems frequently. Spend time helping technically illiterate people with things. While doing so, make a concerted effort to understand why they say or think some of the things that they do.
Edit to add, I find it amusing that you make fun of his seaman example. Almost that exact scenario (in terms of number of devices, shared devices, and locations) is currently the case for two of my relatives. Two! And yet you ridicule it.
Azure is so far behind on this it's silly.
I mean, sure, my bank in Norway has my account tied to a person number, but they don't actually know that when I log in with bankid that I really am the person associated with that person number. --Theoretically the post office was supposed to verify my identity before they gave me the packet containing the code brick, but they forgot to do so - this was over 10 years ago before they had to register the ID details.
So basically I have a highly trusted way of authenticating to financial and government services in Norway even though nobody actually knows that I am who I claimed to be when I opened the bank account, setup bankid, etc.
A bunch of situations aren't going to end up with a separate physical authenticator anyway, they'll do WebAuthn, which in principle could be a Yubico Security Key or any of a dozen competitor products - but actually it's the contractor's iPhone, which can do the exact same trick. Or maybe it's a Pixel, or whatever the high-end Samsung phone is today.
That's what standardisation gets us. If CoolPhone Co. build a phone that actually uses a retina scan to unlock, they can do WebAuthn and deliver that security to your systems without you even touching your software. And yes, in the Hollywood movie version the tricky part is the synthetic eyeball so as to trick the retina scanner, but in the real world the problem is after you steal the ambassador's CoolPhone she can't play Wordle and she reports the problem to IT before you can conduct your break-in, synthetic eyeball or not.
I have three bank accounts here:
One of them (my good bank) issues a chiclet-keypad physical authenticator: you key codes into it manually and it gives back a value proving I used the authenticator.
The large European bank that handles my salary and so on relies entirely on SMS: I ask to perform a transaction, they send an SMS with a code, and I type it into a box on the web site. The SMS tries to tell me what that transaction is, and has improved (it used to say things like GBP20000, which, yes, everybody on Hacker News knows what that means, but I bet my grandmother wouldn't; today it says £20 000, which is easier to understand). But notice that the code you get isn't tied to the transaction details; it's just an arbitrary code. So I needn't understand the transaction to copy-paste the code.
The third bank is owned by the British government and so is inherently safe, with unlimited funds unlike a commercial bank (they can and do print money to fund withdrawals; they're the government), but they too use SMS, and their SMS messages are... not good. Of course, unlike a commercial bank, if they get fined for not obeying security rules that's the government fining the government, so who cares?
FIDO would be obviously better than the latter two, and I don't see any reason that (with some effort) it couldn't improve on the first one as well.
Currently, you'd have to find an unlockable phone, hope there is a downloadable factory image, re-flash, re-lock, and re-install to run whatever needs attestation. With something like Android's DSU feature, this could all be a click or two, and you could be back running Lineage after a restart.
Encrypted messaging has been a complete failure; there is no need to single out email. I suspect the reason is more or less the same in all cases: users have not been provided with a conceptual framework that would allow them to use the tools in a reasonable way. If the US federal government can come up with, and promote, such a framework, the world would become a different place.
BTW, the linked article is mostly based on misconceptions:
Suppose someone puts up a fake domain that proxies everything between you and the server (a fake domain with valid HTTPS, which they social-engineered you into visiting).
FIDO2 signs the challenge response together with the origin the browser actually sees (here, the phishing domain), so just passing the response along to the original server will fail verification. The attacker also can't re-sign the challenge response after you, because producing a valid signature requires the private key that never leaves the registered user's authenticator.
This leaves only 2 options for a phishing attack: 1) get a valid certificate for the original domain [1], or 2) force-downgrade the user to old TOTP [2].
E.g. with a credit card.
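To make that origin binding concrete, here's a sketch of the relying-party-side check; a real WebAuthn verifier (a library, usually) also checks the signature, counters, and more, and realbank.example is a stand-in:

```typescript
// The browser embeds the origin it actually saw into clientDataJSON,
// and the authenticator's signature covers that JSON. So a response
// minted on a phishing domain can't be forwarded to the real one.
interface ClientData {
  type: string;      // "webauthn.get" for an assertion
  challenge: string; // base64url of the server's challenge
  origin: string;    // the origin the browser saw
}

function checkClientData(clientDataJSON: string, expectedChallenge: string): void {
  const data = JSON.parse(clientDataJSON) as ClientData;
  if (data.origin !== "https://realbank.example") {
    throw new Error(`assertion was made for ${data.origin}, not for us`);
  }
  if (data.challenge !== expectedChallenge) {
    throw new Error("stale or replayed challenge"); // challenges are single-use
  }
  if (data.type !== "webauthn.get") {
    throw new Error("wrong ceremony type");
  }
}
```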
Due to the way it integrates into websites (or more specifically, doesn't), classical approaches like SMS 2FA (insecure anyway), but also TOTP or FIDO2, do not work.
Instead, a notification is sent to a preconfigured app, where you then confirm it.
Furthermore, as the app and the payment might be on the same device, the app uses the fingerprint reader (probably via some Google TPM/secrets API, I don't know).
Theoretically other approaches should work, but in practice they tend to work unreliably, or not at all, in most situations.
Technically, web-based solutions could be possible by combining a FIDO stick with browser-based push notifications; in practice the banks don't bother, or there are legal annoyances.
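A rough sketch of that push-confirm flow, with every name and helper invented for illustration (real bank APIs and push gateways differ):

```typescript
// Hypothetical push-confirm flow: the server parks the transaction and
// notifies the user's preconfigured app; the app shows the details and
// gates confirmation behind the device's biometric unlock.
interface PendingTransaction {
  id: string;
  amount: string;     // shown verbatim to the user, e.g. "£20 000"
  recipient: string;
}

// Stubs standing in for real infrastructure (persistence, push, biometrics).
declare function savePending(tx: PendingTransaction): Promise<void>;
declare function sendPushNotification(tx: PendingTransaction): Promise<void>;
declare function promptBiometric(reason: string): Promise<boolean>;
declare function postConfirmation(txId: string): Promise<void>;

// Server side: park the transaction, then ping the app.
async function startTransaction(tx: PendingTransaction): Promise<void> {
  await savePending(tx);
  await sendPushNotification(tx);
}

// App side: confirmation happens out-of-band from the website, which is
// why classical TOTP/FIDO2 web flows don't slot in here.
async function confirmTransaction(tx: PendingTransaction): Promise<void> {
  const ok = await promptBiometric(`Pay ${tx.amount} to ${tx.recipient}?`);
  if (ok) await postConfirmation(tx.id);
}
```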
Can you elaborate on why you see it this way? WhatsApp has been wildly successful, my very non-technical in-laws use Signal for their family's conversations, and other messaging platforms are jumping on the bandwagon.
As far as I can tell, if we lose encrypted messaging at this point, it will be due to government action or corporate rug-pulling, not because it failed to catch on. Whereas encrypted email really hasn't caught on anywhere.
I think we'd do well to provide the option to use open protocols when possible.
Of course, the PR copy just writes itself, doesn't it? AD administrators, Apple and Google, banks, and everyone else can benefit from context-aware authorization. If your phone is stolen or its state is "compromised", you want immediate peace of mind.
Even if it's just misplaced, having that kind of flexibility is just great.
This is already the case with Netflix -- 4k video content cannot be played on Linux.
Much appreciate the suggestion!
You only get effective end to end encryption if you can verify that you are talking to who you think you are talking to. Otherwise the people that are running the system can cause your messages to take an unencrypted detour and thus be able to read them. This is often called a man in the middle attack. Verifying identities normally means checking some sort of long identity number. Very few people know how to do that in an effective way.
For example: in a usability study involving Signal[1], 21 out of 28 computer science students failed to establish and maintain a secure end to end encrypted connection. The usability of end to end encrypted messaging is a serious issue. We should not kid ourselves into thinking it is a solved issue.
PGP in a sense is actually better here in that it forces the user to comprehend the existence of a key in a way where it is intuitively obvious that it is important to know where that key came from.
[1] https://www.ndss-symposium.org/wp-content/uploads/2018/03/09...
1) The requirements themselves. These are different for consumer vs. employee scenarios. So in general, I'd prefer we err on the side of DRM-free for things like media, but there are legitimate concerns around things like data privacy when you are an employee of an organization handling sensitive data.
2) Presuming there are legitimate reasons to have strong validation of the user and untampered software, we have the choice of A) using only organization-supplied hardware in those cases, or B) using your own with some kind of restriction. I'd much prefer to use my own as much as possible... if I can be assured that it won't spy on me, or limit what I can do, beyond the organization-specific purposes I've explicitly opted in to enable.
> I'm uncomfortable letting organisations have control over the software that runs on my hardware.
I'm not, if we can sandbox. I'm fine with organizations running JavaScript in my browser, for instance. Or running mobile apps that can access certain data with explicit permissions (like granting access to my photos so that I can share them in-app). I think we can do better with more granular permissions, better UX, and cryptographic guarantees to both the user and the organization that the computation and data are operating at the agreed level.
It fails to do so in many ways, including not blocking old, no-longer-maintained, known-to-be-vulnerable Android releases.
It also has little to do with modding and more to do with having a properly working free market which allows alternatives besides Google and Apple.
Also, this is about the second factor in 2FA, not online banking itself.
Which you can do on a completely messed-up computer.
I'm also not asking to be able to pay contactless with a de-Googled Android phone.
Similarly, I'm not asking to go without 2FA; you can use something like a FIDO stick with your phone.
Most of these "security" features are often about banks pretending to have proper 2FA without a second device... (and then applying them to other apps they produce, too).
The various auth apps are problematic because they usually come with some kind of requirement for Intune or similar to do remote attestation. That's a weird place for the government to be with contractors, since a lot of those contracts don't have language requiring that contractors have a phone at all, much less that they allow the federal government to MDM it.
It could be providers other than yubico, but it won't be.
Any system can have malware. That's not the point. To repeat my point again: client restrictions are about making sure user devices are not unusually vulnerable to malware. For example, any Windows device may be infected with malware, but if you're still running Windows XP you're vulnerable to a much larger variety of known malware and more severe exploits. Hence why businesses will want to support only modern versions of e.g. Chrome, which itself requires modern versions of operating systems.
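As a toy illustration of that gating idea (the regex and version floor are mine; real deployments use maintained client-detection libraries, not this):

```typescript
// Gate on a minimum browser major version parsed from the User-Agent,
// in the spirit of "block unusually vulnerable clients".
const MIN_CHROME_MAJOR = 100; // illustrative floor, not a real policy

function isSupportedClient(userAgent: string): boolean {
  const match = /Chrome\/(\d+)/.exec(userAgent);
  if (!match) return false; // unknown client: fail closed
  return Number(match[1]) >= MIN_CHROME_MAJOR;
}
```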
Even giant orgs like Google who should be good at this will fail at this. I've had services with their Cloud Armor set to disallow connectivity from non-US connections, and yet connections in the US get flagged as non-US even when a traceroute shows no hops going overseas.
Android will block non-Play-Store app installations by default, and root is required for lower level access/capabilities that can bypass the normal sandbox.
I'm honestly not sure what you're saying about 2FA in the rest of your comment, it's kind of vague and there are some possible typos/grammar issues that confuse me. What exactly are you referring to when you say "pretending to have proper 2FA"?
Maybe because the new login is from a hacker. Maybe because your geoip database provider is unreliable. Either one is likely. There's no sure way to go from an IP address to a location.
Presumably (hopefully) these are corporate-owned devices, with a policy like that. Remote attestation is fine if it's controlled by the device's owner, and you can certainly run free software on such a device, if that particular build of the software has been "blessed" by the corporation. However, the user doesn't get the freedoms which are supposed to come with free software; in particular, they can't build and run a modified version without first obtaining someone else's approval. At the very least it suggests a certain lack of respect for your employees to lock down the tools they are required to use for their job to this extent.
The Signal study showed that the majority of people were unable to understand Signal's security features, but not that the security model is broken. The question at hand isn't how many people are using it wrong but how many people are using it right that never could have managed to do so with PGP keys. If even 10% of Signal's users successfully maintain a secure channel, you're looking at around 5 million people, most of whom probably would not have been able to set up secure messaging without Signal.
Do we still have work to do? Of course! But that doesn't mean that we've failed in our efforts so far.
I'm not asking to use a 10-year-old version of Android that no modern browser supports any more and that is missing many security features.
I wish I had made it more clear in my original post that Impossible Traveler checks are not a magic bullet, as most are assuming that this would be used all on its own for whether to bar access.
[1] https://www.usenix.org/legacy/events/sec99/full_papers/whitt...
[1] https://people.eecs.berkeley.edu/~tygar/papers/Why_Johnny_Ca...
No, you basically have to click OK once (or change a setting, depending on the phone). Either way it doesn't require root, and it doesn't really change the attack scenario, as it's based on someone intentionally installing an app from an arbitrary untrusted source.
> root is required
Yeah, like privilege-escalation attacks, as you will likely find in many compromised apps. And which work on many Android phones because vendors stop providing updates after some time. And many other reasons.
> What exactly are you referring to when you say "pretending to have proper 2FA"?
EU law says they need to provide 2FA for online banking.
Banks often don't do that for banking apps, as it's inconvenient. Instead they "split the banking app into two parts", maybe throw in some fingerprint-based auth mechanism, and claim they have proper 2FA (because it's two app processes running and it requires the fingerprint). Yet security researchers have repeatedly shown that it's not a good idea.
Additionally, they then require you to use only your fingerprint, not an additional password...
Either way, the point is that secure online banking doesn't require locked-down devices in general.
Checking for a too-old and vulnerable OS version is where you start.
And then you can consider maybe also blocking other stuff.
There is nothing inherently less secure about a rooted device.
Sure, you can make it less secure if you install bad software, but you can also make it more secure.
Or you might just need to lower the minimum screen brightness for accessibility reasons.
You're claiming it's OK to take away people's agency to decide over a major part of their lives (which, sadly, phones are today) because maybe they could act irresponsibly and do something stupid.
But if we say that is OK, then we first need to ban cars, because you could drive into a wall with one; and knives; and there's no way you can have a bathtub you could drown yourself in.
And yes, that is sarcastic, but there is a big difference between something being "inherently insecure" (driving without a seatbelt) and something that by default is in no way less secure as long as you don't actively go out of your way to make it less secure (e.g. by disabling security protections).
And for many of the SaaS products that we use, TOTP doesn't help you avoid the security lockouts.
I think in 2019 I was able to use IPv6 VNETs.
I hold the reverse view. The only security token I'd trust is one where the only thing that isn't open is the private keys the device generates when you press the reset button. The rest, meaning everything from the CPU up (say, RISC-V) and the firmware, must be open to inspection by anybody. In fact, it should also be easy to peel away the silicon protection so you can see everything bar the cells storing the private keys. The other non-negotiable is that the thing that computes and transmits the "measures" of the system being attested (including its own firmware) cannot be changed, meaning no stinking "security" patches are allowed at that level. If it's found broken, throw it away, as the attestation is useless.
The attestation then becomes: the device you hold is a faithful rendering/compilation of open-source design document X by open-source compiler Y. And I can prove that myself, by building X using Y and verifying the end result looks like the device I hold. This process is also known as a reproducible build.
What we have now (e.g. YubiKeys) is not that. Therefore I have to trust Yubi Corp. To see why that's a problem, see the title of this story. It has the words "Zero-Trust" in it.
In reality of course there is no such thing as "Zero-Trust". I will never be able to verify everything myself, ergo I have to trust something. The point is there is a world of difference between trusting an opaque black box like Yubi Corp, and trusting an open source reproducible build, where a cast of random thousands can crawl over it and say, "it seems OK to me". In reality it's not the ones that say "it seems OK" you are trusting. You are trusting the mass media (places like this in other words), to pick up and amplify the one voice among millions that says "I've found a bug - and because it's open I can prove it" so everyone hears it.
So to me it looks to be the reverse of what you say. Remote attestation won't kill software freedom. Remote attestation, done in a way that we can trust, must be built using open source. Anything less simply won’t work.
No one wants to reproduce an attestation. If you could, it could be copied, and if you can copy an attestation, any hardware could send it to prove it was something else, something the other end trusts, rendering it useless for its intended purpose.
However, the attestation is attesting that the hardware you are running on is indeed "reproduced", as in it is a faithful copy of something the other end trusts. It could be a device from Yubico, in which case you are in effect trusting Yubi Corp's word on the matter. Or it could be an open-source design everybody can inspect, reproducibly rendered in hardware and firmware. Personally, I think trusting the former is madness, as is trusting the latter without a reproducible build.
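For the firmware half of that story, the verification step is mundane; a minimal sketch, assuming the vendor publishes its image and your own build of design X with compiler Y sits next to it (paths are illustrative):

```typescript
// Rebuild design X with compiler Y, hash the artifact, and compare it
// with the firmware image the vendor published. Bit-for-bit equality is
// what "reproducible build" means in practice.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

function sha256Hex(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

const mine = sha256Hex("./out/firmware.bin");      // my own build of X using Y
const theirs = sha256Hex("./vendor/firmware.bin"); // what the vendor ships

console.log(mine === theirs
  ? "bit-for-bit identical: the open design is what I'm holding"
  : "mismatch: the device is not a faithful rendering of design X");
```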
I'd expect it to be any mechanism that doesn't do mutual authentication. In other words, the authentication not only proves to the service that you're "you"; it also proves to you that the service is the one you think you are authenticating to. And it does that reliably even in the face of a MITM attack.
It's damned hard to do, and obviously none of SMS, TOTP, or passwords do it. HTTPS + passwords was supposed to do it, and technically does, but in practice no one looks at the domain name. Email + DKIM could do it, but no email client shows you the outcome of DKIM auth, and again, no one would look at that anyway.
WebAuthn / FIDO2 does do it. It's undoubtedly the best option right now, but until tokens are open source and reproducibly built right down to the metal, they aren't "Zero-Trust". You are forced to trust Yubico or Google or whoever, as the tokens they give you are effectively black boxes. Worse, because an open-source token means "easily buildable by many companies", and thus "WebAuthn tokens become a commodity", I expect Yubico to fight it to their dying breath.
That was poorly worded in the article. Among the things it says you should give your users is a WebAuthn token. Inside the WebAuthn token is a random private key it never reveals. That is the thing the "you are you" authentication ultimately relies on, and it is very much a "long-lived credential".
What he is trying to say is more complex. It's something along the lines of: you go to some authentication/authorisation service, prove you're you, and say you want access to a service; it hands you back some short-term credentials you can present to that service, allowing you to use it. You, the authentication provider, and the service you're trying to access might be in different countries. The danger in that scenario is that someone might steal those credentials while they are in transit. One way to mitigate that is to ensure those credentials don't last very long.
So it's a statement about how distributed systems should handle passing credentials among themselves. The user never sees these credentials, and of course never has to remember them. Any temporary credential lasting longer than a person's sleep/wake cycle is considered broken in this world, but it's understood the user will carry with them a relatively long-lived way of proving they are who they say they are.
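As a toy illustration of short-lived credentials the user never sees (the HMAC construction, names, and 15-minute TTL are all mine, not the memo's):

```typescript
// The downstream service only has to check an expiry carried inside a
// token vouched for by the auth service. Stolen tokens die quickly.
import { createHmac, timingSafeEqual } from "node:crypto";

const SIGNING_KEY = "shared-secret-for-illustration-only"; // use real key management
const TTL_MS = 15 * 60 * 1000; // far shorter than a sleep/wake cycle

// Subjects are assumed dot-free to keep the toy parser trivial.
function mint(subject: string): string {
  const payload = `${subject}.${Date.now() + TTL_MS}`;
  const mac = createHmac("sha256", SIGNING_KEY).update(payload).digest("hex");
  return `${payload}.${mac}`;
}

function verify(token: string): boolean {
  const [subject, expiry, mac] = token.split(".");
  if (!subject || !expiry || !mac) return false;
  const expected = createHmac("sha256", SIGNING_KEY)
    .update(`${subject}.${expiry}`).digest("hex");
  const macOk = mac.length === expected.length &&
    timingSafeEqual(Buffer.from(mac), Buffer.from(expected));
  return macOk && Date.now() < Number(expiry); // expired means broken, by design
}
```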
I guess you also think Android/iOS should just get rid of app permissions because users could just use similar software on their desktops without any permissions gating?
Edit: Android/iOS are increasingly popular platforms, the security they pioneer far exceeds their desktop predecessors and has improved the average security posture of millions of mobile-focused users.
Good security is layered. Just because privilege-escalation attacks are sometimes possible without root doesn't mean you throw open the floodgates and ignore the threat of root. The point of banning rooted devices is that privilege-escalation attacks are much easier on rooted devices.
Of course online banking doesn't require locked-down devices, but online banking is more secure on locked-down devices. I don't see why banks should weaken their security posture on root just because they aren't perfect in other areas.
The motivation is not "just" that, or for fun, the motivation is that users should be allowed to control their own devices. And have them keep working.
> I guess you also think Android/iOS should just get rid of app permissions because users could just use similar software on their desktops without any permissions gating?
I want it to work... exactly like app permissions. Where if I root it, I can override things.
> Android/iOS are increasingly popular platforms, the security they pioneer far exceeds their desktop predecessors and has improved the average security posture of millions of mobile-focused users
Having that kind of sysadmin lockdown is useful, but if I want to be my own sysadmin I shouldn't be blacklisted by banks.
This is clearly wrong; rooted devices are much more insecure because root enables low-level access to maliciously alter the system. Malware often requires root and will first attempt to attain root, which of course isn't necessary if a user has manually unlocked root themselves.
> You're claiming it's OK to take away people's agency to decide over a major part of their lives (which, sadly, phones are today) because maybe they could act irresponsibly and do something stupid.
No one is taking away any user's agency. Users are free to root their phones if they wish (many Android phones at least will allow it), but companies are also free to deny these users service. Users are free to avail themselves of any company's service on a non-rooted phone. "Not using rooted phones to access anything you like" is hardly a major loss of agency.
Phone insecurity is very dangerous IMO, much more dangerous really than bathtubs or perhaps knives. You could argue that vehicles are similarly very dangerous and I'd agree. I don't think we're very far off from locked down self-driving cars. Unfortunately we're not there yet with self-driving tech and the current utility of vehicles still outweighs their immense safety risks. You can't really say that about rooted phones. The legitimate benefits of a rooted phone are largely relevant to developers, not the average user, and most users never attempt to tinker with their phone.
SMS authentication is... well, by one reading of PSD2, it's not acceptable. But in the real world it is basically necessary, and not _that_ insecure (if you ignore SIM-swapping attacks etc.). The WYSIWYS aspect comes not from the code but from the message text, which is crucial (and per PSD2 should include at least the amount and... receiver? I forgot). But sure, if people don't read or understand the message, it's not ideal...
While FIDO provides better phishing resistance (than SMS, not necessarily than authenticator apps), it doesn't protect against transaction modification (e.g. man-in-the-browser), and for people who care about and understand security, it is strictly worse.
If you can't proceed with a normal life after you root your phone, you are NOT free to do so; instead you get punished for doing so.
'Man in the browser' seems like a situation where the user's device is compromised. In that case it's not a big stretch that not only the browser but also the SMS-reading app could be compromised.
I.e., the reasonable security requirement should not be security against 'man in the browser', but security against 'user device is compromised'. In that case SMS is worse, as the attacker could bypass it completely, while with FIDO they still need to phish the user into pressing the button.
Very dubious. The trick to phishing is that humans are easily confused about what's going on, and WebAuthn recruits the browser to fix that completely. Your browser isn't confused, the browser knows it is talking to fakebank.example because that's the DNS name which is its business, even if this looks exactly like the Real Bank web site, perfect to the pixel and even fakes the browser chrome to have a URL bar that says realbank.example as you expected.
I don't see bank authentication apps helping here. It's very easy to accidentally reassure the poor humans everything is fine when they're being robbed, because the authentication part seemed to work.
I'm somebody who really cares about and would like to think they understand security very much, and I don't think it's strictly worse at all.
One of the things banks have an ongoing problem with is insider facilitated crime. Which means secrets are a big problem, because the bank (and thus, crooked staff working for the bank) know those secrets. Most of these PSD2 "compliant" solutions rely on secrets, and so are vulnerable to bank insiders. FIDO avoids that because it doesn't rely on secrets†.
† Technically, a typical Security Key has a "secret" key [typically 256-bit AES] baked inside it, but a better word would be symmetric rather than secret: there is no other copy of that symmetric key, so it isn't functionally secret.
> If you can't proceed with a normal life after you root your phone, you are NOT free to do so; instead you get punished for doing so.
Freedom to root doesn't mean freedom from the consequences of rooting. Banking apps are hardly necessary for a normal life, and neither is rooting.
Whoa, I did not know this. That's wild.
That's a fair point, agreed. Privacy needs to be legally recognized as a strong right before we allow more centralization of this sort of thing. (Though sadly it's already pretty centralized, just not by the federal government.)
The hole in this reasoning is that you don't need the app; you can just sign into the bank's website from the mobile browser, and get all the same functionality you'd get from the app. (Maybe you don't get a few things, like mobile check deposits, since they just don't build features like that into websites for the most part.) The experience will sometimes be worse than that of the app, but you can still do all the potentially-dangerous things without it. So why bother locking down the app when the web browser can do all the same things?
> I’d be a bit pissed if Netflix took my money but didn’t run where I wanted it
I actually canceled my HBO Max account when, during the HBO Now -> HBO Max transition, they somehow broke playback on Linux desktop browsers. When I wrote in to support, they claimed it was never supported, so they weren't obligated to care. I canceled on the spot.