This is the sort of absolutism that is so pointless.
At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.
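To make that concrete, here's a toy Rust sketch (the rand crate; both function names are made up for illustration). Both of these "randomize" a token, but only one has an answer to "who's guessing, and how often can they guess":

    use rand::rngs::{OsRng, StdRng};
    use rand::{RngCore, SeedableRng};
    use std::time::{SystemTime, UNIX_EPOCH};

    // "Randomized" from the clock: an attacker who can bound when the
    // token was minted can enumerate the whole seed space offline.
    fn weak_token() -> [u8; 16] {
        let seed = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_secs();
        let mut rng = StdRng::seed_from_u64(seed);
        let mut token = [0u8; 16];
        rng.fill_bytes(&mut token);
        token
    }

    // OS CSPRNG: 128 bits of entropy, no seed for anyone to guess.
    fn strong_token() -> [u8; 16] {
        let mut token = [0u8; 16];
        OsRng.fill_bytes(&mut token);
        token
    }

    fn main() {
        println!("weak:   {:02x?}", weak_token());
        println!("strong: {:02x?}", strong_token());
    }

Same word, "random", completely different answers to the threat-model questions.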
The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
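For what it's worth, here's the shape of that in Rust, with a toy length-prefixed record parser I just made up: the malformed inputs that give a C parser out-of-bounds reads just become an Err here.

    // Record format: 2-byte big-endian length, then that many payload bytes.
    // `get` does bounds-checked slicing, so truncated or lying inputs
    // surface as errors instead of memory corruption.
    fn parse_record(input: &[u8]) -> Result<(&[u8], &[u8]), &'static str> {
        let len_bytes = input.get(..2).ok_or("truncated length prefix")?;
        let len = u16::from_be_bytes([len_bytes[0], len_bytes[1]]) as usize;
        let payload = input.get(2..2 + len).ok_or("truncated payload")?;
        let rest = &input[2 + len..]; // in bounds: the `get` above proved it
        Ok((payload, rest))
    }

    fn main() {
        let wire = [0x00, 0x03, b'a', b'b', b'c', 0xff];
        match parse_record(&wire) {
            Ok((payload, rest)) => println!("payload={payload:?} rest={rest:?}"),
            Err(e) => eprintln!("rejected: {e}"),
        }
    }

Worst case is a rejected message, which is exactly where you want parser failures to live.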
I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make TAO's job harder.
[0] https://arstechnica.com/information-technology/2021/01/hacke...
"only by a nation-state"
This ignores the possibility that the company selling the solution could itself easily defeat the solution.
Google, or another similarly capitalised company that focuses on computers, could easily succeed in attacking these "user protections".
Further, anyone could potentially hire them to assist. What is to stop this, if secrecy is preserved?
We know, for example, that Big Tech companies are motivated by money above all else, and, by and large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when set against the advertising revenue derived from personal data mining.
Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.
The comment on that Ars page is more realistic than the article.
Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.
How do you imagine this would work?
The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.
I think you're imagining a lot of moving parts to the "solution" that don't exist.
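Here's a sketch of the whole thing (assuming the ed25519-dalek and rand crates, with their APIs as I remember them; this is the general shape of challenge-response, not the actual CTAP wire protocol):

    use ed25519_dalek::{Signer, SigningKey, Verifier};
    use rand::rngs::OsRng;

    fn main() {
        // Enrollment: the device mints a keypair; only the public half leaves it.
        let device_key = SigningKey::generate(&mut OsRng);
        let public_key = device_key.verifying_key();

        // Login: the site sends a fresh challenge...
        let challenge = b"site-chosen nonce, never reused";

        // ...the device answers "I am still me" by signing it...
        let signature = device_key.sign(challenge);

        // ...and the site checks the answer against the enrolled public key.
        assert!(public_key.verify(challenge, &signature).is_ok());
    }

Nothing in that flow phones home, and nothing in it is worth exfiltrating to an advertiser.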
Assuming "hacker" could be a company what company would have such a motivation and resources to spy on people. The NSO's of the world, sure. Anyone else. Companies have better things to do than spy on people, right. Not anymore.
What about a company whose business is personal data mining, who goes so far as to sniff people's residential wifi (they lied about it at first when they got caught), collect audio via a "smart" thermostat (Nest), and collect data from an "activity tracker" (Fitbit), a "smartphone OS", a search engine, an e-mail service, web analytics, etc., etc.? Need I go on? I could fill an entire page with all the different Google acquisitions and ways they are mining people's data.
Why are security keys any different? 9 out of 10 things Google sells or gives away are designed to facilitate data collection, but I guess this is the 1 in 10. "Two-factor authentication" has already been abused by Facebook and Twitter, both of which were caught using the data for advertising, but I suppose Google is different.
These companies want personal data. With the exception of Apple, they do not stay in business by selling physical products. Collecting data is what they do and they spend enormous amounts of time and effort doing it.
"That's all I know."
The problem with your neat little model of the world is that it doesn't provide you with actionable predictions. Everything is a massive global conspiracy against you, nothing can be trusted, everybody is in on it, and so you can dismiss everything as just part of the charade, which feels good for a few moments, but still doesn't actually help you make any decisions at all.
> "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising
Right, I mean, if somebody really wanted to help provide working two factor authentication, they'd have to invent a device that offered phishing-proof authentication, didn't rely on sharing "secrets" that might be stolen by hackers, and all while not giving up any personal information and ensuring the user's identity can't be linked from one site to another. That device would look exactly like the FIDO Security Keys we're talking about... huh.
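Here's roughly what "phishing-proof" and "no shared secrets" mean in code, as a sketch (same hedged ed25519-dalek shape as above; WebAuthn's actual origin binding is simplified here to plain concatenation):

    use ed25519_dalek::{Signature, Signer, SigningKey, Verifier};
    use rand::rngs::OsRng;

    // The browser, not the page, supplies the origin that gets signed, so a
    // lookalike domain can only obtain signatures over the *wrong* origin.
    fn sign_assertion(key: &SigningKey, challenge: &[u8], origin: &str) -> Signature {
        let mut msg = challenge.to_vec();
        msg.extend_from_slice(origin.as_bytes());
        key.sign(&msg)
    }

    fn main() {
        let device_key = SigningKey::generate(&mut OsRng);
        let public_key = device_key.verifying_key();
        let challenge = b"fresh-nonce";

        // A phisher relays the real site's challenge from examp1e.com...
        let phished = sign_assertion(&device_key, challenge, "https://examp1e.com");

        // ...but the real site verifies against its own origin, so the
        // relayed assertion is garbage to it.
        let mut expected = challenge.to_vec();
        expected.extend_from_slice(b"https://example.com");
        assert!(public_key.verify(&expected, &phished).is_err());
    }

And since the site stores only a public key, a breach of its database leaks nothing anyone can log in with.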
Actually no, if they weren't really part of a massive conspiracy against o8r3oFTZPE there would be one further thing: instead of only being available from Google, you could just buy these Security Keys from anybody and they'd work. Oh right.
But personal attacks are not cool. Keep it civil, please.
Earlier you gave the example of Facebook harvesting people's phone numbers. That's not just data, that's information. But a Yubikey doesn't know your phone number, how much you weigh, where you live, what type of beer you drink... no information at all.
The genius thing about the FIDO Security Key design is figuring out how to make "Are you still you?" a question we can answer. Notice that it can't answer a question like "Who is this?". Your Yubikey has no idea that you're o8r3oFTZPE. But it does know it is still itself and it can prove that when prompted to do so.
And you might think, "Aha, but it can track me". Nope. It's a passive object unless activated, and it also doesn't have any coherent identity of its own, so sites can't even compare notes on who enrolled to discover that the same Yubikey was used. Your Yubikey can tell when it's being asked if it is still itself, but it needs a secret to do that and nobody else has the secret. All they can do is ask that narrow question, "Are you still you?".
Which of course is very narrowly the exact authentication problem we wanted to solve.
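The "can't compare notes" part is mathematics too. One illustrative way a device could achieve it (real tokens use schemes like key wrapping; this HMAC derivation, using the hmac and sha2 crates, is only meant to show the shape):

    use hmac::{Hmac, Mac};
    use sha2::Sha256;

    type HmacSha256 = Hmac<Sha256>;

    // One secret never leaves the device. Each site gets a keypair seeded
    // from (secret, site identity), so two sites comparing their enrolled
    // public keys see unrelated values and can't tell tokens apart.
    fn per_site_seed(device_secret: &[u8; 32], rp_id: &str) -> [u8; 32] {
        let mut mac = HmacSha256::new_from_slice(device_secret)
            .expect("HMAC accepts any key length");
        mac.update(rp_id.as_bytes());
        mac.finalize().into_bytes().into()
    }

    fn main() {
        let device_secret = [0x42u8; 32]; // stands in for a factory-provisioned secret
        assert_ne!(
            per_site_seed(&device_secret, "example.com"),
            per_site_seed(&device_secret, "other.org"),
        ); // nothing for the two sites to correlate
    }

"Are you still you?" is answerable from the per-site key. "Who is this, across sites?" never is.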
If the solution to the "problem" is giving increasingly more personal information to a tech company, that's not a great solution, IMO. Arguably, from the user's perspective, it's creating a new problem.
Most users are not going to purchase YubiKeys. It's not a matter of whether I use one; what I am concerned about is what other users are being coaxed into doing.
There are many problems with "authentication methods", but the one I'm referring to is giving escalating amounts of personal information to tech companies, even when it's done under the guise of "authentication" or argued to be a fair exchange for "free services". Obviously tech companies love "authenticating" users, as it signals "real" ad targets.
The "tech" industry is riddled with conflicts of interest. That is a problem they are not even attempting to solve. Perhaps regulation is going to solve it for them.