zlacker

[return to "A case against security nihilism"]
1. static+Di[view] [source] 2021-07-20 20:50:05
>>feross+(OP)
Just the other day I suggested using a yubikey, and someone linked me to the Titan side-channel attack, where researchers demonstrated that, with persistent access and a dozen hours of work, they could break the guarantees of a Titan chip[0]. They said "an attacker will just steal it". The researchers, on the other hand, stressed how fundamentally difficult this was to pull off due to the very limited attack surface.

This is the sort of absolutism that is so pointless.

At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.

The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make NSA's TAO's job harder.

[0] https://arstechnica.com/information-technology/2021/01/hacke...

[1] https://www.youtube.com/watch?v=bDJb8WOJYdA

◧◩
2. o8r3oF+Ht[view] [source] 2021-07-20 21:47:44
>>static+Di
From the Ars reference: "There are some steep hurdles to clear for an attack to be successful. A hacker would first have to steal a target's account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning, were it ever to happen in the wild, would likely be done only by a nation-state pursuing its highest-value targets."

"only by a nation-state"

This ignores the possibility that the company selling the solution could itself easily defeat the solution.

Google, or another similarly-capitalised company that focuses on computers, could easily succeed in attacking these "user protections".

Further, anyone could potentially hire them to assist. What is to stop this, if secrecy is preserved?

We know, for example, that Big Tech companies are motivated by money above all else, and, by-and-large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when juxtaposed against advertising services revenue derived from personal data mining.

Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.

The comment on that Ars page is more realistic than the article.

Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.

◧◩◪
3. tialar+r61[view] [source] 2021-07-21 04:06:37
>>o8r3oF+Ht
> This ignores the possibility that the company selling the solution could itself easily defeat the solution.

How do you imagine this would work?

The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.

I think you're imagining a lot of moving parts to the "solution" that don't exist.

◧◩◪◨
4. Peteri+lB1[view] [source] 2021-07-21 09:35:12
>>tialar+r61
A key part of various such tamper-resistant devices is an embedded secret that's very difficult/expensive to extract. However, the manufacturer (i.e. "the company selling the solution") may know the embedded secret without extracting it. Because of that, trust in the solution provider is essential even if it's just simple math.
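To make that concrete, here's a toy symmetric challenge-response sketch (a hypothetical illustration, not the actual SecurID or Titan protocol): anyone who retains a copy of the factory-embedded secret can compute valid responses without ever touching the hardware.

```python
import hmac
import hashlib
import os

def respond(secret: bytes, challenge: bytes) -> bytes:
    """What the token computes: an HMAC over the server's challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# The secret embedded in the device at the factory.
factory_secret = os.urandom(32)

# Legitimate authentication: the server verifies the token's response
# by recomputing it with its own copy of the secret.
challenge = os.urandom(16)
token_response = respond(factory_secret, challenge)

# A manufacturer that quietly retained factory_secret needs no $12,000
# lab and no physical access: it produces the same responses directly.
cloned_response = hmac.new(factory_secret, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(token_response, cloned_response)
```

The hardware's tamper resistance only protects against extracting the secret; it does nothing against a party that already holds it.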

For a practical illustration, see the 2011 attack on RSA (the company) that allowed attackers access to secret values used in generating RSA's SecurID tokens (essentially, cheap devices that do mathematics) allowing them to potentially clone previously issued tokens. Here's one article about the case - https://www.wired.com/story/the-full-story-of-the-stunning-r...

◧◩◪◨⬒
5. tialar+EG1[view] [source] 2021-07-21 10:32:52
>>Peteri+lB1
That's true, and Yubico provides a way to deal with it: just pick a new random number. Because these secrets are typically just AES keys, "picking a random number" is good enough; there is no way to pick a weak one.

If you worry about this attack, you should definitely perform a reset after purchasing the device. This is labelled a "reset" because it invalidates all your credentials: the credentials you enrolled depend on that secret, so if you pick a new random secret those credentials obviously stop working. It therefore makes no sense to do this at random while you own the device, but doing it once when you buy it can't hurt anything.
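The reset-invalidates-everything behaviour can be sketched with a toy model (the `Token` class here is hypothetical; real FIDO keys use a key-wrapping/derivation design whose details differ, but the dependence of credentials on one master secret is the point):

```python
import hmac
import hashlib
import os

class Token:
    """Toy model of a security key: per-site credential keys are
    derived from a single master secret (an assumption for
    illustration; real devices differ in the details)."""

    def __init__(self) -> None:
        self.master = os.urandom(32)   # factory-set secret

    def credential_key(self, site: str) -> bytes:
        # Deterministic derivation: same master + same site -> same key.
        return hmac.new(self.master, site.encode(), hashlib.sha256).digest()

    def reset(self) -> None:
        self.master = os.urandom(32)   # new random secret

token = Token()
before = token.credential_key("example.com")
token.reset()
after = token.credential_key("example.com")

# After a reset, every previously enrolled credential stops verifying,
# and any copy of the old factory secret becomes useless.
assert before != after
```

Which is exactly why a single reset at purchase time neutralises a manufacturer-retained secret at the cost of nothing you've enrolled yet.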

However, although I agree it would be possible for an adversary who makes keys to just remember all the factory set secrets inside them, I will note an important practical difference from RSA SecurID:

For SecurID those are actually shared secrets. It's morally equivalent to TOTP. To authenticate you, the other party needs to know the secret which is baked inside your SecurID. So RSA's rationale was that by remembering the secret they could help their customers (the corporation that ordered 5000 SecurID dongles; I still have some lying around) when they invariably managed to lose their copy of that secret.
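That shared-secret model can be sketched with an RFC 6238-style TOTP function (a simplified stand-in for SecurID's proprietary scheme): verification works by the server running the very same computation, which is only possible because it holds the same secret the token does.

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, t: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style one-time code. Token and server both run this
    exact function, so both must hold the same secret."""
    counter = struct.pack(">Q", t // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# The RFC 6238 test secret; in SecurID's case, RSA kept the equivalent
# server-side copies, which is what the 2011 attackers went after.
shared = b"12345678901234567890"

# Matches the RFC 6238 SHA-1 test vector at T=59 (last 6 digits of 94287082).
assert totp(shared, 59) == "287082"
```

Steal the server-side (or manufacturer-side) copy of `shared` and you can mint valid codes forever; no hardware attack needed.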

Whereas for a FIDO token, that secret is not shared. Each key needs a secret, but nobody else has a legitimate purpose for knowing it. So whereas RSA were arguably just foolish for keeping these keys, they at least had a reason; if you found out that, say, Yubico kept the secrets, that's a red flag: they have no reason to do that except malevolence.
