zlacker

A case against security nihilism

submitted by feross+(OP) on 2021-07-20 19:18:11 | 468 points 332 comments
[view article] [source]

NOTE: showing posts with links only
24. static+Di[view] [source] 2021-07-20 20:50:05
>>feross+(OP)
Just the other day I suggested using a YubiKey, and someone linked me to the Titan side channel, where researchers demonstrated that, with persistent access and a dozen hours of work, they could break the guarantees of a Titan chip[0]. They said "an attacker will just steal it". The researchers, on the other hand, stressed how fundamentally difficult this was to pull off due to the very limited attack surface.

This is the sort of absolutism that is so pointless.

At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.

The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make TAO's job harder.

[0] https://arstechnica.com/information-technology/2021/01/hacke...

[1] https://www.youtube.com/watch?v=bDJb8WOJYdA

◧◩
36. ignora+El[view] [source] [discussion] 2021-07-20 21:04:57
>>static+Di
I'll conclude with a philosophical note about software design: Assessing the security of software via the question "can we find any security flaws in it?" is like assessing the structure of a bridge by asking the question "has it collapsed yet?" -- it is the most important question, to be certain, but it also profoundly misses the point. Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera); secure software should likewise be designed to tolerate failures within individual components. Using a MAC to make sure that an attacker cannot exploit a bug (or a side channel) in encryption code is an example of this approach: If everything works as designed, this adds nothing to the security of the system; but in the real world where components fail, it can mean the difference between being compromised or not. The concept of "security in depth" is not new to network administrators; but it's time for software engineers to start applying the same engineering principles within individual applications as well.

-cperciva, http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac....
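
The construction cperciva describes is mechanical enough to sketch. Here's a minimal illustration in Python, assuming the third-party pyca/cryptography package for AES-CTR; the function names are mine, and a real system should reach for an audited AEAD mode (AES-GCM, ChaCha20-Poly1305) rather than hand-assembled primitives:

    # Encrypt-then-MAC: MAC the ciphertext, and verify the MAC before the
    # decryption code ever sees attacker-controlled input. Illustration only.
    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = nonce + enc.update(plaintext) + enc.finalize()
        tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
        return ct + tag

    def mac_then_decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
        ct, tag = blob[:-32], blob[-32:]
        # Even if the cipher code (or a side channel in it) is buggy, this
        # check keeps forged input away from it -- the "safety margin" above.
        if not hmac.compare_digest(hmac.new(mac_key, ct, hashlib.sha256).digest(), tag):
            raise ValueError("MAC verification failed")
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(ct[:16])).decryptor()
        return dec.update(ct[16:]) + dec.finalize()

Note the separate encryption and MAC keys: another margin, so a weakness in one primitive doesn't automatically compromise the other.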

53. Veserv+5q[view] [source] 2021-07-20 21:26:18
>>feross+(OP)
The article correctly refutes the silly binary argument that many people fall back on: since perfection is impossible, we must accept an imperfect solution; and since the current solutions are clearly imperfect, the status quo must be acceptable, because imperfect solutions are acceptable.

However, the article falls right into the next failed model of considering everything in terms of relative security. We should make things “better”, we should make things “harder”, but those terms mean very little. 1% better is “better”. Making a broken hashing function take 2x as long to break makes things “harder”, but it does not make things more secure since it is already hopelessly inadequate. The problem with considering things only in relative terms to existing solutions is that it ignores defining the problem, and more importantly, it does not tell you if you solved your problem.

The correct model is the one used by engineering disciplines: specify objective, quantifiable standards for what is adequate, then verify that the solution passes those standards. If you do not define what is adequate, how do you know whether you have achieved even the bare minimum of what you need, or how far your solution is from it?

For instance, consider the same NSO case as the article. Did Apple do an adequate job, what is an adequate job, and how far away are they?

Well, let us assume that the average duration of surveillance for the 50,000 phones was 1 year per phone. Now what is a good level of protection against that kind of surveillance? I think a reasonable standard is making the phone not the easiest way to surveil a person for that length of time: it should be cheaper to do it the old-fashioned way, so the phone does not make you more vulnerable on average. So, how much does it cost to surveil a person and listen in on their conversations for a year the old-fashioned way? $1k, $10k, $100k? If we assume $10k, then surveilling all 50,000 phones should cost an attacker at least 50,000 × $10k = $500M; that is the level of security needed to adequately protect against NSO-type threats.

So, how far away is Apple from that? Well, Zerodium pays $1.5M per iMessage zero-click [1]. If we assume they burned 10 of them, infecting a mere 5,000 phones per exploit with a trivially wormable complete compromise, that would amount to ~$15M at market price. Adding in the rest of the work, it would maybe cost $20M all together, worst case. So, if you agree with this analysis (if you do not, feel free to plug in your own estimates), then Apple has achieved ~4% of the necessary level and would need a ~25x improvement in its processes to achieve adequate security against this type of attack. I think that should make it clear why things are so bad. "Best in class" security needs to improve by over 10x to become adequate. It should be no wonder these systems are so defenseless.

[1] http://zerodium.com/program.html
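
If you want to sanity-check that arithmetic, here it is as a tiny Python script; every input is an assumption from the comment above, so plug in your own numbers:

    # Back-of-envelope version of the parent's estimate.
    phones            = 50_000     # phones on the NSO target list
    years_per_phone   = 1          # assumed average surveillance duration
    old_fashioned_usd = 10_000     # assumed cost to surveil one person for a year

    # Standard: compromising the phones should cost at least what
    # old-fashioned surveillance of the same people would.
    adequate_usd = phones * years_per_phone * old_fashioned_usd   # $500M

    exploit_usd     = 1_500_000    # Zerodium payout per iMessage zero-click [1]
    exploits_burned = 10           # assumed: 5,000 infections per exploit
    other_costs_usd = 5_000_000    # assumed cost of the rest of the operation

    actual_usd = exploit_usd * exploits_burned + other_costs_usd   # ~$20M

    print(f"adequate ${adequate_usd:,} vs actual ~${actual_usd:,}: "
          f"{actual_usd / adequate_usd:.0%} of the necessary level")
    # -> adequate $500,000,000 vs actual ~$20,000,000: 4% of the necessary level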

◧◩◪
92. mrtest+fx[view] [source] [discussion] 2021-07-20 22:14:12
>>tptace+Xf
>Nobody has any credible story for how regulations would prevent stuff like this from happening.

We do have some of those already.

https://www.faa.gov/space/streamlined_licensing_process/medi...

◧◩◪◨
109. rrdhar+KB[view] [source] [discussion] 2021-07-20 22:53:50
>>TaupeR+OA
I’d wager Dropbox’s Magic Pocket is up there with equivalent C/C++ based I/O / SAN stacks:

https://dropbox.tech/infrastructure/extending-magic-pocket-i...

◧◩◪◨⬒
131. stevek+XI[view] [source] [discussion] 2021-07-21 00:05:14
>>wahern+cI
First Microsoft, then two different teams at Google, and then Mozilla, and then someone else, all found that roughly 70% of security vulnerabilities reported in their products are due to memory unsafety issues. That roughly the same number keeps coming up across all of the biggest companies in our industry lends it some weight.

Here's the first Microsoft one: https://www.zdnet.com/article/microsoft-70-percent-of-all-se...

And Chrome: https://www.zdnet.com/article/chrome-70-of-all-security-bugs...

◧◩
137. roywig+iK[view] [source] [discussion] 2021-07-21 00:17:23
>>ENOTTY+gE
The NSO target list has like 15,000 Mexican phone numbers on it. You don't think making exploits more expensive would force attackers to prioritize only the very highest value targets?

In the limit, a trillion-dollar exploit that will be worthless once discovered will only be used with the utmost possible care, on a very tiny number of people. That's way better than something you can play around with and use to target thousands.

https://www.theguardian.com/news/2021/jul/19/fifty-people-cl...

◧◩◪◨⬒⬓
141. roywig+SK[view] [source] [discussion] 2021-07-21 00:23:56
>>Animal+Iy
The original tale of international cyber espionage involved a satellite link:

https://en.wikipedia.org/wiki/The_Cuckoo%27s_Egg_(book)

◧◩◪◨
142. o8r3oF+zL[view] [source] [discussion] 2021-07-21 00:31:05
>>static+Dv
"But it's totally legitimate if that's your threat model."

Not mine. I have no plans to purchase a security key from Google. I have no threat model.

Nothing in the comment you replied to mentioned "trust" but since you raised the issue I did a search. It seems there are actually people commenting online who claim they do not trust Google; this has been going on for years. Can you believe it. Their CEO has called it out multiple times.^1 "[S]tupid waste of thought", as you call it. (That's not what I would call it.) It's everywhere.^2 The message to support.google and the response are quite entertaining.

1. For example, https://web.archive.org/web/20160601234401/http://allthingsd...

2.

https://support.google.com/googlenest/thread/14123369/what-i...

https://www.inc.com/jason-aten/google-is-absolutely-listenin...

https://www.consumerwatchdog.org/blog/people-dont-trust-goog...

https://www.wnd.com/2015/03/i-dont-trust-google-nor-should-y...

https://www.theguardian.com/technology/2020/jan/03/google-ex...

https://www.forbes.com/sites/kateoflahertyuk/2018/10/10/this...

◧◩
144. kragen+KL[view] [source] [discussion] 2021-07-21 00:33:13
>>ENOTTY+gE
Computer security isn't a board game where my unit can Damage your unit if my unit has more Combat than your unit has Defense, and once your unit is Damaged enough you lose it, and you can buy a card with 5 Combat for 5 Gold, and so on. It's not a contest of strength. It's not about who has the most gold. It's about who fucks up.

If you follow the guidelines in http://canonical.org/~kragen/cryptsetup to encrypt the disk on a new laptop, it will take you an hour (US$100), plus ten practice reboots over the next day (US$100), plus 5 seconds every time you boot forever after (say, another US$100), for a total of about US$300. A brute-force attack by an attacker who has killed you or stolen your laptop while it was off is still possible. My estimate in that page is that it will cost US$1.9 trillion. That's the nature of modern cryptography. (The estimate is probably a bit out of date: it might cost less than US$1 trillion now, due to improved hardware.)
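
For a feel of where estimates like that come from, here's the shape of the calculation in Python; every number below is my own assumption (word count, guess rate, GPU price), not a figure from kragen's page, though it happens to land in the same ballpark:

    # Expected brute-force cost = (keyspace / 2) / guess rate * price.
    words    = 6          # correct-horse-battery-staple words in the passphrase
    wordlist = 2048       # ~11 bits of entropy per word -> 2^66 total
    keyspace = wordlist ** words

    guesses_per_sec = 10_000   # per GPU; LUKS key derivation is deliberately slow
    gpu_hour_usd    = 1.0      # assumed rental price

    expected_guesses = keyspace / 2    # on average you search half the space
    gpu_hours = expected_guesses / guesses_per_sec / 3600
    print(f"~${gpu_hours * gpu_hour_usd:.1e}")   # ~$1e12: about a trillion dollars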

Other forms of software security are considerably more absolute. Regardless of what you see in the movies, if your RAM is functioning properly and if there isn't a cryptographic route, there's no attack that will allow one seL4 process to write to the memory of another seL4 process it hasn't been granted write access to. Not for US$1B, not for US$1T, not for US$1000T. It's like trying to find a number that when multiplied by 0 gives you 5. The money you spend on attacking the problem is simply irrelevant.

Usually, though, the situation is considerably more absolute in the other direction: there are always abundant holes in the protections, and it's just a matter of finding one of them.

Now, of course there are other ways someone might be able to decrypt your laptop disk, other than stealing it and applying brute force. They might trick you into typing the passphrase in a public place where they can see the surveillance camera. They might use a security hole in your browser to gain RCE on your laptop and then a local privilege escalation hole to gain root and read the LUKS encryption key from RAM. They might trick you into typing the passphrase on the wrong computer at a conference by handing you the wrong laptop. They might pay you to do a job where you ssh several times a day into a server that only allows password authentication, assigning you a correct horse battery staple passphrase you can't change, until one day you slip up and you type your LUKS passphrase instead. They might steal your laptop while it's on, freeze the RAM with freeze spray, and pop the frozen RAM out of your motherboard and into their own before the bits of your key schedule decay. They might break into your house and implant a hardware keylogger in your keyboard. They might do a Zoom call with you and get you to boot up the laptop so they can listen to the sound of you typing the passphrase on the keyboard. (The correct horse battery staple passphrases I favor are especially vulnerable to that.) They might remotely turn on the microphone in your cellphone, if they have a way into your cellphone, and do the same. They might use phased-array passive radar across the street to measure the movements of your fingers from the variations in the reflection of Wi-Fi signals. They might go home with you from a bar, slip you a little scopolamine, and suggest that you show them something on your (turned-off) laptop while they secretly film your typing.

The key thing about these attacks is that they are all cheap. Well, the last one might cost a few thousand dollars of equipment and tens of thousands of dollars in rent. None of them requires a lot of money. They just require knowledge, planning, and follow-through.

And the same thing is true about defenses against this kind of thing. Don't run a browser on your secure laptop. Don't keep it in your bedroom. Keep your Bitcoin in a Trezor, not your laptop (and obviously not Coinbase), so that when your laptop does get popped you don't lose it all.

You could argue that, with dollars, you can hire people who have knowledge, do planning, and follow through. But that's difficult. It's much easier to spend a million (or a billion, or a trillion) dollars hiring people who don't. In fact, large amounts of money are better at attracting con men, like antivirus vendors, than they are at attracting people like the seL4 team.

Here in Argentina we had a megalomaniacal dictator in the 01940s and 01950s who was determined to develop a domestic nuclear power industry, above all to gain access to atomic bombs. Werner Heisenberg was invited to visit in 01947; hundreds of German physicists were spirited out of the ruined, occupied postwar Germany. National laboratories were built, laboratory-scale nuclear fusion was announced to have been successful, promises to only seek peaceful energy were published, plans for a nationwide network of fusion energy plants were announced, hundreds of millions of dollars were spent (in today's money), presidential gold medals were awarded...

...and finally in 01952 it turned out to be a fraud, or at best the kind of wishful-thinking-fueled bad labwork we routinely see from the free-energy crowd: https://en.wikipedia.org/wiki/Huemul_Project

Meanwhile, a different megalomaniacal dictator who'd made somewhat better choices about which physicists to trust detonated his first H-bomb in 01953.

◧◩◪◨
145. kragen+gN[view] [source] [discussion] 2021-07-21 00:47:57
>>drran+EC
Buffer overflows are older than C.

One of the reasons for the decline of the British computer industry was that Tony Hoare, at one of the big companies (Elliott Brothers, later part of ICL), implemented Fortran by compiling it to Algol, and compiled the Algol with bounds checks. This would have been around 01965, according to his Turing Award lecture. They failed to win customers away from the IBM 7090 (according to https://www.infoq.com/presentations/Null-References-The-Bill...) because the customers' Fortran programs were all full of buffer overflows ("subscript errors", in Hoare's terminology), and so the pesky Algol runtime system was causing them to abort!

◧◩◪◨
147. Aussie+IN[view] [source] [discussion] 2021-07-21 00:52:31
>>User23+xI
>No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong.

Perhaps not with material properties, but very small errors can cause catastrophic failure.

One of the most famous ones would be the Hyatt Regency collapse, where a contractor accidentally doubled the load on a walkway connection because he used two shorter rods attached to the top and bottom of a crossbeam, rather than a single longer rod that passed through it.

https://en.m.wikipedia.org/wiki/Hyatt_Regency_walkway_collap...

In electrical engineering, it's very common to have ICs that function as a microcontroller at 5.5V, and an egg cooker at 5.6V.

Microsoft lost hundreds of millions of dollars repairing the original Xbox 360 because the solder on the GPU cracked under thermal stress.

It's definitely not to the same extreme as software, but tiny errors do have catastrophic consequences in physical systems too.

◧◩◪
159. 3gg+9Q[view] [source] [discussion] 2021-07-21 01:16:53
>>api+FI
https://xkcd.com/538/
◧◩◪◨⬒
160. nyanpa+aQ[view] [source] [discussion] 2021-07-21 01:17:32
>>pa7ch+nN
I've seen Qt Creator segfault due to the CMake plugin doing some strange QStringList operations on an inconsistent "implicitly shared" collection, which I guess broke due to multithreading (though I'm not sure exactly what happened). In RSS Guard, performing two different "background sync" operations causes two different threads to touch the same list collections, producing a segfault. (These are due to multiple threads touching the same collection/pointers; racing on primitive values is probably less likely to lead directly to memory unsafety.)

Apparently in Golang, you can achieve memory unsafety through data races: https://blog.stalkr.net/2015/04/golang-data-races-to-break-m... (though I'm not sure if a workaround has been added to prevent memory unsafety).

◧◩◪◨
189. meowfa+e21[view] [source] [discussion] 2021-07-21 03:13:38
>>UncleM+kK
>Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.

It reminds me a little of some of the free-wheeling nuclear physicists in the Manhattan Project - probably some of the smartest people on the planet - being hubristically lax with safety: https://en.wikipedia.org/wiki/Demon_core#Second_incident

>[...] The experimenter needed to maintain a slight separation between the reflector halves in order to stay below criticality. The standard protocol was to use shims between the halves, as allowing them to close completely could result in the instantaneous formation of a critical mass and a lethal power excursion.

>Under Slotin's own unapproved protocol, the shims were not used and the only thing preventing the closure was the blade of a standard flat-tipped screwdriver manipulated in Slotin's other hand. Slotin, who was given to bravado, became the local expert, performing the test on almost a dozen occasions, often in his trademark blue jeans and cowboy boots, in front of a roomful of observers. Enrico Fermi reportedly told Slotin and others they would be "dead within a year" if they continued performing the test in that manner. Scientists referred to this flirting with the possibility of a nuclear chain reaction as "tickling the dragon's tail", based on a remark by physicist Richard Feynman, who compared the experiments to "tickling the tail of a sleeping dragon".

>On the day of the accident, Slotin's screwdriver slipped outward a fraction of an inch while he was lowering the top reflector, allowing the reflector to fall into place around the core. Instantly, there was a flash of blue light and a wave of heat across Slotin's skin; the core had become supercritical, releasing an intense burst of neutron radiation estimated to have lasted about a half second. Slotin quickly twisted his wrist, flipping the top shell to the floor. The heating of the core and shells stopped the criticality within seconds of its initiation, while Slotin's reaction prevented a recurrence and ended the accident. The position of Slotin's body over the apparatus also shielded the others from much of the neutron radiation, but he received a lethal dose of 1,000 rad (10 Gy) neutron and 114 rad (1.14 Gy) gamma radiation in under a second and died nine days later from acute radiation poisoning.

◧◩
192. static+v31[view] [source] [discussion] 2021-07-21 03:29:05
>>o8r3oF+N21
> 1. Blaming the language instead of the programmer will not lead to improved program quality.

I disagree. Blaming the language is critically important. Tony Hoare (holds a Turing Award, is a genius) puts it well.

> a programming language designer should be responsible for the mistakes that are made by the programmers using the language. [...]

> It's very easy to persuade the customers of your language that everything that goes wrong is their fault and not yours. I rejected that...

[0]

> Users will still write programs in C for their own personal use that are smaller and faster than ones written in memory-safe languages

Users will always write C. No, they won't always be smaller and faster.

> 3. Those in the future who are practiced in C will have a significant advantage in being able to leverage an enormous body of legacy code

Much to society's loss, I'm sure.

> and may in fact be taught to fear it

Cool. Same way we teach people to not roll their own crypto. This is a good thing. Please be more afraid.

> There is a tremendous amount of C code that DOES NOT contain buffer overflows or use-after-free errors.

No one cares. Not only is that not provably the case (nor is it likely the case), it's also irrelevant when I'm typing on a computer with a C kernel, numerous C libraries, in a C++ browser, or texting someone via a C++ app that has to parse arbitrary text, emojis, videos, etc.

> Find me a buffer overflow or use-after-free in one of djb's programs.

No, that's a stupid waste of my time. Thankfully, others seem more willing to do so[1] - I hate to even entertain such an arbitrary, fallacious benchmark, but it's funny so I'll do it just this once.

[0] http://blog.mattcallanan.net/2010/09/tony-hoare-billion-doll...

[1] http://www.guninski.com/where_do_you_want_billg_to_go_today_...

◧◩◪◨⬒⬓⬔
202. pcwalt+J71[view] [source] [discussion] 2021-07-21 04:21:37
>>dralle+P11
I don't think Zig is going to be memory safe in practice, unless they add a GC or introduce a Rust-like system. All of the mitigations I've seen come from that language--for example, quarantine--are things that we've had for years in hardened memory allocators for C++ like Chromium PartitionAlloc [1] and GrapheneOS hardened_malloc [2]. These have been great mitigations, but have not been effective in achieving memory safety.

Put another way: Anything you could do in the malloc/free model that Zig uses right now is something you could do in C++, or C for that matter. Maybe there's some super-hardened malloc design yet to be found that achieves memory safety in practice for C++. But we've been looking for decades and haven't found such a thing--except for one family of techniques broadly known as garbage collection (which, IMO, should be on the table for systems programming; Chromium did it as part of the Oilpan project and it works well there).

There is always a temptation to think "mitigations will eliminate bugs this time around"! But, frankly, at this point I feel that pushing mitigations as a viable alternative to memory safety for new code is dangerous (as opposed to pushing mitigations for existing code, which is very valuable work). We've been developing mitigations for 40 years and they have not eliminated the vulnerabilities. There's little reason to think that if we just try harder we will succeed.

[1]: https://chromium.googlesource.com/chromium/src/+/HEAD/base/a...

[2]: https://github.com/GrapheneOS/hardened_malloc
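
For readers who haven't met the term: "quarantine" means freed blocks are parked for a while instead of being reused immediately, so a dangling pointer is less likely to alias a fresh, live object. A toy Python sketch of the idea (emphatically not PartitionAlloc's or hardened_malloc's actual design) shows why it mitigates use-after-free without eliminating it:

    from collections import deque

    class QuarantiningAllocator:
        def __init__(self, quarantine_slots=1024):
            self.quarantine = deque()    # freed blocks, oldest first
            self.slots = quarantine_slots
            self.free_list = []          # blocks that have aged out of quarantine

        def alloc(self):
            # Only blocks that have sat out the full quarantine get reused.
            return self.free_list.pop() if self.free_list else bytearray(64)

        def free(self, block):
            self.quarantine.append(block)
            if len(self.quarantine) > self.slots:
                # The oldest block ages out and becomes reusable again. This is
                # why quarantine mitigates but cannot eliminate use-after-free:
                # a patient attacker simply waits (or allocates) it out.
                self.free_list.append(self.quarantine.popleft())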

◧◩◪◨
208. kobebr+Bb1[view] [source] [discussion] 2021-07-21 05:09:17
>>UncleM+kK
what about https://github.com/seL4/seL4?
◧◩◪◨⬒⬓
224. scoutt+Tn1[view] [source] [discussion] 2021-07-21 07:19:06
>>UncleM+IJ
> A rust program that doesn't make use of the unsafe keyword will not have memory safety bugs

https://www.cvedetails.com/vulnerability-list/vendor_id-1902...

What if the bug is in std?

What if I use a bugged Vec::from_iter?

What if I use the bugged zip implementation from std?

You'll probably blame unsafe functions, but those unsafe functions were in std, written by the people who know Rust better than anyone.

Imagine what you and I could do writing unsafe.

Imagine trusting a 3rd party library...

◧◩◪◨
243. Peteri+lB1[view] [source] [discussion] 2021-07-21 09:35:12
>>tialar+r61
A key part of various such tamper-resistant devices is an embedded secret that's very difficult/expensive to extract. However, the manufacturer (i.e. the company selling the solution) may know the embedded secret without extracting it. Because of that, trust in the solution provider is essential even if it's just simple math.

For a practical illustration, see the 2011 attack on RSA (the company) that gave attackers access to secret values used in generating RSA's SecurID tokens (essentially, cheap devices that do mathematics), potentially allowing them to clone previously issued tokens. Here's one article about the case - https://www.wired.com/story/the-full-story-of-the-stunning-r...
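
The "cheap devices that do mathematics" point can be made concrete: a SecurID-style token is essentially a keyed function of the current time, so anyone holding the seed computes exactly the same codes as the token. A stdlib-Python sketch in the spirit of RFC 6238 TOTP (RSA's actual SecurID algorithm is proprietary; this is not their code):

    import hmac, hashlib, struct, time

    def totp(seed, t=None, step=30, digits=6):
        counter = int((time.time() if t is None else t) // step)
        mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                    # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The token in the user's pocket and an attacker holding the stolen seed
    # database produce the same six digits -- no physical extraction needed.
    seed = b"seed-exfiltrated-from-the-vendor"    # hypothetical
    assert totp(seed, t=1_700_000_000) == totp(seed, t=1_700_000_000)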

◧◩◪
246. Peteri+aC1[view] [source] [discussion] 2021-07-21 09:45:46
>>bitexp+dh
An illustrative counterexample of "if you are an actual target for state level actors you likely will know about it" is the case of Intellect Services, a small company (essentially, father and daughter) developing a custom accounting product (M.E.Doc) that assists preparation of Ukrainian tax documents.

It turned out that they were a target for state level actors, as their software update distribution mechanism was used in a supply chain attack to infect many companies worldwide (major examples are Maersk and Merck) with NotPetya, a data destruction (not ransomware, as it's often wrongly described) attack, causing billions of dollars in damage. Here's an article about them https://www.bleepingcomputer.com/news/security/m-e-doc-softw...

In essence, you may be an actual target for state level actors not because they care about you personally, but because you just supply some service to someone whom they're targeting.

◧◩◪
249. wepple+HD1[view] [source] [discussion] 2021-07-21 10:03:40
>>x4e+bB
Yeah. Here’s a 2016 write up when Pegasus (presumably a different deployment) was leaked and reversed: https://citizenlab.ca/2016/08/million-dollar-dissident-iphon...
◧◩◪◨⬒
263. o8r3oF+jM1[view] [source] [discussion] 2021-07-21 11:22:04
>>Peteri+iE1
I think you are misattributing the source of Heartbleed. Can you point out the security issues in NaCl. It is written in C. According to this "no one can write C" logic, there must be bugs because "no one can write C". https://nacl.cace-project.eu/

The other bizarre aspect of this logic is that not only is the author of the code irrelevant but apparently the task is, too. It would appear to apply to, e.g., even the most simple programs. The only factor that matters is "written in C". I use sed every day. It's written in C. Show me the bugs. I will probably be dead before someone finds them. Will I be using a "memory-safe" sed before then.

266. Tepix+EN1[view] [source] 2021-07-21 11:33:04
>>feross+(OP)
Related: Fuck privacy nihilism

https://twitter.com/evacide/status/1416968243642724353?s=21

The same logic applies. You will not achieve perfect privacy online but there is plenty you can do to make tracking you so much harder.

◧◩◪◨⬒⬓⬔⧯▣▦
317. static+hj4[view] [source] [discussion] 2021-07-22 02:17:10
>>o8r3oF+Kg4
> Thank you for refraining from repeating this absurd hyperbole.

To be fair, I wouldn't quite label it as "absurd", though it is hyperbole. With near-extreme levels of discipline you can write very solid C code - this involves having ~100% MCDC coverage, using sanitizers, static analysis tools, and likely outright banning a few functions. It's doable, especially if your project doesn't have to move fast or has extreme requirements (spaceships).

> Can you guide me to some Rust programs that are smaller than their C counterparts.

Rust has a big standard library compared to C++, so by default you end up with a lot of "extra" stuff that helps deal with things like Unicode, etc. If you drop std and dynamically link your libraries you can drop a lot of space and get down to C levels.

There are a number of examples like this: https://cliffle.com/blog/bare-metal-wasm/

> is the (apparently) enormous size of the development environment relative to a GCC toolchain.

I can't really relate to this. I have a 1TB SSD, 32GB of ram, and an 8 core CPU. My rust build tools are a single command to install and I don't know or care how much space they take up. If you do, I don't really know why, but sure that's a difference maybe.

> All programmers are not created equal no matter what languages they use

While this is true, it doesn't matter practically.

1. We can't restrict who writes C, so even if programmer skill was the kicker, it isn't enforceable.

2. There are lots of projects that invest absolutely incredible amounts of time and money, cutting edge research, into writing safe low level code. Billions are spent on this. And the problem persists. Very very few projects seem to be able to achieve "actually looks safe" C code.

> Memory-safe languages are great but it seems like they just enable people to become far too ambitious in what they think they can take on.

I don't really see how Rust is any different from Python, Java, or anything else in that regard.
