zlacker

[return to "A case against security nihilism"]
1. static+Di[view] [source] 2021-07-20 20:50:05
>>feross+(OP)
Just the other day I suggested using a YubiKey, and someone linked me to the Titan side-channel attack, where researchers demonstrated that, with persistent access and a dozen hours of work, they could break the guarantees of a Titan chip[0]. They said "an attacker will just steal it". The researchers, on the other hand, stressed how fundamentally difficult this was to pull off, given the very limited attack surface.

This is the sort of absolutism that is so pointless.

At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.

The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
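
To make that concrete, here's a minimal sketch of my own (a made-up length-prefixed record format, nothing from the article): parsing untrusted bytes in safe Rust, where a lying length field becomes an Err instead of an out-of-bounds read, because slice access is bounds-checked.

    /// Hypothetical format: 4-byte big-endian length, then payload.
    fn parse_record(input: &[u8]) -> Result<&[u8], &'static str> {
        // `get` returns None instead of reading past the end of the buffer.
        let header: [u8; 4] = input
            .get(..4)
            .ok_or("truncated header")?
            .try_into()
            .unwrap(); // the slice is exactly 4 bytes here, so this can't fail
        let len = u32::from_be_bytes(header) as usize;
        // A length field larger than the buffer is an error, not a wild read.
        input[4..].get(..len).ok_or("length exceeds buffer")
    }

    fn main() {
        // Well-formed: header says 2 payload bytes and 2 bytes follow.
        assert_eq!(parse_record(&[0, 0, 0, 2, 0xAA, 0xBB]).unwrap(), [0xAA, 0xBB]);
        // Malicious: header claims 99 bytes; we get an Err, not memory corruption.
        assert!(parse_record(&[0, 0, 0, 99, 0xAA]).is_err());
    }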

I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO (Tailored Access Operations), talking about how to make TAO's job harder.

[0] https://arstechnica.com/information-technology/2021/01/hacke...

[1] https://www.youtube.com/watch?v=bDJb8WOJYdA

◧◩
2. crater+6q[view] [source] 2021-07-20 21:26:20
>>static+Di
> I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

I'm beginning to worry that every time Rust is mentioned as the solution to memory unsafety, we move a little closer to irrational exuberance about how much value that safety really has over time. Maybe let's not jump onto that bandwagon too enthusiastically.

◧◩◪
3. tialar+DD[view] [source] 2021-07-20 23:11:14
>>crater+6q
Not just memory safety. Rust also prevents data races in concurrent programs. And there are a few more things too.
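
A tiny sketch of my own (not from any particular codebase) of the data-race half: two spawned threads can't both mutably capture a plain counter, so the compiler forces shared mutation through a thread-safe type such as Arc<Mutex<_>>.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // A bare `let mut count = 0u64;` captured mutably by two threads is
        // rejected at compile time; the Mutex version is the one that builds.
        let count = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let count = Arc::clone(&count);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        *count.lock().unwrap() += 1; // no data race is possible here
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }

        println!("{}", *count.lock().unwrap()); // always 4000
    }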

But these tricks have the same root: what if we took all the research academics have been writing about for decades (improvements to the state of the art, ideas that exist in toy languages nobody uses) and actually industrialised it, so the resulting language could be used for Firefox and Linux, not just to get a paper into a prestigious journal or conference?

If ten years from now everybody is writing their low-level code in a memory-safe new C++ epoch, or in Zig, that wouldn't astonish me at all. Rust is nice, I like Rust, lots of people like Rust, but there are other people who noticed this was a good idea and are doing it. The idea is much better than Rust is. If you can't do Rust but you can do this idea, you should.

If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

Imagine it's 1995 and you have just seen an Internet streaming radio station demonstrated, using RealAudio.

Is RealAudio the future? In 25 years will everybody be using RealAudio? No, it turns out they will not. But is this all just stupid hype for nothing? Er, no. In 25 years everybody will understand what an "Internet streaming radio station" is; they just won't be using RealAudio. The actual technology they use might be MPEG audio layer III, aka MP3 (which exists in 1995 but is little known), or it might be something else; they do not care.

◧◩◪◨
4. pcwalt+RU[view] [source] 2021-07-21 02:01:20
>>tialar+DD
Well, Zig isn't memory safe (as implemented today; they could add a GC), so it's not a good example of a Rust alternative in this domain. But I agree with your overall point, and would add that you could replace Zig with any one of the dozens of popular memory-safe languages, even old standbys like Java. The point is not to migrate to one language in particular, but rather to migrate to languages in which memory errors are compiler bugs instead of application bugs.
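
To spell out what "memory errors are compiler bugs instead of application bugs" looks like in, say, Rust, here's a sketch of my own, deliberately written so that it will not compile: the classic use-after-free shape is caught before it can ever ship as an application bug.

    // This example is *meant* to fail to compile.
    fn main() {
        let dangling: &str;
        {
            let owned = String::from("hello");
            dangling = owned.as_str();
        } // `owned` is freed here, while `dangling` still borrows from it
        println!("{dangling}");
        // rustc: error[E0597]: `owned` does not live long enough
    }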
◧◩◪◨⬒
5. tialar+gX[view] [source] 2021-07-21 02:20:57
>>pcwalt+RU
I could have sworn I'd read that Zig's ambition was to be memory safe. Given ten years I don't find that impossible. Indeed I gave C++ the same benefit of the doubt on that timeline. But, when I just searched I couldn't find whatever I'd seen before on that topic.
◧◩◪◨⬒⬓
6. dralle+P11[view] [source] 2021-07-21 03:08:57
>>tialar+gX
The Zig approach is "memory safe in practice" vs "memory safe in theory". They don't have any aspirations to total memory safety like Rust, but they want to get most of the way there with a lot less overhead.

Basically they have a lot of runtime checks enabled in debug mode (where you do the majority of your testing) that are then disabled in the release binary.

Additionally the approach they've taken to allocators means that you can use special allocators for testing that can perform even more checks, including leak detection.

I think it's a great idea and a really interesting approach but it's definitely not as rigorous as what Rust provides.

◧◩◪◨⬒⬓⬔
7. pcwalt+J71[view] [source] 2021-07-21 04:21:37
>>dralle+P11
I don't think Zig is going to be memory safe in practice, unless they add a GC or introduce a Rust-like system. All of the mitigations I've seen coming out of that language (quarantine, for example) are things that we've had for years in hardened memory allocators for C++, like Chromium's PartitionAlloc [1] and GrapheneOS's hardened_malloc [2]. These have been great mitigations, but they have not been effective in achieving memory safety.

Put another way: Anything you could do in the malloc/free model that Zig uses right now is something you could do in C++, or C for that matter. Maybe there's some super-hardened malloc design yet to be found that achieves memory safety in practice for C++. But we've been looking for decades and haven't found such a thing--except for one family of techniques broadly known as garbage collection (which, IMO, should be on the table for systems programming; Chromium did it as part of the Oilpan project and it works well there).
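
For readers who haven't run into "quarantine": roughly, the allocator delays reuse of freed memory so a use-after-free is less likely to alias a fresh, attacker-controlled allocation. A toy sketch of my own of the idea (not how PartitionAlloc or hardened_malloc are actually implemented), which also shows why it narrows the window rather than closing it:

    use std::collections::VecDeque;

    /// Toy "quarantine": freed slots are parked in a FIFO and only handed out
    /// again after `depth` further frees, so a stale index tends to hit a
    /// poisoned slot rather than a new allocation. The use-after-free bug
    /// itself is still there.
    struct SlotPool {
        slots: Vec<Option<Vec<u8>>>,
        quarantine: VecDeque<usize>, // freed slot indices, not yet reusable
        depth: usize,
    }

    impl SlotPool {
        fn new(depth: usize) -> Self {
            Self { slots: Vec::new(), quarantine: VecDeque::new(), depth }
        }

        fn alloc(&mut self, size: usize) -> usize {
            // Reuse a freed slot only once it has sat behind `depth` other frees.
            if self.quarantine.len() > self.depth {
                let i = self.quarantine.pop_front().unwrap();
                self.slots[i] = Some(vec![0; size]);
                return i;
            }
            self.slots.push(Some(vec![0; size]));
            self.slots.len() - 1
        }

        fn free(&mut self, i: usize) {
            self.slots[i] = None;         // poison: stale reads now see None
            self.quarantine.push_back(i); // but the index is not reusable yet
        }
    }

    fn main() {
        let mut pool = SlotPool::new(2);
        let a = pool.alloc(16);
        pool.free(a);
        // With the quarantine not yet full, the next alloc gets a brand-new
        // slot, not slot `a`, so a dangling index can't alias the new data.
        let b = pool.alloc(16);
        assert_ne!(a, b);
    }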

There is always a temptation to think "mitigations will eliminate bugs this time around"! But, frankly, at this point I feel that pushing mitigations as a viable alternative to memory safety for new code is dangerous (as opposed to pushing mitigations for existing code, which is very valuable work). We've been developing mitigations for 40 years and they have not eliminated the vulnerabilities. There's little reason to think that if we just try harder we will succeed.

[1]: https://chromium.googlesource.com/chromium/src/+/HEAD/base/a...

[2]: https://github.com/GrapheneOS/hardened_malloc
