zlacker

[return to "A case against security nihilism"]
1. static+Di[view] [source] 2021-07-20 20:50:05
>>feross+(OP)
Just the other day I suggested using a yubikey, and someone linked me to the Titan sidechannel where researchers demonstrated that, with persistent access, and a dozen hours of work, they could break the guarantees of a Titan chip[0]. They said "an attacker will just steal it". The researchers, on the other hand, stressed how very fundamentally difficult this was to pull off due to very limited attack surface.

This is the sort of absolutism that is so pointless.

At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.

The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
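To make that concrete, here is a minimal sketch (the wire format and names are mine, nothing from the parent post) of what bounds-checked parsing of untrusted input looks like in safe Rust: malformed input becomes an ordinary error value, never an out-of-bounds read.

```rust
// Hypothetical wire format: [1-byte length][payload...].
// Every slice access is bounds-checked; truncated or empty input
// yields None instead of undefined behaviour.
fn parse_record(input: &[u8]) -> Option<&[u8]> {
    let (&len, rest) = input.split_first()?;
    rest.get(..len as usize) // None if the payload is shorter than claimed
}

fn main() {
    assert_eq!(parse_record(&[3, b'a', b'b', b'c']), Some(&b"abc"[..]));
    assert_eq!(parse_record(&[5, b'a']), None); // truncated: rejected, no crash
    assert_eq!(parse_record(&[]), None);
}
```

The worst a hostile input can do here is make the function return None.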

I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make NSA's TAO's job harder.

[0] https://arstechnica.com/information-technology/2021/01/hacke...

[1] https://www.youtube.com/watch?v=bDJb8WOJYdA

◧◩
2. crater+6q[view] [source] 2021-07-20 21:26:20
>>static+Di
> I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

I'm beginning to worry that every time Rust is mentioned as the solution to every memory-unsafe operation, we drift towards irrational exuberance about how much value that safety really has over time. Maybe let's not jump too enthusiastically onto that bandwagon.

◧◩◪
3. tialar+DD[view] [source] 2021-07-20 23:11:14
>>crater+6q
Not just memory safety. Rust also prevents data races in concurrent programs. And there are a few more things too.
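A standard illustration of the data-race point (my toy example, not from the comment): safe Rust won't compile threads mutating shared data directly, so you're pushed towards something like a Mutex, and the race disappears by construction.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Several threads bump a shared counter. Sharing a bare `&mut usize`
// across threads is a compile error; Arc<Mutex<_>> is one sanctioned way.
fn parallel_count(threads: usize, iters: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..iters {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000); // no lost updates
}
```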

But these tricks have the same root: what if we took all this research academics have been writing about for decades -- improvements to the state of the art, ideas which exist in toy languages nobody uses -- and actually industrialised it, so we can use the resulting language for Firefox and Linux, not just to get a paper into a prestigious journal or conference?

If ten years from now everybody is writing their low-level code in a memory safe new C++ epoch, or in Zig, that wouldn't astonish me at all. Rust is nice, I like Rust, lots of people like Rust, but there are other people who noticed this was a good idea and are doing it. The idea is much better than Rust is. If you can't do Rust but you can do this idea, you should.

If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

Imagine it's 1995, you have just seen an Internet streaming radio station demonstrated, using RealAudio.

Is RealAudio the future? In 25 years will everybody be using RealAudio? No, it turns out they will not. But is this all just stupid hype for nothing? Er, no. In 25 years everybody will understand what an "Internet streaming radio station" is; they just won't be using RealAudio. The actual technology they use might be MPEG audio layer III, aka MP3 (which exists in 1995 but is little known), or it might be something else -- they don't care.

◧◩◪◨
4. SolarN+Xg3[view] [source] 2021-07-21 19:01:19
>>tialar+DD
> If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

I mean, to be clear, modern C++ can be effectively as safe as Rust is. It requires some discipline and code review, but I can construct a toolchain and libraries that will tell me about memory violations just as well as Rust will. Better, even, in some ways.

I think people don't realize just how much modern C++ has changed.

◧◩◪◨⬒
5. static+6A3[view] [source] 2021-07-21 20:31:48
>>SolarN+Xg3
It must have changed a shitload in the last 2-3 years if that's the case. What tools are you referring to? I'm pretty familiar with C++ tooling but I haven't paid attention for a little while.
◧◩◪◨⬒⬓
6. SolarN+Ne6[view] [source] 2021-07-22 17:43:23
>>static+6A3
The modern standard library, plus some helpers, is the big part of it. Compiler warnings as errors are very good at catching bad situations if you follow the rules (e.g. don't allow people to just make raw pointers, follow the rule of five). I never said it was as easy as in Rust.

As for tooling, things like Valgrind provide an excellent mechanism for ensuring that the program was memory safe, even in its "unsafe" areas or when calling into external libraries (something that Rust can't provide without similar tools anyway).

My broader point is that safety is more than just a compiler saying "ok you did it", though that certainly helps. I would trust well written safety focused C++ over Rust. On the other hand, I would trust randomly written Rust over C++. Rust is good for raising the lower end of the bar, but not really the top of it unless paired with a culture and ecosystem of safety focus around the language.

◧◩◪◨⬒⬓⬔
7. tialar+rf8[view] [source] 2021-07-23 11:56:12
>>SolarN+Ne6
Address sanitizers and valgrind tell you whether your program did something unsafe while you were analysing, but they can't tell you whether the program can do something unsafe.

Since we're in a thread about security this is a crucial difference. I'm sure Amy, Bob, Charlie and Deb were able to use the new version of Puppy Simulator successfully for hours without any sign of unsafety in the sanitizer. Good to go. Unfortunately, the program was unsafe and Evil Emma had no problem finding a way to attack it. Amy, Bob, Charlie and Deb had no reason to try naming a Puppy in Puppy Simulator with 256 NUL characters, so, they didn't, but Emma did and now she's got Administrator rights on your server. Oops.

In contrast, safe Rust is actually safe. Not just "it was safe in my tests" -- it's just safe.
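A toy demonstration of that difference (mine, not the poster's): the out-of-bounds access below is still a bug, but safe Rust turns it into a deterministic panic you'd see on the first run, rather than silent memory corruption that only a lucky sanitizer run might catch.

```rust
fn main() {
    let buf = vec![1u8, 2, 3];
    let i = buf.len() + 7; // an out-of-range index computed at runtime
    // Indexing past the end is a guaranteed panic, never a silent overread.
    let result = std::panic::catch_unwind(|| buf[i]);
    assert!(result.is_err());
    // The checked accessor turns the same mistake into a plain value.
    assert_eq!(buf.get(i), None);
}
```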

Even though it might seem like this doesn't buy you anything when of course fundamental stuff must use unsafe somewhere, the safe/unsafe boundary does end up buying you something by clearly delineating responsibility.

For example, sometimes in the Rust community you will see developers saying they had to use unsafe because, alas, the stupid compiler won't optimise the safe version of their code properly -- say, it has a bounds check they don't need, so they used "unsafe" to avoid it. But surprisingly often, another programmer looks at their "clever" use of unsafe, and it turns out they did need that bounds check they got rid of: their code is unsafe for some parameters.
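Here's the shape of that situation as a hypothetical (both functions are mine): the "clever" unchecked version is only sound under an invariant, so Rust makes you write that invariant down as an `unsafe` contract instead of quietly dropping the check.

```rust
/// Safe version: slicing inserts exactly the bounds check that turns a
/// bad `n` into a panic instead of undefined behaviour.
fn sum_prefix(xs: &[u32], n: usize) -> u32 {
    xs[..n].iter().sum()
}

/// The "optimised" variant someone might write to skip the check. It is
/// only sound under an invariant, so Rust forces it to be `unsafe` and
/// the contract to be stated.
///
/// # Safety
/// The caller must guarantee `n <= xs.len()`.
unsafe fn sum_prefix_unchecked(xs: &[u32], n: usize) -> u32 {
    (0..n).map(|i| unsafe { *xs.get_unchecked(i) }).sum()
}

fn main() {
    let xs = [1u32, 2, 3, 4];
    assert_eq!(sum_prefix(&xs, 3), 6);
    // The caller now visibly takes responsibility for the invariant:
    assert_eq!(unsafe { sum_prefix_unchecked(&xs, 3) }, 6);
}
```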

For example, just like the C++ standard vector, Rust's Vec is a wasteful solution for a dozen integers or whatever: it does a heap allocation, it has all this logic for growing and shrinking -- I don't need that for a dozen integers. There are at least two Rust "small vector" replacements. One of them makes liberal use of "unsafe", arguing that it's needed to go a little faster. The other is entirely safe. Guess which one has had numerous safety bugs... right.
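To show the entirely-safe version isn't magic, here's a toy fixed-capacity vector in purely safe Rust (mine, and far simpler than any real crate -- it cheats by requiring a zeroable element type): no heap allocation, no `unsafe`, and overflow is an error value rather than a memory bug.

```rust
// A toy, entirely-safe fixed-capacity vector for up to 4 u32s.
struct SmallVec4 {
    buf: [u32; 4],
    len: usize,
}

impl SmallVec4 {
    fn new() -> Self {
        SmallVec4 { buf: [0; 4], len: 0 }
    }

    // When full, hand the value back instead of writing out of bounds.
    fn push(&mut self, x: u32) -> Result<(), u32> {
        if self.len < self.buf.len() {
            self.buf[self.len] = x;
            self.len += 1;
            Ok(())
        } else {
            Err(x)
        }
    }

    fn as_slice(&self) -> &[u32] {
        &self.buf[..self.len]
    }
}

fn main() {
    let mut v = SmallVec4::new();
    for i in 0..4u32 {
        assert!(v.push(i).is_ok());
    }
    assert_eq!(v.push(99), Err(99)); // full: rejected as a value
    assert_eq!(v.as_slice(), &[0, 1, 2, 3]);
}
```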

Over in the C++ world, if you do this sort of thing, the developer comes back saying duh, of course my function will cause mayhem if you give it unreasonable parameters, that was your fault -- and maybe they update the documentation, or maybe they don't bother. But in Rust we've got this nice clean line in the sand: that function is unsafe, and if you can't do better, label it "unsafe" so that it can't be called from safe code.

This discipline doesn't exist in C++. The spirit is willing but the flesh (well, the language syntax in this case) is too weak. Everything is always potentially unsafe and you are never more than one mistake from disaster.
