This is the sort of absolutism that is so pointless.
At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.
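To make that concrete, here's a minimal sketch of my own (the token scheme, the LCG constants, and the 10-second window are all invented for illustration) of what "we'll randomize this value so it's harder to guess" looks like when nobody asks those questions: a token seeded from the wall clock, recoverable by anyone who knows roughly when it was issued.

    // Hypothetical sketch: a "random" token seeded from the clock.
    // The LCG is NOT cryptographic; this is the anti-pattern, not advice.
    use std::time::{SystemTime, UNIX_EPOCH};

    fn lcg_next(state: &mut u64) -> u64 {
        *state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        *state
    }

    fn make_token() -> u64 {
        // "Randomized" value, seeded from the current time in milliseconds.
        let mut seed = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_millis() as u64;
        lcg_next(&mut seed)
    }

    fn main() {
        let token = make_token();
        // The attacker's side of the threat model: they know the issue
        // time to within ~10 seconds, so they replay every candidate seed.
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_millis() as u64;
        for guess in now.saturating_sub(10_000)..=now {
            let mut s = guess;
            if lcg_next(&mut s) == token {
                println!("recovered token from seed {guess}");
                return;
            }
        }
    }

Ten thousand guesses and the "randomization" is gone. Whether that matters depends entirely on who's guessing and how often they can, which is the question that never got asked.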
The use of memory-unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
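For what it's worth, here's a minimal sketch of what that looks like (my own toy example; the length-prefixed format is invented): parsing untrusted bytes in Rust, where a lying length field becomes an Err instead of a buffer over-read.

    // Hypothetical sketch: parse one length-prefixed record from
    // untrusted input. Every access is bounds-checked.
    fn parse_record(input: &[u8]) -> Result<&[u8], &'static str> {
        // First two bytes: big-endian payload length.
        let header = input.get(..2).ok_or("truncated header")?;
        let len = u16::from_be_bytes([header[0], header[1]]) as usize;
        // .get() returns None when the claimed length runs past the
        // end of the buffer -- the classic over-read bug class in C.
        input.get(2..2 + len).ok_or("length exceeds buffer")
    }

    fn main() {
        assert_eq!(parse_record(&[0, 3, b'a', b'b', b'c']), Ok(&b"abc"[..]));
        // A malicious length fails cleanly instead of reading past the end.
        assert_eq!(parse_record(&[0xff, 0xff, 1, 2]), Err("length exceeds buffer"));
    }

The same shape in C is one missed comparison away from an out-of-bounds read; here the safe path is also the path of least resistance.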
I'll also link this talk[0], for the millionth time. It's Rob Joyce, chief of NSA's TAO, talking about how to make TAO's job harder.
[0] https://arstechnica.com/information-technology/2021/01/hacke...
I'm beginning to worry that every time Rust is offered as the solution to memory unsafety, we drift toward irrational exuberance about how much value that safety really delivers over time. Maybe let's not jump onto that bandwagon too enthusiastically.
There are likely many other examples of this, say, Java codebases free of memory-safety issues. Java makes memory-safety guarantees very similar to Rust's, so we can extrapolate, using common sense, that the findings roughly translate.
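Concretely, the shared guarantee is that an out-of-bounds access is a defined, deterministic failure rather than silent corruption. A small sketch of my own in Rust (Java's equivalent behavior, noted in the comments, is a thrown ArrayIndexOutOfBoundsException):

    // Sketch: an attacker-influenced index cannot read outside the
    // buffer. Rust either returns None (checked access) or panics
    // (direct indexing); Java throws ArrayIndexOutOfBoundsException.
    // In C the same access is undefined behavior.
    fn main() {
        let buf = [1u8, 2, 3];
        let i = 7; // imagine this arrived in untrusted input

        match buf.get(i) {
            Some(b) => println!("byte: {b}"),
            None => println!("index {i} rejected"),
        }
        // `buf[i]` would panic with a bounds-check failure here,
        // never read adjacent memory.
    }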
Common sense is a really powerful tool for these sorts of conversations. "Proof" and "evidence" are complex things, and yet the world goes on with assumptions that turn out to hold quite well.
The Mozilla case study is not a real-world study. It simply looks at the types of bugs that existed and says "I promise these wouldn't have existed if we had used Rust". Would Rust have introduced new bugs? Would there be an additional cost to using Rust? We don't know and probably never will. What we care about is preventing real-world damage. Does Rust prevent real-world damage? We have no idea.