The comment seemed to be referring to Rust's safety guarantees about undefined behaviour, like use-after-free.
Linus seems to have a completely different definition of "safety", one that conflates allocation failures, indexing out of bounds, and division by zero with memory safety. Rust doesn't claim to prevent those failures, only that they can't turn into undefined behaviour, and the comment clearly refers to undefined behaviour. Obviously those other problems are real problems, just not ones that Rust claims to solve.
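To make the distinction concrete, here's a minimal Rust sketch (plain userspace Rust, nothing kernel-specific): a use-after-free is rejected at compile time, while an out-of-bounds access compiles fine and simply fails in a defined way.

```rust
fn main() {
    // A use-after-free does not compile in safe Rust:
    //
    //     let s = String::from("hi");
    //     drop(s);
    //     println!("{s}"); // error[E0382]: borrow of moved value: `s`
    //
    // An out-of-bounds index, by contrast, compiles fine. Rust's only
    // promise is that it fails in a defined way (a panic, or None here)
    // rather than reading arbitrary memory.
    let v = vec![1, 2, 3];
    match v.get(10) {
        Some(x) => println!("{x}"),
        None => println!("out of bounds, but no undefined behaviour"),
    }
}
```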
Edit: Reading the chain further along, it increasingly feels like Linus is arguing against a strawman.
You should read the email thread, as Linus explains it in clear terms.
Take for instance Linus' insightful follow-up post:
If his initial interpretation and expectation of the Rustacean response are in fact correct, the line of argumentation does not seem wrong per se, but I do think it is bad practice in adversarial conversations to silently skip forward several steps in the argument and respond to what you expect (their response to your response)^n to be, instead of the immediate argument at hand.
So I can understand where Linus comes from.
But to restate briefly: the answer varies wildly between kernel and user programs, because a user program failing hard on corrupt state can still report that failure/bug, whereas a kernel panic is a difficult-to-report problem (and breaks a bunch of automated reporting tooling).
So in answer: Read the discussion.
Engineers should want the immediate stop, because that's safer, especially in safety-critical situations.
The rules of argument existed long before the Linux kernel. You don't get to change terms introduced within an argument with a clear meaning because it helps you create a strawman. If you want to change the definition of a term mid-argument, you telegraph it. Once again, this is called conflation.
That's not the issue, though. It's that "safe" means something is actually safe. My house isn't safe if it's on fire, even if the house is in a safe neighborhood. Linus' claim is that "Rust people" sometimes themselves conflate memory safety with general code safety, simply because "safe" is in the name. So much so that they will at times sacrifice code quality to achieve this goal, despite (a) memory safety not being real safety and (b) there being no way to guarantee memory safety in the kernel anyway. What he is saying is that "Rust people" (whatever that means) are at times trading off real safety, or real code maintenance/performance, for "Rust safety."
>a compiler - or language infrastructure - that says "my rules are so ingrained that I cannot do that" is not one that is valid for kernel work.
And
>I think you are missing just how many things are "unsafe" in certain contexts and cannot be validated.
>This is not some kind of "a few special things".
>This is things like absolutely _anything_ that allocates memory, or takes a lock, or does a number of other things.
>Those things are simply not "safe" if you hold a spinlock, or if you are in a RCU read-locked region.
>And there is literally no way to check for it in certain configurations. None.
You can judge whether he is correct, but he never said Rust's safety implies absolute safety, only that some Rust users are treating it that way by sacrificing the code for it. If that's the case, then it makes a lot of sense to start using a more sensible word like "guaranteed" instead of "safe". I think part of what contributes to this idea is that "unsafe" code is written with the keyword "unsafe", as if code not written that way is safe, and code written with "unsafe" is bad. That's not to say that "unsafe" actually implies any of that - all it means is that the code is not guaranteed to be memory safe (see the sketch after these definitions) - but according to Linus it creates a certain mentality which is incongruent with the nature of kernel development. And the reason for that is that "safe" and "unsafe" are general English words with strong connotations, such as:
>protected from or not exposed to danger or risk; not likely to be harmed or lost.
>uninjured; with no harm done.
And for unsafe:
>able or likely to cause harm, damage, or loss
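To illustrate that distinction: `unsafe` marks code whose correctness the compiler cannot verify, not code that is wrong. Here's the classic example in the spirit of the standard library's `split_at_mut` - the unsafe block is sound, but the proof lives in a comment and in review rather than in the type system.

```rust
use std::slice;

/// Splits a mutable slice into two disjoint mutable halves.
fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = values.len();
    let ptr = values.as_mut_ptr();
    assert!(mid <= len);
    // SAFETY: the two raw-pointer slices cover disjoint ranges of the
    // original slice, so handing out two &mut never aliases. The borrow
    // checker can't see that, which is exactly why the block is `unsafe`.
    unsafe {
        (
            slice::from_raw_parts_mut(ptr, mid),
            slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4, 5];
    let (left, right) = split_at_mut(&mut data, 2);
    left[0] = 10;
    right[0] = 30;
    assert_eq!(data, [10, 2, 30, 4, 5]);
}
```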
1) needing to reload a wifi driver to reinitialize hardware (with a tiny probability of memory corruption), OR choosing to reboot as soon as convenient (with a tiny probability of corrupting the latest saved files)
2) losing unsaved files for sure and not even knowing what caused the crash
Real engineers, like, say, the people who write the code for the machines that fly to Mars, don't want "oops, that's unexpected, ruin the entire mission because that's safer". Same for the Linux kernel.
Safety-critical systems will try to recover to a working state as much as possible. They are designed with redundancy so that if one path fails, they can use path 2 or path 3 to reach a safe, usable state.
As I've said over and over, both approaches - "limp along" and "reboot before causing harm" - need to remain options, for different scenarios. Anyone who treats the one use case they're familiar with as the only one which should drive policy for everyone is doing the community a disservice.
The other half is that the kernel has a lot of rules about what is safe to do where, and Rust has to be able to follow those rules, or not be used in those contexts. This is the GFP_ATOMIC part.
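As a rough illustration of why the compiler can't enforce those rules on its own, here's a hypothetical Rust sketch - `SpinlockGuard` and `kmalloc_sleeping` are illustrative stand-ins, not real kernel bindings. The borrow checker is perfectly happy, yet the code violates the kernel's context rules.

```rust
// Hypothetical stand-in for a held spinlock (not a real kernel type).
struct SpinlockGuard;

/// Stand-in for a GFP_KERNEL-style allocation: in the kernel this kind
/// of call may sleep while the allocator reclaims memory.
fn kmalloc_sleeping() -> Box<[u8]> {
    vec![0u8; 4096].into_boxed_slice()
}

fn handler(_guard: &SpinlockGuard) {
    // "Safe" Rust by the compiler's rules, yet a real kernel bug:
    // sleeping while a spinlock is held can deadlock a CPU. The type
    // system has no notion of "atomic context", so nothing here fails
    // to compile; only kernel-specific APIs or review can catch it.
    let _buf = kmalloc_sleeping();
}

fn main() {
    let guard = SpinlockGuard;
    handler(&guard);
}
```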
Am I?
You suppose a lot of things about me from literally a bunch of words.
"A 'tiny probability of memory corruption' can easily become a CVE" is still FUD, because is simply not true in most cases. The words "tiny" and "easily" show the bias here.
The rest of the conversation seems like a symptom of hypervigilance: fixation on potential threats (dangerous people, animals, or situations).
Fortunately, the decision isn't up to you either.