A lot of modern userspace code, including Rust code in the standard library, operates on the idea that invariant failures (AKA "programmer errors") should cause some sort of assertion failure or crash (a Rust or Go `panic`, a C/C++ `assert`, etc.). In the kernel, Linus argues, failing loudly is worse than trying to keep going, because failing would also take down the failure-reporting mechanisms.
He advocates for a sort of soft failure, where the code tells you you're entering unknown territory and then goes ahead and does whatever it was going to do. Maybe it crashes later, maybe it returns the wrong answer; who knows. The only thing it won't do is halt the kernel at the point the error was detected.
Think of the following Rust API for an array, which needs to be able to handle the case of a user reading an index outside its bounds:
struct Array<T> { ... }

impl<T> Array<T> {
    fn len(&self) -> usize;

    // if idx >= len, panic
    fn get_or_panic(&self, idx: usize) -> T;

    // if idx >= len, return None
    fn get_or_none(&self, idx: usize) -> Option<T>;

    // if idx >= len, print a stack trace and return
    // who knows what
    unsafe fn get_or_undefined(&self, idx: usize) -> T;
}
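The signatures above can be fleshed out into a minimal runnable sketch. The `Vec` backing, the `Clone` bound, and the `eprintln!`-based reporting are assumptions for illustration; an actual kernel would report via something like `WARN_ON` rather than stderr:

```rust
struct Array<T> {
    items: Vec<T>,
}

impl<T: Clone> Array<T> {
    fn len(&self) -> usize {
        self.items.len()
    }

    // Fail loudly: halt right here if the invariant is violated.
    fn get_or_panic(&self, idx: usize) -> T {
        self.items[idx].clone() // out-of-bounds indexing panics
    }

    // Push the decision to the caller.
    fn get_or_none(&self, idx: usize) -> Option<T> {
        self.items.get(idx).cloned()
    }

    // Soft failure: report the violation, then keep going with
    // whatever happens to be there (undefined behavior when idx >= len).
    unsafe fn get_or_undefined(&self, idx: usize) -> T {
        if idx >= self.items.len() {
            eprintln!("invariant violated: idx {} >= len {}", idx, self.items.len());
        }
        unsafe { self.items.get_unchecked(idx).clone() }
    }
}
```

Note that `get_or_undefined` must be `unsafe` precisely because the caller, not the callee, becomes responsible for upholding the bounds invariant.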
The first two are safe by the Rust definition, because they can't cause memory-unsafe behavior. The last two are safe by the Linus/Linux definition, because they won't cause a kernel panic. If you have to choose between #1 and #3, Linus is putting his foot down and saying that the kernel's answer is #3.

The fact that arbitrary programs are undecidable is a red herring here.
I would like to learn otherwise, but even a React JS+HTML page is undecidable... its scope is limited by Chrome's V8 JS engine (like a VM), but within that scope I don't think you can prove anything more. Otherwise we could just build a static analysis that checks whether it will leak passwords...
Depending on the semantic property to check for, writing such an algorithm isn't trivial. But the Rust compiler, for example, does it for memory safety, for the subset of valid Rust programs that don't use `unsafe`.
The only sure way I can think of is to force your program to go through a narrower, non-Turing-complete interface. Like sending data over a network after serialization, where we could limit the deserialization process to a non-Turing-complete format (JSON, YAML?).
The same goes for code that uses a non-Turing-complete API, like memory allocation in a dedicated per-process space, or the Rust borrow mechanics that the compiler enforces.
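The "narrow, guaranteed-terminating interface" idea can be sketched concretely. This toy deserializer (the name `parse_flat` and the key=value format are made up for illustration) makes a single pass over its input with no recursion, so the work it does is bounded by the input length, and termination is trivially decidable:

```rust
// A deliberately narrow, guaranteed-terminating deserializer:
// one pass over "key=value" lines, no recursion, no loops whose
// bounds depend on anything but the input length.
fn parse_flat(input: &str) -> Vec<(String, String)> {
    let mut out = Vec::new();
    for line in input.lines() {
        if let Some((k, v)) = line.split_once('=') {
            out.push((k.trim().to_string(), v.trim().to_string()));
        }
        // Lines without '=' are silently skipped; a real format
        // would have to decide how to report them.
    }
    out
}
```

Contrast this with a deserializer for a format that allows references or macros, where evaluating the input can itself become arbitrary computation.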
But my point is, everyday programs are "arbitrary programs" and not a red herring. Surely that holds from the kernel's perspective, which is Linus's point imo.
Regarding the second question, in the general case you have to guess or think hard, and proceed by trial and error. You notice that the analyzer takes more time than you’re willing to wait, so you stop it and try to change your program in order to fix that problem.
We already have that situation today, because the Rust type system is Turing-complete. That means the Rust compiler may in principle need an infinite amount of time to type-check a program. Normally the types used in actual programs don't trigger that situation (and the compiler would likely run out of memory first anyway).
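The type-level computation that makes this possible is easy to demonstrate: trait resolution can evaluate recursively defined types at compile time. This Peano-numeral sketch is a standard illustration, not something from the kernel discussion:

```rust
// Peano numerals encoded as types; the compiler "runs" the
// recursion during trait resolution, i.e. at type-check time.
struct Zero;
struct Succ<N>(std::marker::PhantomData<N>);

trait ToUsize {
    const VALUE: usize;
}

impl ToUsize for Zero {
    const VALUE: usize = 0;
}

impl<N: ToUsize> ToUsize for Succ<N> {
    const VALUE: usize = 1 + N::VALUE;
}
```

Deepening the nesting makes the compiler do more work per type-check; with unbounded recursion in trait impls, the compiler can be made to loop until it hits its recursion limit.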
By the way, even if Rust's type system weren't Turing-complete, the kind of type inference it uses takes exponential time in the worst case, which in practice is almost as bad as a non-halting case, because you can't afford to wait a hundred or more years for your program to finish compiling.
> But my point is, everyday program are "arbitrary program"
No, most programs we write are from a very limited subset of all possible programs. This is because we already reason in our heads about the validity and suitability of our programs.