Maybe my misunderstanding comes from my ignorance of the kernel's architecture, but surely there's a way to segregate operations into logical fallible tasks, so that a failure inside a task aborts that task but doesn't bring down the whole system, and in particular not a sensitive part like kernel error reporting? Or are we talking about panics inside this sensitive part?
Bubbling errors up within fallible tasks can be implemented with panics by unwinding up to the fallible task's boundary.
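As a userspace sketch of that idea (assuming `std` and unwinding are available, which the kernel itself does not have — kernel Rust builds with panic=abort): `std::panic::catch_unwind` catches a panic at a boundary and turns it into an `Err`, so the caller keeps running. The `run_task` name and signature here are hypothetical, just for illustration.

```rust
use std::panic::{self, catch_unwind};

// A hypothetical fallible-task boundary: any panic inside the task is
// caught here and converted into an Err, so the caller keeps running.
fn run_task<F>(task: F) -> Result<i32, String>
where
    F: FnOnce() -> i32 + panic::UnwindSafe,
{
    catch_unwind(task).map_err(|e| {
        // Try to recover the panic payload, if it was a &str or String.
        e.downcast_ref::<&str>()
            .map(|s| s.to_string())
            .or_else(|| e.downcast_ref::<String>().cloned())
            .unwrap_or_else(|| "unknown panic".to_string())
    })
}

fn main() {
    // A failing task aborts only itself...
    let failed = run_task(|| panic!("bad input"));
    assert!(failed.is_err());
    // ...while the surrounding code (and later tasks) keep going.
    let ok = run_task(|| 42);
    assert_eq!(ok, Ok(42));
}
```

The boundary is whatever `catch_unwind` wraps; everything inside it is "the task", everything outside survives the failure.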
To my understanding, this is exactly what any modern OS does with user-space processes?
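The same isolation can be observed from userspace: a child process that dies does not take its parent (let alone the OS) with it. A minimal sketch, assuming a Unix-like system where the `false` utility exists and simply exits nonzero:

```rust
use std::process::Command;

fn main() {
    // Spawn a child process that fails (here, `false` just exits nonzero;
    // a segfaulting or panicking child would behave the same from the
    // parent's point of view).
    let status = Command::new("false").status().expect("failed to spawn");
    assert!(!status.success());
    // The parent — playing the OS's role here — observes the failure
    // and simply carries on.
}
```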
I always have the hardest time in discussions with people arguing for or against "you should stop computation on an incorrect result". Which computations should you stop? Surely we're not advocating for bursting the entire computer into flames; there has to be a boundary. So my take is to start by defining the boundaries, and yes, to stop computation up to those boundaries.
Things like "kernel error reporting" don't exist as a discrete element. Sure, you might decide to stop everything and only dump the log to earlycon, but running around with a serial cable to every system that crashed would be rather annoying. For all the kernel knows, the only way to get anything to the outside world might be through a USB Ethernet adapter over a connection tunneled through a userspace TUN device, at which point essentially the whole kernel must keep running.