This type of thinking also follows from decades of experience.
For some reason the software engineering world largely abandoned esteem and respect for all of the above.
To imply this was a software bug is a pretty silly representation - the system was poorly engineered and didn't have proper contingencies for sensor disagreement. This is pretty clearly a design/engineering error with a software component.
Besides, the guy said "rarely ever matter" for a reason, not "explicitly never impact things"... Bit of a silly comment from you IMO
In the case of the 737 MAX, the software was a workaround for a physical constraint; that doesn't mean the software doesn't matter. Most software is designed as a workaround for some physical or mental constraint.
As the amount of coordination increases, the number of failure modes tends to grow quite fast. That's why software failures in physical, safety-critical systems are not trivially corrected. There are a lot of second order effects that need to be considered.
I would love to see a day when redundancy like this is just a standardized, accepted practice rather than a stand-up debate. Easier said than done of course.
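A minimal sketch of what that standardized redundancy could look like: median voting plus a disagreement check across three redundant sensors. The values and tolerances here are made up for illustration, not taken from any real flight system.

```python
def vote(readings: list[float]) -> float:
    """Select the median of redundant sensor readings.

    With three sensors, a single faulty reading can never drag the
    selected value outside the range of the two healthy ones.
    """
    ordered = sorted(readings)
    return ordered[len(ordered) // 2]


def disagreement(readings: list[float], tolerance: float) -> bool:
    """Flag the channel set when the spread exceeds a tolerance, so the
    system can degrade gracefully instead of trusting a single input."""
    return max(readings) - min(readings) > tolerance


# One sensor gone bad out of three (hypothetical values):
print(vote([5.1, 5.0, 47.3]))                          # 5.1
print(disagreement([5.1, 5.0, 47.3], tolerance=2.0))   # True
```

The point of the disagreement check is exactly the contingency the MCAS design lacked: the system knows when its inputs can't all be right, rather than silently acting on one of them.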
Add civil engineering to that nowadays - both buildings and roads.
Sure, there are regulations and licensing, but quite often the entity financing the whole thing cares little about such things.
Clients who want NASA quality can have it if they bring NASA budgets and timelines.
In cases where fault tolerance isn't as robust, it's for the same reasons as other disciplines you mentioned: budget and importance.
The pilot couldn't even turn MCAS off originally. That's not a software thing, that's a "who the F designed this" thing.
The main contingency with most software is that you fix it.
It’s the cost when something fucks up.
If I’m holding my phone near a cliff, and I rely on it for navigation and I’m hours from civilization, I’m a little more careful, not because I’m normally super careful. It’s because — in that specific scenario — losing my phone would cost me so much and the chance of it happening is much more likely.
Space companies spend a little extra because the cost is years of development and billions of dollars evaporating in a few seconds.
And there are software teams in certain industries that dot their I’s and cross their T’s as well.
Even on some dumb CRUD app, if it’s a critical piece of code that the rest of the software hinges upon, you spend a little extra time because the cost of fuck up is so major.
Or you’re launching a product and you have a sign up that will seed your user base, you damn well make sure it works.
It fails like buildings near fault lines, because the ground moves under them. Think broken dependencies, operating system obsolescence, et cetera.
Jokes aside I think it's mostly a value/cost thing. NASA's software has different requirements and failure scenarios than most software developers (in this context I will not call them software engineers) have to care about. Verifiable correctness is harder to predict, and in most devs' roles it's easier to just try something and see what happens, rather than know what'll happen up front.
My impression from friends working in other engineering disciplines is that software engineering works similarly to other fields: the more human lives are at risk, the more testing, redundancy, etc. gets applied.
An apropos and famous example is the Ariane 5 maiden-flight failure. The same validated software from the Ariane 4 was reused, but the hardware design changed: the Ariane 5's horizontal velocity exceeded its predecessor's, overflowing the 16-bit signed variable the value was converted into.
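In the actual Ada flight software the out-of-range conversion raised an unhandled Operand Error; the sketch below just shows why a value valid on the Ariane 4 can't survive an unchecked conversion into a 16-bit signed integer (the specific numbers are illustrative, not the real telemetry):

```python
import struct


def to_int16(value: float) -> int:
    """Truncate a float and reinterpret its low 16 bits as a signed
    16-bit integer, mimicking an unchecked float-to-int16 conversion."""
    return struct.unpack("<h", struct.pack("<H", int(value) & 0xFFFF))[0]


# Within the Ariane 4's flight envelope the conversion is harmless:
print(to_int16(20000.0))   # 20000

# A value outside the int16 range (-32768..32767), as produced by the
# Ariane 5's steeper trajectory, silently wraps to garbage instead:
print(to_int16(40000.0))   # -25536
```

Same code, same tests, different hardware envelope: the "validated" label only held for the old input range.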
Standardisation - in the big 'E' Engineering world, there would be a recognised international standard for Web Apps that ensured/enforced that all Web Apps supported this functionality, or they would not be approved for use.
Another factor is Accountability. A senior Software 'Engineer' would have to take personal responsibility (liability, accountability) that the software product they are producing and/or overseeing met all these requirements and personally sign off that these standards have been met. If the product were to fail at any point and it was determined that the cause was negligence in following the standard, any damages sought (not common, but not unheard of) would ultimately find their way to the accountable individual and their insurance.
In cases where budgets/importance don't allow for this level of scrutiny, there would still be paperwork signed by the producer of the software and the client acknowledging deviation from the standard and waiving any recourse for doing so.
it's a lower bar for entry, any kid can run a compiler -- it's harder to acquire a bulldozer and planning permits.
similarly if you look at 'diy' or 'garage' engineering you can find all sorts of hazardous/poorly-built/never-should-have-existed death traps. How many people have died in recent years from fractal burning?
it's still engineering -- they're building their own tools -- but it's within a realm (DIY/maker) that historically has undersold the dangers inherent with the things.
Why? Mostly because they're self-taught, mentorless, and without the direction within their education to be taught the importance of engineering rigor, similar to the kid given the compiler who starts making forkbombs and goofy dialogs to prank their friends.
there is totally standardization. At the building block level. TCP/IP, Protocols on top of that, language standards etc.
Web Apps are complex, why would there be a standard? Just like there's no standard for cars, other than for some of their components like wheels or headlights.
See the Australian Design Rules (which happens to form the basis of most UNECE and Canadian transport regulations) if you want to see how detailed they are.
That's what happens when there's no liability. Even if you're a billion dollar corporation, you can just slap some standard legal boilerplate disclaimer on the license agreement and absolve yourself of all responsibility even when hackers pwn the private information of your customers. They can't complain since technically the contract says there were no guarantees.
Some version of this legal boilerplate can be found in virtually all software licenses, including open source ones:
THE SOFTWARE IS PROVIDED "AS IS",
WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT.
IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING
THE SOFTWARE BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY,
WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF
OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
Start holding developers and especially corporations directly accountable and liable for any problems and I guarantee they'll start respecting it the very next second.

To continue the analogy from earlier - standards wouldn't mean all web applications would have to be designed, programmed and work exactly the same way, but it would mean that they would need to be formally tested (to an approved test plan), and, to use your example, would need to demonstrate that each of those layers of fallbacks (as dictated by the standard and covered in the test plan) operates correctly in order to be certified.
If anything, I think software has a huge advantage over physical world engineering in that testing can be replicated at virtually no cost whenever a change is made to the design. I shudder to think how many cars get trashed in order to meet vehicle safety testing requirements.
Here is the Australian Standard for caravan and light trailer towing components, Part 1: Towbars and towing brackets:
https://store.standards.org.au/product/as-4177-1-2004
There are thousands of these documents covering everything to do with transport from the vehicles to the reflectivity of street signs.
The regulation (at least in my state) is that only engineers who are registered as Registered Engineers are permitted to carry out professional engineering services in this state.