I also think they tend to be the older ones among us who have seen what happens when it all goes wrong and the stack comes tumbling down, and so want to make sure you don't end up in that position again. This covers all areas of IT, from cyber security to disaster recovery (DR), not just software.
When I have moved between places, I always try to ensure we have a clear set of guidelines in my initial 90-day plan, but it all comes back to the team.
It's been 50/50: some teams are desperate for any change, and others will do everything possible to destroy what you're trying to do. Or you have a leader above who has no idea and goes with the quickest/cheapest option.
The trick is to work this out VERY quickly!
However, when it does go really wrong: I assume most have followed the UK Post Office saga, where software bug(s) sent people to prison, drove some to suicide, etc. https://en.wikipedia.org/wiki/British_Post_Office_scandal
I am pretty sure there would have been a small group (or at least one person) of tech people in there who knew all of this and tried to get it fixed, but were blocked at every level. No proof, but I suspect so.
To the great surprise of my younger self, I have never seen "it all come crashing down", and I honestly believe this is extremely rare in practice (the UK Post Office saga being an exception). It's something senior devs like to imagine will happen but probably won't, and it gets used to scare management and junior devs into doing "something" which may or may not make things better.
Almost universally, I've seen the software slowly improved via a stream of urgent bug fixes with a sprinkle of targeted rewrites. How easy those fixes and rewrites are depends on whether there is a solid software design underneath: poor designs tend to resist real fixes and accumulate complex layers of patches that keep the system working well enough most of the time; good designs tend to require less maintenance overall. Both produce working software, just with different "pain" levels.
Sometimes people make such a big mess you have to burn it down and start over.