Whenever I read posts like this, they're always fairly anecdotal. Sometimes there are even posts about how large refactor X unlocked new capability Y, but the rationale always reads somewhat retconned (or, again, anecdotal*). It seems to me that continuous, more rigorous meta-analysis of one's own codebases could have real utility?
I'd imagine automated code-smell checkers can only cover so much of that, at least.
* I hammer on about anecdotes, but I do recognize that sentiment matters. For example, when you're planning work, if something merely sounds like a lot of work, that alone will shape the plan, even if that judgement is incorrect (a misjudgment that may never come to light).
We do the work that's too large in scope for other teams to handle, and clearly documenting and enforcing best practices is one component of that: partly maintaining a comprehensive linting suite, partly writing documentation and educating developers. We also maintain core libraries and APIs, so when we notice many teams doing the same thing in different ways, we'll sit down and figure out what we can build to accommodate most of those use cases.
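For a concrete sketch of what one of those lint rules might look like (this is hypothetical, assuming an ESLint-based suite; the rule, its message, and the apiClient wrapper it points to are all made up for illustration):

```typescript
// Hypothetical custom ESLint rule: steer teams away from raw fetch() calls
// and toward a shared apiClient library, so cross-cutting concerns (auth,
// retries, telemetry) live in one place.
import type { Rule } from "eslint";

const noRawFetch: Rule.RuleModule = {
  meta: {
    type: "suggestion",
    docs: {
      description:
        "use the shared apiClient instead of calling fetch() directly",
    },
    messages: {
      useApiClient: "Use the shared apiClient wrapper instead of raw fetch().",
    },
    schema: [], // no options
  },
  create(context) {
    return {
      // Visit every call expression and flag bare `fetch(...)` calls.
      CallExpression(node) {
        if (
          node.callee.type === "Identifier" &&
          node.callee.name === "fetch"
        ) {
          context.report({ node, messageId: "useApiClient" });
        }
      },
    };
  },
};

export default noRawFetch;
```

Shipped in a shared plugin, a rule like this turns "please use the shared client" from a doc page into a check that runs on every build.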