I had a kind of visceral distaste for all of this rules-and-skills stuff when I first heard about it, for similar reasons. This generalized text model can speak base64-encoded Klingon, but a README.md isn't good enough? However, given the reality of limited context windows, current models can't consider everything in a big repo at once and stay coherent. Attaching metadata to the information that tells the model when and how to consider it (and assisting the model with tooling, written in code, that provides the right context at the right time) seems to make a big difference in practice.
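Concretely, the "metadata" is often just a short frontmatter block on a markdown file. A minimal sketch of a hypothetical skill file, assuming a frontmatter-based format like the ones current agent tools use (the name, description, and rules below are made up for illustration):

```markdown
---
name: db-migrations
description: Use when creating or editing database migration files
---

# Database migrations

- Always generate both an `up` and a `down` step.
- Never modify a migration that has already shipped.
```

The point is that only the cheap part (name and description) sits in context by default; the body gets pulled in only when the description matches the task at hand, which is how a big repo's worth of guidance fits inside a limited window.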