zlacker

1. iainme+(OP)[view] [source] 2026-02-04 08:03:56
Hmm, that’s a good question! I think a bit of both.

In terms of experience, I’ve noticed that agents don’t always use skills the way you want, and that they’re pretty good at browsing existing code and docs and figuring things out for themselves.

Is this an example of “the bitter lesson”? That’s conjecture, but I think it’s pretty well-founded.

It could well be that specific formats for skills work better because the agents are trained on those specific formats. But if so, I think it’s just a local maximum.

replies(1): >>ashdks+MW1
2. ashdks+MW1[view] [source] 2026-02-04 19:22:34
>>iainme+(OP)
I had a kind of visceral distaste for all this rules/skills stuff when I first heard about it, for similar reasons. This generalized text model can speak base64-encoded Klingon, but a readme.md isn’t good enough?

However, given the reality of limited context windows, current models can’t consider everything in a big repo all at once and stay coherent. Attaching some metadata that tells the model when and how to consider a piece of information (and assisting the model with tooling, written in code, that provides the context at the right time) seems to make a big difference in practice.
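
To make that concrete, here’s a minimal sketch of that kind of metadata-driven loading, assuming each skill is a short description (always visible to the model) plus a longer body that only gets injected when the description matches the task. The skill names and the naive keyword matching are made up purely for illustration, not any particular tool’s format:

    import re
    from dataclasses import dataclass

    @dataclass
    class Skill:
        name: str
        description: str  # metadata: tells us *when* this skill is relevant
        body: str         # full instructions, injected only on demand

    # Hypothetical skills for illustration
    SKILLS = [
        Skill(
            name="db-migrations",
            description="Use when creating or editing database migration files",
            body="Full migration instructions would go here...",
        ),
        Skill(
            name="frontend-styling",
            description="Use when touching CSS or component styles",
            body="Full styling conventions would go here...",
        ),
    ]

    def relevant_skills(task: str) -> list[Skill]:
        """Naive keyword overlap between the task and each skill's
        description; only matching skills get their body injected."""
        task_words = set(re.findall(r"\w+", task.lower()))
        matches = []
        for skill in SKILLS:
            desc_words = set(re.findall(r"\w+", skill.description.lower()))
            if task_words & desc_words:
                matches.append(skill)
        return matches

    # Only the short descriptions are always in context; the bodies are
    # pulled in just for the matching skills, keeping the prompt small.
    context = "\n\n".join(s.body for s in relevant_skills("add a database migration"))
    print(context)

Real agents do something smarter than keyword overlap, but the shape is the same: cheap metadata in context all the time, expensive detail only when it’s needed.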