zlacker

[parent] [thread] 4 comments
1. OtherS+(OP)[view] [source] 2026-02-03 15:32:54
I use Claude pretty extensively on a 2.5m loc codebase, and it's pretty decent at just reading the relevant readme docs & docstrings to figure out what's what. Those docs were written for human audiences years (sometimes decades) ago.

I'm very curious to know the size & state of a codebase where skills are beneficial over just having good information hierarchy for your documentation.

replies(2): >>pertym+Xu >>SOLAR_+t02
2. pertym+Xu[view] [source] 2026-02-03 17:37:45
>>OtherS+(OP)
Skills are more than code documentation. They can apply to anything that the model has to do, outside of coding.
3. SOLAR_+t02[view] [source] 2026-02-04 01:06:01
>>OtherS+(OP)
Claude can always self-discover its own context. The question is whether it's more efficient to have it grepping and ls-ing and randomly poking around to build a half-baked context, or whether a tailor-made, dynamic context injection can speed that up.

In other words: run an identical prompt on a test task that requires deeply discovering how your codebase works, once with the skill and once without. Which one performs better on the following metrics, and by how much? (Rough harness sketch below the list.)

1. Accuracy / completion of the task

2. Wall clock time to execute the task

3. Token consumption of the task
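
For concreteness, here's a minimal sketch of the kind of harness I mean, assuming Claude Code's headless mode. The `-p` / `--output-format json` flags and the output schema are from memory, so treat them as placeholders, and the checkout names and task prompt are invented:

    import json, subprocess, time

    # Hypothetical test task that forces deep codebase discovery.
    TASK = "Explain how request routing works in this repo and list every file involved."

    def run_case(workdir: str) -> dict:
        """Run one headless Claude Code invocation in the given checkout and time it."""
        start = time.monotonic()
        proc = subprocess.run(
            # Assumed CLI flags: -p (print/headless mode), --output-format json.
            ["claude", "-p", TASK, "--output-format", "json"],
            cwd=workdir, capture_output=True, text=True,
        )
        return {
            "workdir": workdir,
            "wall_clock_s": round(time.monotonic() - start, 1),  # metric 2
            # Raw output kept as-is; token counts (metric 3) get pulled out of the
            # JSON afterwards, since the exact field names may vary by CLI version.
            "raw_output": proc.stdout or proc.stderr,
        }

    if __name__ == "__main__":
        # Two otherwise-identical checkouts: one with the skill installed, one without.
        results = [run_case("repo-with-skill"), run_case("repo-without-skill")]
        print(json.dumps(results, indent=2))
        # Accuracy/completion (metric 1) still needs a rubric or a human grader.

Run each case a handful of times, since agent runs are noisy.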

replies(1): >>croon+uq3
4. croon+uq3[view] [source] [discussion] 2026-02-04 13:13:29
>>SOLAR_+t02
It's not about one with a skill and one without, but about one with a skill vs. one with regular old human documentation for the stuff you need to know to work on a repo/project. Or, for an even more accurate comparison: take the skill, don't load it as a skill, and just put it in the repo as plain context.

I think the main conflict in this thread is whether skills are anything more than structuring the documentation your repo was lacking anyway, regardless of whether it was written for Claude or for Steve starting from scratch.

replies(1): >>SOLAR_+s47
5. SOLAR_+s47[view] [source] [discussion] 2026-02-05 13:31:10
>>croon+uq3
Well, the key difference is that one is auto-injected into your context for dynamic lookup, while the other is only loaded on demand, contingent on the LLM discovering it in the first place.

That difference alone likely accounts for some not insignificant discrepancies. But without numbers, it's hard to say.
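
Concretely, my mental model of the difference (paths and frontmatter per my reading of the skills docs; names invented):

    .claude/skills/payments/SKILL.md
        ---
        name: payments
        description: How billing, invoicing and the ledger fit together;
          use when working under src/payments/.
        ---
        ...full walkthrough, only read once Claude decides the skill applies...

    docs/payments.md
        ...the same walkthrough, but Claude only sees it if it greps/ls-es
        its way to the file on its own...

The name + description are what get injected up front for the skill; the plain doc file gets nothing injected at all, which is the discovery gap I mean.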
