Agent Skills

1. Soeren+1c 2026-02-03 15:10:01
>>moored+(OP)
The observation about agents not using skills without being explicitly asked resonates. In practice, I've found success treating skills as explicit "workflows" rather than background context.

The pattern that works: skills that represent complete, self-contained sequences - "do X, then Y, then Z, then verify" - with clear trigger conditions. The agent recognizes these as distinct modes of operation rather than optional reference material.

What doesn't work: skills as general guidelines or "best practices" documents. These get lost in context or ignored entirely because the agent has no clear signal for when to apply them.

The mental model shift: think of skills less like documentation and more like subroutines you'd explicitly invoke. If you wouldn't write a function for it, it probably shouldn't be a skill.
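
To make the "subroutine" framing concrete, here's a minimal sketch of a workflow-style SKILL.md. The name/description frontmatter is Claude Code's documented convention; the skill itself (a release checklist) and its steps are invented for illustration:

```markdown
---
name: release-check
description: Run the pre-release verification workflow. Use when the user
  asks to prepare, validate, or cut a release.
---

# Release check

Execute these steps in order; do not skip verification.

1. Run the test suite and confirm it passes.
2. Build the release artifact.
3. Check that the artifact version matches the latest changelog entry.
4. Report the outcome, listing any step that failed.
```

The description doubles as the trigger condition: it tells the agent when this mode of operation applies, which is exactly the signal that guidelines-style skills lack.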

2. smithk+8f 2026-02-03 15:24:04
>>Soeren+1c
That raises the question of what value a "skill" adds over a "command". Claude Code supports both, and it's not entirely clear to me when to use one vs the other - especially if skills work best as, well, commands.
3. opencl+yW1 2026-02-03 23:01:15
>>smithk+8f
The practical distinction I've found: commands are atomic operations (lint, format, deploy), while skills encode multi-step decision trees ("implement feature X" which might involve reading context, planning, editing multiple files, then validating).
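
On disk the split looks roughly like this (the file names are made up; `.claude/commands/` and `.claude/skills/` are the standard locations, as far as I know):

```
.claude/
  commands/
    lint.md        # atomic: one prompt, invoked explicitly as /lint
    deploy.md      # atomic: /deploy
  skills/
    implement-feature/
      SKILL.md     # frontmatter metadata + multi-step instructions
      plan.md      # supporting file, loaded only if the skill needs it
```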

For context window management, skills shine when you need progressive disclosure - load only the metadata initially, then pull in the full instructions when invoked. This matters when you have 20+ capabilities competing for limited context.
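
A minimal sketch of that two-stage loading, assuming skills live under `.claude/skills/<name>/SKILL.md` with YAML frontmatter (the registry is invented for illustration, not how Claude Code actually implements it):

```python
import re
from pathlib import Path

# A SKILL.md is frontmatter ("---\n...\n---") followed by the instruction body.
FRONTMATTER = re.compile(r"^---\n(.*?)\n---\n(.*)$", re.DOTALL)

def load_metadata(skills_dir: Path) -> dict[str, str]:
    """Stage 1: put only the name/description lines in context - a few per skill."""
    index = {}
    for skill_file in skills_dir.glob("*/SKILL.md"):
        match = FRONTMATTER.match(skill_file.read_text())
        if match:
            index[skill_file.parent.name] = match.group(1)
    return index

def invoke(skills_dir: Path, name: str) -> str:
    """Stage 2: pay the context cost of the full instructions only on invocation."""
    text = (skills_dir / name / "SKILL.md").read_text()
    match = FRONTMATTER.match(text)
    return match.group(2) if match else text

skills = Path(".claude/skills")
print(load_metadata(skills))            # cheap index: ~2 lines per skill
print(invoke(skills, "release-check"))  # hypothetical skill, loaded on demand
```

With 20+ skills the stage-1 index stays at a few hundred tokens, and each full body only lands in context when it's actually invoked.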

That said, the 56% non-invocation rate mentioned elsewhere in this thread suggests the discovery mechanism needs work. Right now "skill as a fancy command" may be the only reliable pattern.
