zlacker

1. lxgr (OP) 2026-02-03 15:44:19
Yes, pretty much.

LLM-powered agents are surprisingly human-like in their errors and misconceptions about new or less-than-ubiquitous tools. Skills are basically just small how-to files, sometimes combined with usage examples, helper scripts, and so on.
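As a rough illustration (the layout and frontmatter fields vary by agent framework, and the skill name, commands, and helper-script path here are made up for the example), a skill can be as simple as a single markdown file the agent loads on demand:

```markdown
---
name: resize-images
description: Resize and convert image files. Use when the user asks to scale, crop, or convert images.
---

# Resizing images

1. Check that ImageMagick is available: `magick -version`.
2. Simple resize: `magick input.png -resize 800x600 output.png`.
3. For batch jobs, run the bundled helper: `scripts/batch_resize.sh <dir> <size>`.
```

The how-to text fills in exactly the kind of tool-specific knowledge the model would otherwise guess at.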
