zlacker

1. tshadl (OP) 2026-02-04 20:10:20
> LLMs cannot offer that promise by design, so it remains your job to find and fix any deviations from the abstraction you intended.

LLMs are clumsy interns right now, and very leaky. But we know human experts can be leak-proof, so why can't LLMs get there too: better at coding, at understanding your intentions, at automatically reviewing for deviations, and so on?

Thought experiment: could you work well with a team of human experts just below your own level? If so, you should be able to work well with future LLMs.