zlacker

[return to "Coding assistants are solving the wrong problem"]
1. bambax+Bn 2026-02-03 07:59:37
>>jinhku+(OP)
> Unlike their human counterparts, who would escalate a requirements gap to product when necessary, coding assistants are notorious for burying those requirement gaps within hundreds of lines of code

This is the kind of argument that seems true on the surface, but isn't really. An LLM will do what you ask it to do! If you tell it to ask questions and poke holes in your requirements rather than jump straight to code, it will do exactly that, and usually better than a human.

If you then ask it to refactor some code, identify redundancies, or put this or that functionality into a reusable library, it will also do that.

Those critiques of coding assistants are really critiques of "pure vibe coders" who don't know anything and just try to output yet another useless PDF parsing library before they move on to other things.

2. sothat+PO 2026-02-03 11:32:29
>>bambax+Bn
It’s like Anthropic’s own experiment: people who used AI to do their work for them did worse than the control group, but people who used AI to help them understand the problem, brainstorm ideas, and work on their own solution did better.

The way you approach using AI matters a lot, and it is a skill that can be learned.
