zlacker

[return to "Coding assistants are solving the wrong problem"]
1. micw+wh[view] [source] 2026-02-03 07:08:33
>>jinhku+(OP)
For me, AI is an enabler for things you can't do otherwise (or that would take many weeks of learning). But you still need to know how to do things properly in general, otherwise the results are bad.

E.g. I've been a software architect and developer for many years, so I already know how to build software, but I'm not familiar with every language or framework. AI enabled me to write other kinds of software I never learned or had time for. E.g. I recently re-implemented an Android widget that hadn't been updated for a decade by its original author. Or I fixed a bug in a Linux scanner driver. I couldn't have done any of these properly (within an acceptable time frame) without AI. But I also couldn't have done any of them properly without my knowledge and experience, even with AI.

Same for daily tasks at work. AI makes me faster here, but it also lets me do more. Implement tests for all edge cases? Sure, always; I've already saved the time elsewhere. More code reviews. More documentation. Better quality in the same (always limited) time.

◧◩
2. mirsad+ol[view] [source] 2026-02-03 07:42:34
>>micw+wh
I use Claude Code a lot, but one thing that really concerned me was when I asked it about some ideas I'm very familiar with. Its response was to constantly steer me away from what I wanted to do towards something else that was fine but a mediocre way to do things. It made me question how many times I've let it go off and do stuff without checking it thoroughly.
◧◩◪
3. physic+vl[view] [source] 2026-02-03 07:43:42
>>mirsad+ol
I've had quite a bit of the "tell it to do something in a certain way" problem: it does that at first, then after a few messages of corrections and pointers it forgets that constraint.
◧◩◪◨
4. embedd+kJ[view] [source] 2026-02-03 10:49:34
>>physic+vl
> it does that at first, then a few messages of corrections and pointers, it forgets that constraint.

Yup, most models suffer from this. Everyone is raving about million-token context windows, but none of the models can actually get past 20% of that and still give responses as high quality as the very first message.

My whole workflow right now is basically composing prompts outside the agent, letting it run with them, and if something is wrong, restarting the conversation from 0 with a rewritten prompt. None of that "No, what I meant was ..."; instead I rewrite the prompt so the agent essentially solves it without any back and forth, just because of the issue you mention.
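
A minimal sketch of that loop, assuming a hypothetical `agent-cli` that takes a one-shot prompt non-interactively (swap in whatever your actual agent's invocation is):

    #!/usr/bin/env python3
    """Rewrite-the-prompt-and-restart loop: keep the whole task in prompt.md
    and edit that file instead of replying "no, what I meant was ..." in-chat."""
    import subprocess
    from pathlib import Path

    PROMPT_FILE = Path("prompt.md")

    def run_agent(prompt: str) -> None:
        # Placeholder invocation; point this at your real agent CLI.
        subprocess.run(["agent-cli", "--prompt", prompt], check=True)

    while True:
        # Fresh conversation every run, so no stale corrections pile up in context.
        run_agent(PROMPT_FILE.read_text())
        if input("Good enough? [y/N] ").strip().lower() == "y":
            break
        # Fold the correction back into the prompt file, then rerun from zero.
        input(f"Edit {PROMPT_FILE} with the fix, then press Enter to rerun...")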

Seems to happen in Codex, Claude Code, Qwen Coder and Gemini CLI as far as I've tested.

◧◩◪◨⬒
5. jinhku+xV3[view] [source] 2026-02-04 04:48:20
>>embedd+kJ
been experimenting with the same flow as well; it's sort of the motivation behind this project: to streamline the generate code -> detect gaps -> update spec -> implement flow.

curious to hear if you are still seeing code degradation over time?

[go to top]