zlacker

[return to "Coding assistants are solving the wrong problem"]
1. micw+wh 2026-02-03 07:08:33
>>jinhku+(OP)
For me, AI is an enabler for things you couldn't do otherwise (or that would take many weeks of learning). But you still need to know how to do things properly in general; otherwise the results are bad.

E.g. I've been a software architect and developer for many years, so I already know how to build software, but I'm not familiar with every language or framework. AI enabled me to write kinds of software I never learned or had time for. For example, I recently re-implemented an Android widget that had not been updated for a decade by its original author, and I fixed a bug in a Linux scanner driver. I could have done none of these properly (within an acceptable time frame) without AI. But I also could have done none of them properly without my knowledge and experience, even with AI.

Same for daily tasks at work. AI makes me faster here, but it also lets me do more. Implement tests for all edge cases? Sure, always; before, I skipped that to save time. More code reviews. More documentation. Better quality in the same (always limited) time.

2. mirsad+ol 2026-02-03 07:42:34
>>micw+wh
I use Claude Code a lot, but one thing that really concerned me was when I asked it about some ideas I've had in an area I'm very familiar with. Its response was to constantly steer me away from what I wanted to do toward something else that was fine but a mediocre way to do things. It made me question how many times I've let it go off and do stuff without checking it thoroughly.
3. ozlike+4n 2026-02-03 07:55:11
>>mirsad+ol
Call me a conspiracy theorist, and granted, much of this could be attributed to the fact that the majority of code in existence is shit, but I'm convinced that these models are trained and encouraged to produce code that is difficult for humans to work on, further driving and cementing the usage of them when you inevitably have to come back and fix it.
4. except+dp 2026-02-03 08:11:26
>>ozlike+4n
I don't think they'd be able to build an LLM without these flaws. The problem is that an LLM cannot distinguish sense from nonsense in a logical way. If you train an LLM on a lot of sensible material, it will try to reproduce it by matching training-material context against prompt context. The system does not work on the basis of logical principles, but it can sound intelligent.

I think LLM producers can improve their models by quite a margin if customers train the LLM for free, meaning: if people correct the LLM, the companies can use the session context plus the feedback as training data. This enables more convincing responses for finer nuances of context, but it still does not work on logical principles.

LLM interaction with customers might become the real learning phase. This doesn't bode well for players late in the game.
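To make that concrete, here's a minimal sketch of what harvesting corrections could look like (Python; the function name and the chosen/rejected record format are illustrative assumptions, not any vendor's actual pipeline):

  # Hypothetical sketch: turning a corrected chat session into a training
  # example. Names and the record format are assumptions for illustration.
  import json

  def session_to_training_example(messages, correction):
      """Pair the conversation up to the flawed reply with the user's fix.

      messages   -- list of {"role": ..., "content": ...} dicts, ending
                    with the assistant reply the user objected to
      correction -- the user's corrected/preferred answer (free text)
      """
      return {
          "prompt": messages[:-1],               # context before the bad reply
          "rejected": messages[-1]["content"],   # what the model said
          "chosen": correction,                  # what the user said instead
      }

  session = [
      {"role": "user", "content": "Does Foo.bar() exist in v2?"},
      {"role": "assistant", "content": "No, Foo.bar() was removed in v2."},
  ]
  example = session_to_training_example(
      session, "Foo.bar() still exists in v2; it was only deprecated.")
  # One JSONL line per corrected session, usable as a preference pair
  # (e.g. for DPO-style tuning).
  print(json.dumps(example))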

5. andrek+UFa 2026-02-06 00:37:56
>>except+dp

  > if people correct the LLM, the companies can use the session context + feedback as training
it definitely seems that way; just the other day coderabbit was asking me where I found x when it had told me x didn't exist...

  > LLM interaction with customers might become the real learning phase.
sometimes I wonder why I pay for this if I'm supposed to be training this thing...