zlacker

[return to "Coding assistants are solving the wrong problem"]
1. micw+wh[view] [source] 2026-02-03 07:08:33
>>jinhku+(OP)
For me, AI is an enabler for things you can't do otherwise (or that would take many weeks of learning). But you still need to know how to do things properly in general, otherwise the results are bad.

E.g. I've been a software architect and developer for many years, so I already know how to build software, but I'm not familiar with every language or framework. AI enabled me to write kinds of software I never learned or had time for. E.g. I recently re-implemented an Android widget that hadn't been updated by its original author for a decade. Or I fixed a bug in a Linux scanner driver. I couldn't have done any of these properly (within an acceptable time frame) without AI. But I also couldn't have done any of them properly without my knowledge and experience, even with AI.

Same for daily tasks at work. AI makes me faster here, but it also lets me do more. Implement tests for all edge cases? Sure, always. Before, I saved time by skipping them. More code reviews. More documentation. Better quality in the same (always limited) time.

2. bandra+wH[view] [source] 2026-02-03 10:31:58
>>micw+wh
Huh. I'm extremely skeptical of AI in areas where I don't have expertise, because in areas where I do have expertise I see how much it gets wrong. So it's fine for me to use it in those areas because I can catch the errors, but I can't catch errors in fields I don't have any domain expertise in.
3. perryg+223[view] [source] 2026-02-03 22:18:56
>>bandra+wH
I feel the same way. LLM errors sound most plausible to those who know the least.

On complex topics where I know what I'm talking about, the model output contains so much garbage built on incorrect assumptions.

But on complex topics where I'm out of my element, the output always sounds strangely plausible.

This phenomenon writ large is terrifying.
