zlacker

[return to "Coding assistants are solving the wrong problem"]
1. micw+wh[view] [source] 2026-02-03 07:08:33
>>jinhku+(OP)
For me, AI is an enabler for things you couldn't do otherwise (or that would take many weeks of learning). But you still need to know how to do things properly in general; otherwise the results are bad.

E.g. I've been a software architect and developer for many years, so I already know how to build software, but I'm not familiar with every language or framework. AI enabled me to write kinds of software I never learned or had time for. E.g. I recently re-implemented an Android widget that hadn't been updated by its original author in a decade, and I fixed a bug in a Linux scanner driver. I couldn't have done either of these properly (within an acceptable time frame) without AI. But I also couldn't have done either of them properly without my knowledge and experience, even with AI.

Same for daily tasks at work. AI makes me faster here, but it also lets me do more. Implement tests for all edge cases? Sure, always; before, I'd skip that to save time. More code reviews. More documentation. Better quality in the same (always limited) time.

2. joshbe+Zk[view] [source] 2026-02-03 07:39:12
>>micw+wh
I'm in the same boat. I've been taking on much more ambitious projects both at work and personally by collaborating with LLMs. There are many tasks that I know I could do myself but would require a ton of trial and error.

I've found that giving the LLMs the input and output interfaces really helps keep them on rails, while I stay involved in the overall process instead of just blindly "vibe coding."
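Roughly what this looks like in practice (a minimal sketch; the names and domain are illustrative, not from any particular project): pin down the I/O types yourself, then ask the LLM to implement only against that fixed signature.

```typescript
// Contract written by hand up front: the LLM never gets to change these.
interface InvoiceLine {
  sku: string;
  quantity: number;
  unitPriceCents: number;
}

interface InvoiceTotal {
  subtotalCents: number;
  taxCents: number;
  totalCents: number;
}

// The LLM fills in only this body; the fixed signature keeps it on rails.
function totalInvoice(lines: InvoiceLine[], taxRate: number): InvoiceTotal {
  const subtotalCents = lines.reduce(
    (sum, l) => sum + l.quantity * l.unitPriceCents,
    0,
  );
  const taxCents = Math.round(subtotalCents * taxRate);
  return { subtotalCents, taxCents, totalCents: subtotalCents + taxCents };
}
```

Because the types are fixed, a wrong implementation tends to fail the compiler or the tests instead of silently drifting the interface.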

Having the AI also help with unit tests around business logic has been super helpful, in addition to manual testing as usual. It feels like our overall velocity and code quality have been going up regardless of what some of these articles are saying.
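For concreteness, a sketch of the kind of edge-case table an LLM is good at drafting quickly (the function and cases are hypothetical; the point is that the tests are plain code you review like anything else):

```typescript
// Small piece of business logic under test.
function applyDiscount(priceCents: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError("percent out of range");
  }
  return Math.round((priceCents * (100 - percent)) / 100);
}

// Edge cases enumerated as a simple [price, percent, expected] table.
const cases: Array<[number, number, number]> = [
  [1000, 0, 1000], // no discount
  [1000, 100, 0],  // full discount
  [999, 50, 500],  // rounding: 499.5 rounds up
  [0, 25, 0],      // zero price
];

for (const [price, pct, want] of cases) {
  const got = applyDiscount(price, pct);
  if (got !== want) {
    throw new Error(`applyDiscount(${price}, ${pct}) = ${got}, want ${want}`);
  }
}
```

The table format makes it cheap to eyeball whether the LLM's cases actually cover the boundaries you care about.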

3. jinhku+sY3[view] [source] 2026-02-04 05:13:59
>>joshbe+Zk
How granular do you go with the interfaces? Full function signatures + types, or more like module-level contracts?

I'm wondering what sort of artifacts beyond ADRs/natural-language prompts help steer LLMs to do the right thing.
