Because I can ship 2x to 5x more code with nearly the same quality.
My employer isn't paying me to be a craftsman. They're paying me to ship things that make them money.
Either way, LLMs are actually high up the quality spectrum: they generate a very consistent style of code for everyone. That uniformity is good when other developers have to read and troubleshoot the code.
This definition limits the number of problems you can solve this way. It basically means a buildup of technical debt: good enough for throwaway code, unacceptable as a long-term strategy (a growth killer for scale-ups).
>Either way, LLMs are actually high up the quality spectrum
This is not what I saw; it's certainly not great. But that may depend on the stack.
By the time the AI is actually writing code, I've already had it do a robust architecture evaluation and review, which it documents in a development plan. I review that development plan just like I'd review another engineer's dev plan. It's pretty hard for it to write objectively bad code after that step.
Also, my day to day work is in an existing code base. Nearly every feature I build has existing patterns or reference code. LLMs do extremely well when you tell them "Build X feature. [some class] provides a similar implementation. Review that before starting." If I think something needs to be DRY'd up or refactored, I ask it to do that.
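To make that concrete, here's roughly what such a prompt looks like (the feature and class names here are made up for illustration):

    Build a CSV export for the monthly reports page. ReportPdfExporter provides a
    similar implementation. Review that class before starting and follow the same
    service/controller split. If anything needs to be DRY'd up along the way,
    propose the refactor before writing code.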
I've found LLMs tend to struggle getting a codebase from 0 to 1. They tend to swap between major approaches somewhat arbitrarily.
In an existing code base, it's very easy to ground them in examples and pattern matching.