Amplifiers, rather than replacements. I think the community at large still thinks LLMs and agents are gonna be "replacing" knowledge work, which I think is far from the truth.
I agree, however, on the point that having no prior software engineering skills would make this much more difficult.
So the first day or two, each change takes 20-30 minutes. The next day it's 30-40 minutes per change, the day after up to an hour, and so on, as the requirements start to interact with each other and with the ball of spaghetti they've composed and are now trying to change without breaking other parts.
Contrast that with when you really own the code and design: then you can keep going for weeks, with every change taking 20-30 minutes, as on day one. But that also means I'm paying attention to what's going on, so no vibe-coding, but pair programming with LLMs, and it also requires you to understand the domain, what you're actually aiming for, and the basics of design/architecture.
I built other things too which would not be considered trivial or "simple", or as you say, are architecturally complex, and they involve very domain-specific knowledge about programming languages, compilers, ASTs, databases, high-performance optimizations, etc. And I've never felt this productive, tbh. If I were to set up a company around this, which I believe I could, in the pre-LLM era I'd quite literally have to hire 3-5 experienced engineers with sufficient domain expertise to build this together with me - and I mean for the concrete work I've done in ~2 weeks, not some hypothetical potential.
I feel like you have missed emsh's point, which is that AI agents become significantly muddled if your project is complex.
I feel the same way personally. If I don't know how the pieces of AI-written code interact with each other, frustration builds as the project continues, precisely because of what they describe: changes take less time at first, then longer and longer, with errors the AI missed, etc.
I personally vibe-code projects too, but I will admit this failure mode is real.
I have this feeling that anything really complex will fall apart if complexity grows a lot or you don't unclog the slop.
This is also why we are seeing "AI slop janitors": humans whose task is to unsloppify the slop.
Personally, I have this intuition that AI will create really good small products, there is no denying that. But those products were already un-monetizable, or if they were monetizable, they were really easy to replicate even in the past; AI probably just lowered the friction.
Now, if your project is something commercial and large, I don't know how much people can trust AI slop. At some point, if people depend on your project while it has these issues - and people can often tell whether a project is AI-generated or not - then that would have its own issues too.
And I am speaking from experience, after building something like WHMCS in Go with AI. At first I was surprised, and it felt good enough for my own personal use case (gvisor) and maybe some really small providers. But then I wanted it to, say, hook into Proxmox, connect the tmate server to an API to make re-opening sessions easier, support live migration from one box to another, and create drivers for the custom firecrackers-ssh idea that I had implemented, once again using AI.
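To make that concrete, here's a minimal sketch in Go of the kind of driver abstraction such a panel grows into. Every name here (Driver, VMSpec, firecrackerDriver, ...) is made up for illustration, not from my actual code; the point is that each new backend or capability widens a contract that every existing piece has to keep satisfying.

```go
// Hypothetical sketch of a provisioning-driver registry for a
// WHMCS-like panel. All names are invented for illustration.
package main

import (
	"context"
	"errors"
	"fmt"
)

type VMID string

type VMSpec struct {
	CPUs     int
	MemoryMB int
	Image    string
}

// Driver is the contract every backend must satisfy. Adding a method
// here (console access, snapshots, ...) forces changes in every
// existing driver - this is where the complexity compounds.
type Driver interface {
	Provision(ctx context.Context, spec VMSpec) (VMID, error)
	Destroy(ctx context.Context, id VMID) error
	// Migrate moves a running VM to another host. Some backends
	// simply can't do this, so callers must handle the error.
	Migrate(ctx context.Context, id VMID, targetHost string) error
}

// firecrackerDriver is a stub standing in for a real integration.
type firecrackerDriver struct{}

func (firecrackerDriver) Provision(ctx context.Context, spec VMSpec) (VMID, error) {
	return VMID(fmt.Sprintf("fc-%s-%dcpu", spec.Image, spec.CPUs)), nil
}

func (firecrackerDriver) Destroy(ctx context.Context, id VMID) error { return nil }

func (firecrackerDriver) Migrate(ctx context.Context, id VMID, host string) error {
	return errors.New("firecracker backend: live migration not supported")
}

var registry = map[string]Driver{
	"firecracker": firecrackerDriver{},
	// "proxmox": proxmoxDriver{}, // each new backend multiplies the test matrix
}

func main() {
	d := registry["firecracker"]
	id, _ := d.Provision(context.Background(), VMSpec{CPUs: 2, MemoryMB: 512, Image: "debian12"})
	fmt.Println("provisioned:", id)
	fmt.Println("migrate:", d.Migrate(context.Background(), id, "host-b"))
}
```

Each new integration isn't just one more implementation of this interface; it's one more set of edge cases (unsupported operations, different failure modes) that the AI has to keep consistent across the whole codebase.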
One can realize how quickly complexity compounds in projects, and how, as emsh points out, it becomes exponentially harder to use AI.