LLMs have been hollowing out the mid and lower end of engineering, but they haven't eroded the highest end. Otherwise the LLM companies wouldn't pay for talent; they'd just use their own models.
I'm going to give an example from a piece of software with multiple processes.
Humans can imagine scenarios where a process can break. Claude can do this too, but only when the breakage originates inside the process, and only if you specify it. It cannot identify future issues caused by a separate process unless you explicitly describe that external process, the fact that it could interact with your original process, and the ways in which it can interact.
Identifying these is the skill of a developer. You could say you can document all these cases and let the agent do the coding, but here's the kicker: you only discover these issues once you start coding by hand. You go through the variables and function calls and suddenly remember that a process elsewhere changes or depends on these values.
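A minimal sketch of the kind of interaction I mean (all names here are hypothetical, invented for illustration): a worker increments a counter in a file, and a separate cleanup process also writes that file. Each piece looks correct in isolation; the bug only exists in the interleaving, which the simulation below plays out deterministically.

```python
import json
import os
import tempfile

# A worker that reads a counter file, increments it, and writes it
# back. Reviewed on its own, it looks perfectly fine.
def increment(path):
    with open(path) as f:
        value = json.load(f)["count"]          # read
    with open(path, "w") as f:
        json.dump({"count": value + 1}, f)     # write back

# A separate process (say, a nightly cleanup job) that also writes
# the same file, unaware of in-flight increments.
def reset(path):
    with open(path, "w") as f:
        json.dump({"count": 0}, f)

# Deterministically simulate the unlucky interleaving: the worker
# reads, the cleanup job runs, then the worker's stale write lands.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    json.dump({"count": 41}, f)

with open(path) as f:
    stale = json.load(f)["count"]              # worker reads 41
reset(path)                                    # cleanup resets to 0
with open(path, "w") as f:
    json.dump({"count": stale + 1}, f)         # worker's stale write lands

with open(path) as f:
    final = json.load(f)["count"]
os.remove(path)
print(final)  # 42: the reset silently vanished
```

Neither function is wrong by itself; the failure lives entirely in the interaction, which is exactly the knowledge an agent lacks unless you hand it the cleanup job's existence up front.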
Unit tests could catch them in a decently architected system, but those tests need to be defined by the person doing the coding. And if the architect himself is using AI, because why not, it's doomed from the start.