"the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked."
These two views are by no means mutually exclusive. I find LLMs extremely useful and still believe they are glorified Markov generators.
The takeaway should be that a glorified Markov generator is all you need, and that humans are likely nothing more than that either.
But there have been many cases in my experience where the LLM could not possibly have been simply pattern-matching to something it had seen before. It really did "understand" the meaning of the code by any definition that makes sense to me.
I find it dangerous to say it "understands". People are quick to leap from that to saying it "is sentient by any definition that makes sense to them".
Also, would we say that a compiler "understands" the meaning of the code?