This is true in a sense, but every little papercut at the lower levels of abstraction degrades performance at the higher levels, because the LLM has to spend its effort hacking around jank in the Python interpreter instead of solving the real problem.
Reminds me of the evolutionary debate. What's important is that just because something can learn to adapt doesn't mean it'll find an optimized adaptation, nor that it will continually refine it.
As far as I can tell, AI will only solve problems well where the problem space is properly defined. Most people won't know how to do that.