Frameworks and compilers are designed to be leak-proof abstractions. Any way in which they deviate from their abstract promise is a bug that can be found, filed, and permanently fixed. You get to spend your time and energy reasoning in terms of the abstraction because you can trust that the finished product behaves exactly the way you reasoned at the abstract level.
LLMs cannot offer that promise by design, so it remains your job to find and fix any deviations from the abstraction you intended. If you fall short of finding and fixing any of those bugs, you've left yourself a potential crisis down the line.
[Aside: I get why that's acceptable in many domains, and I hope in return people can get why it's not acceptable in many other domains]
All our decades of progress in programming languages, frameworks, libraries, and so on have gone into building leak-proof abstractions, so that programmer intent can be focused on the unique and interesting parts of a problem while the other details get the best available (or at least most widely applicable) implementation. In many ways we've succeeded, even though in many ways it looks like progress has stalled. LLMs have not solved this; they've just given up on the leak-proof part of the problem, trading it for exactly the costs and risks the industry was trying to avoid by solving it properly.
If my C compiler sometimes worked and sometimes didn't, I would just mash compile like an ape until it started working.
LLMs are clumsy interns now, and very leaky. But we know human experts can be leak-proof. Why can't LLMs get there too: better at coding, at understanding your intentions, at automatically reviewing for deviations, and so on?
Thought experiment: could you work well with a team of human experts just below your level? Then you should be able to work well with future LLMs.