If AI gets to that level we will indeed have a sea change. But I think the current models, at least as far as I've seen, leave it an open question whether they'll ever get there.
I'm not much of a user of LLMs for generating code myself, but this particular analogy isn't a great fit. The key difference is that compiler output is deterministic, or at least repeatable, whereas LLMs have some randomness thrown in intentionally.
With that said, both can give you unexpected behavior, just in different ways.
Or if I'm wrong about the last bit, maybe it never was important.
Most of the people I work with, however, just understand the framework they are writing in and display very little understanding of, or even curiosity about, what is going on beneath the first layer of abstraction. Typically this leaves them high and dry when debugging errors.
Anecdotally, I see a lot more people with shallow expertise believing the AI hype.
As a teen I used to play around with Core Wars, and my high school taught 8086 assembly. I think I got a decent grasp of it, enough to implement quicksort in 8086 while sitting through a very boring class, and test it in the simulator later.
I mean, probably few people ever need to use it for something serious, but that doesn't mean they don't understand it.
The abstraction over assembly language is solid; compilers very rarely (if ever) fail to translate high-level code into the correct assembly code.
LLMs are nowhere near the level where you can have almost 100% assurance that they do what you want and expect, even with a lot of hand-holding. They are not even a leaky abstraction; they are an "abstraction" with gaping holes.
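As a rough sketch of how checkable that abstraction is (assuming gcc on the path; the function name is just for illustration):

    /* square.c -- a trivial function whose translation you can inspect directly.
     * Compile with: gcc -O2 -S square.c
     * and read the generated square.s. The same compiler, flags, and source
     * produce the same handful of instructions every time, which is what
     * makes the abstraction dependable. */
    int square(int x) {
        return x * x;
    }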
If all of mankind lost all understanding of registers overnight, it'd still affect modern programming (eventually)
Unexpected as in "I didn't know" is different from unexpected as in "I can't predict". GCC optimizations are in the former camp; if you care to know, you just need to do a deep dive into your CPU architecture and the gcc docs and codebase. LLMs are a true shot in the dark, with a high chance of a miss and a slightly lower chance of friendly fire.
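A concrete (hypothetical) example of the "I didn't know" camp, assuming gcc at -O2:

    /* The generated code can surprise you, but it's knowable. GCC typically
     * compiles this division by a constant into a multiply-and-shift sequence
     * instead of a div instruction. Reading the output (gcc -O2 -S) or the
     * docs tells you exactly why, and the answer is the same on the next build. */
    unsigned by_ten(unsigned x) {
        return x / 10u;
    }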
Nah, treating undefined behavior as predictable is a fool's errand. It's also a shot in the dark.
When talking about undefined behavior, the only documentation is a shrug emoticon. If you want a working program, arbitrary undocumented behavior is just as bad as incorrectly documented behavior.
And despite the triggers being documented, they're very very hard to avoid completely.
It's effectively impossible to rely on. Checking at the time of coding, or occasionally spot checking, still leaves you at massive risk of bugs or security flaws. It falls under "I can't predict".
The same thing happens with optimization. The docs usually warn about the gotchas, and it's easy to check whether the errors will bother you or not. You don't have to make an exhaustive list of errors when the classes are neatly defined.
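For example (a standard documented gotcha, nothing exotic): -ffast-math trades strict IEEE semantics for speed, and the GCC manual says so up front.

    /* With -ffast-math, GCC may reassociate the floating-point additions below,
     * so the sum can differ slightly from strict left-to-right IEEE evaluation.
     * That's a documented, well-defined class of error: you can decide up front
     * whether it matters for your program. */
    double sum(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }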
But I'm talking about code that has undefined behavior. If there is any at all, you can't reliably learn which optimizations will or won't happen, or which ones will break your code. And you can't check for incorrect optimization in any meaningful way, because the result can change at any point in the future for the tiniest of reasons. You can try to avoid this situation, but again it's very hard to write code that has exactly zero undefined behavior.
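A minimal sketch of what that looks like in practice (the classic signed-overflow check; exact behavior depends on compiler version and flags):

    #include <limits.h>
    #include <stdio.h>

    /* Signed overflow is undefined behavior in C, so at -O2 GCC is allowed to
     * assume x + 1 never wraps and may compile this whole check down to
     * "return 0", silently removing it. */
    int will_overflow(int x) {
        return x + 1 < x;   /* UB when x == INT_MAX */
    }

    int main(void) {
        /* May print 1 at -O0 and 0 at -O2; neither result is guaranteed,
         * and it can change with the next compiler release. */
        printf("%d\n", will_overflow(INT_MAX));
        return 0;
    }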
When you talked about doing "a deep dive in your CPU architecture and the gcc docs and codebase", that is only necessary if you do have undefined behavior and you're trying to figure out what actually happens. But it's a waste of effort here. If you don't have UB, you don't need to do that. If you do have UB it's not enough, not nearly enough. It's useful for debugging but it won't predict whether your code is safe into the future.
To put it another way: if we're looking at optimizations and their listed gotchas, then when there's UB it's as if half the optimizations in the entire compiler were annotated with "this could break badly if there's UB". You can't predict it.