Please know that I am asking out of curiosity and do not intend to be disrespectful.
The user of the LLM provides a new input, which may or may not closely match the training inputs that have been smudged together, and the model produces an output that follows the same general patterns as the outputs you would expect to see in its training dataset.
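A toy sketch of that "smudged together patterns" intuition, using a simple bigram model as an analogy (this is not how transformer LLMs actually work, and the corpus and function names here are made up for illustration): the model only ever emits continuations it has seen follow the current word in training, so outputs stay in the general pattern of the training data, and prompts that drift outside it go nowhere.

```python
import random
from collections import defaultdict

# Toy analogy only: a bigram table "smudges together" its training inputs
# and emits outputs in the same general pattern. Real LLMs are transformer
# networks, not lookup tables, but the intuition is loosely similar.
def train(corpus: str) -> dict[str, list[str]]:
    table = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table: dict[str, list[str]], seed: str, length: int = 10) -> str:
    out = [seed]
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:  # the prompt drifted outside the training pattern
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
table = train(corpus)
print(generate(table, "the"))  # e.g. "the cat sat on the rug"
```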
We aren't anywhere near general intelligence yet.
Do I know the code base like the back of my hand? Nope. Can I confidently speak to how certain functions work? Not a chance.
Can I deploy what the business wants? Yep. Can I throw error logs into LLMs and work out the cause of issues? Mostly.
I get that some of you may want to go above and beyond for your company and truly create something beautiful, but guess what: that codebase is theirs. They aren't your family. Get paid and move on.
Functionally, on many suitably scoped tasks in areas like coding and mathematics, LLMs are already superintelligent relative to most humans, which may be part of why you're having difficulty recognizing it.