The user of the LLM provides a new input, which may or may not closely match the existing, smudged-together training inputs; the model then produces an output that follows the same general pattern as the outputs one would expect from the training dataset.
We aren't anywhere near general intelligence yet.
Functionally, on many suitably scoped tasks in areas like coding and mathematics, LLMs are already superintelligent relative to most humans, which may be part of why you're having difficulty recognizing that.