That analogy only holds if LLMs can solve novel problems that can be proven not to exist in any form in their training material.
Granted, for most language and programming tasks you don't need the latter (proof the problem is absent from the training data), only the former (the ability to solve it).
It may appear that they are solving novel problems, but given the size of their training sets, they have probably seen them before. There are very few questions a person can come up with that haven't already been asked and answered somewhere.
You can see this in riddles that are obviously in the training set, yet older or lighter models still get wrong. Or in cases where the model gets them right but uses a different method than any found in the training set.
It's famously easier to impress people with soft-sciences speculation than it is to impress the rules of math or compilers.