For the other commentators: as I understand it, Chomsky is talking about well-defined grammars, languages, and production systems. Think Hofstadter's Gödel, Escher, Bach. Not a "folk" understanding of language.
I have no understanding or intuition, not even a fingernail grasp, of how an LLM generates "sentences" that read as though they were created with a generative grammar.
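For what it's worth, here's a toy contrast (in Python, with made-up rules and probabilities, nothing like a real model) between the two styles of generation: a production system expands symbols top-down according to explicit rules, while an LLM just keeps sampling the next token from learned probabilities.

```python
import random

# Toy generative grammar: a sentence is produced by expanding rules top-down.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["cat"], ["dog"]],
    "V":  [["sees"], ["chases"]],
}

def expand(symbol):
    if symbol not in GRAMMAR:                      # terminal word
        return [symbol]
    production = random.choice(GRAMMAR[symbol])    # pick one production rule
    return [word for part in production for word in expand(part)]

print(" ".join(expand("S")))   # e.g. "the cat chases the dog"

# Toy "LLM": no explicit rules, just next-token probabilities conditioned on
# the previous token (a real model conditions on the whole context window).
NEXT = {
    "<s>":    {"the": 1.0},
    "the":    {"cat": 0.5, "dog": 0.5},
    "cat":    {"sees": 0.6, "chases": 0.4},
    "dog":    {"sees": 0.5, "chases": 0.5},
    "sees":   {"the": 0.7, "</s>": 0.3},
    "chases": {"the": 0.7, "</s>": 0.3},
}

def sample_sentence():
    token, words = "<s>", []
    while True:
        options = NEXT[token]
        token = random.choices(list(options), weights=list(options.values()))[0]
        if token == "</s>":
            return " ".join(words)
        words.append(token)

print(sample_sentence())       # e.g. "the dog sees the cat"
```

The second one never consults a grammar at all, yet its output can still look grammatical, which I think is the puzzle you're pointing at.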
Is anyone comparing and contrasting these two different techniques? Being a noob, I wouldn't even know where to start looking.
I've gleaned that some people are using LLMs/GPT to emit abstract syntax trees (vs. a mere stream of tokens) to serve as input for formal grammars (e.g. programming source code). That sounds awesome, and like something I might someday sorta understand.
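I don't know which projects are meant, but the basic move can be sketched with Python's own `ast` module: treat the model's output as source text and check it against the language's formal grammar by parsing it into a tree (the "generated" string here is made up, not real model output).

```python
import ast

# Pretend this string was emitted by a language model. Instead of treating it
# as a flat stream of tokens, parse it against Python's formal grammar.
generated = "total = sum(x * x for x in range(10))"

tree = ast.parse(generated)        # raises SyntaxError if it isn't valid Python
print(ast.dump(tree, indent=2))    # the structured syntax tree (indent= needs Python 3.9+)
```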
I've also gleaned that, given sufficient computing power, training data for future LLMs will use tokenized words (vs. just character sequences), which would bring the two strategies closer...? I have no idea.
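A hand-rolled illustration of the granularities involved (not a real tokenizer; as I understand it, most current LLMs actually use subword tokens, somewhere between characters and whole words):

```python
text = "unbelievable results"

# Character-level tokens: the model sees individual characters.
chars = list(text)

# Word-level tokens: whitespace-split words.
words = text.split()

# Subword tokens: frequent fragments, split here by hand just for illustration.
subwords = ["un", "believ", "able", " results"]

for name, tokens in [("chars", chars), ("words", words), ("subwords", subwords)]:
    print(f"{name:8s} {len(tokens):2d} tokens: {tokens}")
```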
(Am noob, so forgive my poor use of terminology. And poor understanding of the tech, too.)
I don't follow. Aren't those entirely separate things? The most accurate models of anything necessarily account for the underlying mechanisms. Perhaps I don't understand what you mean by "explanatory"?
Specifically, in the case of deep neural networks, we would generally suppose that the network has learned to model the underlying reality. In effect, it is learning the rules of a sufficiently accurate simulation.
But they don't necessarily convey understanding to humans. Prediction is not explanation.
There is a difference between Einstein's General Theory of Relativity and a deep neural network that predicts gravity. The latter is virtually useless for understanding gravity (even if it makes better predictions).
> Specifically, in the case of deep neural networks, we would generally suppose that the network has learned to model the underlying reality. In effect, it is learning the rules of a sufficiently accurate simulation.
No, they just fit surface statistics, not underlying reality. Many physics phenomena were predicted using theories before they were observed; those phenomena would not have been in the training data even though they were part of the underlying reality.
I would dispute this claim. I would argue that as models become more accurate, they necessarily come to more closely resemble the underlying phenomena they seek to model. In other words, as a model more closely matches those "surface statistics", it necessarily more closely resembles the underlying mechanisms that gave rise to them. I'll admit that's just my intuition, though; I don't have any means of rigorously proving such a claim.
I have yet to see an example where a more accurate model was conceptually simpler than the simplest known model at some lower level of accuracy. From an information-theoretic angle, I think it's similar to compression (something that ML also happens to be almost unbelievably good at). Related to this, I've seen it argued somewhere (I don't immediately recall where) that learning, in both the ML and the human sense, amounts to constructing a world model via compression, and that rings true to me.
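A minimal sketch of that compression connection, with made-up numbers: under a Shannon code, a model that predicts the data better also encodes the same data in fewer bits.

```python
import math

# Data: 1000 coin flips from a biased coin that came up heads 900 times.
heads, tails = 900, 100

def code_length_bits(p_heads):
    """Bits needed to encode the whole sequence under a model that assigns
    probability p_heads to heads (Shannon code length, -log2 per outcome)."""
    return -(heads * math.log2(p_heads) + tails * math.log2(1 - p_heads))

print(f"fair-coin model (p=0.5): {code_length_bits(0.5):6.0f} bits")   # ~1000 bits
print(f"better model    (p=0.9): {code_length_bits(0.9):6.0f} bits")   # ~469 bits
# The more accurate model compresses the same observations into fewer bits,
# which is the sense in which learning ~ building a compressed world model.
```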
> Many physics phenomena were predicted using theories before they were observed
Sure, but what leads to those theories? They are invariably the result of attempting to more accurately model the things we can observe. In the process of refining our existing models, we predict new things we've never seen, and those predictions are then used to test the validity of the newly proposed models.