Chomsky on what ChatGPT is good for (2023)

1. next_x+Y2 2025-05-25 17:32:06
>>mef+(OP)
Chomsky is always saying that LLMs and such can only imitate language, not understand it. But I wonder if there is a degree of sophistication at which he would concede these machines exceed "imitation". If his point is that LLMs arrive at language in a way different from how humans do... great. But I'm not sure how he can argue that some kind of extremely sophisticated understanding of natural language is not embedded in these models, in a way that at this point exceeds the average human's. In fairness, this was written in 2023, but given his longstanding stubbornness on this topic, I doubt the intervening progress would have changed his mind.
2. mattne+D4 2025-05-25 17:43:40
>>next_x+Y2
I think what would "convince" Chomsky is something more akin to the explainability research currently in its infancy, producing a branch of information theory for language and thought.

Chomsky talks about how the current approach can't tell you what humans are doing, only approximate it. The example he has given in the past: take thousands of hours of footage of falling leaves and train a model to generate new leaf-falling footage, versus producing a model built from gravity, the gas mechanics of air currents, and the air resistance of leaves. The latter representation is distilled down into something that tells you what is actually happening, the end product of scientific inquiry; the former is an opaque simulation, fine for engineering purposes if all you wanted was more leaf-falling footage.
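To make that contrast concrete, here's a minimal sketch (my own illustration with invented leaf numbers, not anything from Chomsky): in the explicit model, every parameter names something about the world, and facts like terminal velocity fall out of the equations rather than being memorized.

```python
# Explicit physics model of a falling leaf: gravity plus linear drag.
# Every constant is inspectable and means something physical.

G = 9.81   # gravitational acceleration, m/s^2
K = 0.02   # linear drag coefficient, kg/s (invented value for a leaf)
M = 0.002  # leaf mass, kg (invented value)

def fall_velocity(t: float, dt: float = 0.001) -> float:
    """Forward-Euler integration of dv/dt = g - (K/M) * v."""
    v = 0.0
    for _ in range(int(t / dt)):
        v += (G - (K / M) * v) * dt
    return v

# The model *explains*: terminal velocity is derivable, not memorized.
terminal = M * G / K  # speed at which drag exactly balances gravity
print(f"v(2.0 s) = {fall_velocity(2.0):.3f} m/s, terminal = {terminal:.3f} m/s")

# The engineering alternative is an opaque function fit to footage,
# something like `frames = trained_net(seed)`: it can generate plausible
# falling leaves, but none of its weights tells you anything about gravity.
```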

So I interpret Chomsky as meaning "Look, these things can be great for engineering purposes, but I am unsatisfied with them for scientific research because they do not explain language to me." He is mostly pushing back against people implying that the field he dedicated much of his life to is obsolete because it isn't being used to engineer new systems anymore, which was never his goal.
